WorldWideScience

Sample records for modeling approach ekma

  1. Environmental monitoring for EKMA modeling of Nashville, Tennessee and Louisville, Kentucky. Final report 1 Jul-19 Sep 81

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, R.R.; Yawn, E.N.; Golaszewski, E.R.

    1981-11-01

    During the period July 1, 1981 through September 15, 1981, ambient air data collection was conducted in the greater Nashville, Tennessee and Louisville, Kentucky metropolitan areas. The data collected included nonmethane organic compounds (NMOC), CH4, NO, NOx, O3, wind direction, and wind speed, and are to be used in city-specific EKMA modeling of the ozone nonattainment areas that encompass these cities. The data were collected under an approved quality control plan with a quality assurance program that provided a quantitative assessment of the precision and accuracy of the validated data.
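
EKMA (Empirical Kinetic Modeling Approach) uses the morning NMOC/NOx ratio measured in a city as one coordinate of an ozone isopleth diagram. As a minimal illustration of that ratio calculation (all readings below are invented, not from the report), one might write:

```python
# Invented 6-9 a.m. readings for one monitoring site
nmoc_ppbc = [410.0, 385.0, 442.0]  # non-methane organic compounds, ppbC
nox_ppb = [38.0, 41.0, 35.0]       # NO + NO2, ppb

def mean(values):
    return sum(values) / len(values)

# The morning NMOC/NOx ratio is one axis of the EKMA isopleth diagram
ratio = mean(nmoc_ppbc) / mean(nox_ppb)
```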

  2. Multiple Model Approaches to Modelling and Control

    DEFF Research Database (Denmark)

    Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human probl...

  3. Multiple Model Approaches to Modelling and Control

    DEFF Research Database (Denmark)

    ...on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating ... of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model...

  4. Model Construct Based Enterprise Model Architecture and Its Modeling Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to support enterprise integration, a model construct based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of a reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC-based modeling approach, a case study with the background of one-kind-product machinery manufacturing enterprises is presented. It is shown that the proposed model construct based enterprise model architecture and modeling approach are practical and efficient.

  5. Hydraulic Modeling of Lock Approaches

    Science.gov (United States)

    2016-08-01

    ...cation was that the guidewall design changed from a solid wall to one on pilings in which water was allowed to flow through and/or under the wall ... develops innovative solutions in civil and military engineering, geospatial sciences, water resources, and environmental sciences for the Army, the ... magnitudes and directions at lock approaches for open river conditions. The meshes were developed using the Surface-water Modeling System. The two...

  6. LP Approach to Statistical Modeling

    OpenAIRE

    Mukhopadhyay, Subhadeep; Parzen, Emanuel

    2014-01-01

    We present an approach to statistical data modeling and exploratory data analysis called `LP Statistical Data Science.' It aims to generalize and unify traditional and novel statistical measures, methods, and exploratory tools. This article outlines fundamental concepts along with real-data examples to illustrate how the `LP Statistical Algorithm' can systematically tackle different varieties of data types, data patterns, and data structures under a coherent theoretical framework. A fundament...

  7. Approaches to Modeling of Recrystallization

    Directory of Open Access Journals (Sweden)

    Håkan Hallberg

    2011-10-01

    Full Text Available Control of the material microstructure in terms of the grain size is a key component in tailoring material properties of metals and alloys and in creating functionally graded materials. To exert this control, reliable and efficient modeling and simulation of the recrystallization process whereby the grain size evolves is vital. The present contribution is a review paper, summarizing the current status of various approaches to modeling grain refinement due to recrystallization. The underlying mechanisms of recrystallization are briefly recollected and different simulation methods are discussed. Analytical and empirical models, continuum mechanical models and discrete methods as well as phase field, vertex and level set models of recrystallization will be considered. Such numerical methods have been reviewed previously, but with the present focus on recrystallization modeling and with a rapidly increasing amount of related publications, an updated review is called for. Advantages and disadvantages of the different methods are discussed in terms of applicability, underlying assumptions, physical relevance, implementation issues and computational efficiency.
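
Among the analytical and empirical models surveyed above, the classical JMAK (Johnson-Mehl-Avrami-Kolmogorov) equation is the simplest description of recrystallization kinetics. A minimal sketch, with arbitrary (not paper-specific) rate constant k and Avrami exponent n:

```python
import math

def jmak_fraction(t, k=0.05, n=2.0):
    """Recrystallized volume fraction X(t) = 1 - exp(-k * t**n) (JMAK model)."""
    return 1.0 - math.exp(-k * t ** n)
```

In practice k and n are calibrated against experimental softening or microstructure data; the exponent n reflects nucleation and growth dimensionality.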

  8. Validation of Modeling Flow Approaching Navigation Locks

    Science.gov (United States)

    2013-08-01

    [Snippet drawn from the report's list of figures: Figure 9, Tools and instrumentation, bracket attached to rail; Figure 10, Tools and instrumentation, direction vernier; Figure 11, Plan A lock approach, upstream approach. Numerical model...]

  9. Model Mapping Approach Based on Ontology Semantics

    Directory of Open Access Journals (Sweden)

    Jinkui Hou

    2013-09-01

    Full Text Available The mapping relations between different models are the foundation for model transformation in model-driven software development. On the basis of ontology semantics, model mappings between different levels are classified by using the structural semantics of modeling languages. The general definition process for mapping relations is explored, and the principles of structure mapping are proposed subsequently. The approach is further illustrated by the mapping relations from the class model of an object-oriented modeling language to C programming codes. The application research shows that the approach provides theoretical guidance for the realization of model mapping, and can thus effectively support model-driven software development.

  10. Learning Action Models: Qualitative Approach

    NARCIS (Netherlands)

    Bolander, T.; Gierasimczuk, N.; van der Hoek, W.; Holliday, W.H.; Wang, W.-F.

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model).

  11. Learning Action Models: Qualitative Approach

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning...

  12. Geometrical approach to fluid models

    NARCIS (Netherlands)

    Kuvshinov, B. N.; Schep, T. J.

    1997-01-01

    Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical...

  14. Model based feature fusion approach

    NARCIS (Netherlands)

    Schwering, P.B.W.

    2001-01-01

    In recent years different sensor data fusion approaches have been analyzed and evaluated in the field of mine detection. In various studies comparisons have been made between different techniques. Although claims can be made for advantages for using certain techniques, until now there has been no si...

  15. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
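
A dynamic EROI function with technological learning can be sketched, purely for illustration, as a standard learning curve in which each doubling of cumulative output cuts the unit energy cost by a fixed fraction. All parameter values below are invented, not taken from the paper:

```python
import math

def eroi_with_learning(cum_output, eroi0=8.0, cum0=1.0, learning_rate=0.2):
    """Illustrative EROI that rises with cumulative output: each doubling of
    output reduces unit energy cost by `learning_rate` (invented numbers)."""
    b = -math.log2(1.0 - learning_rate)  # learning exponent
    return eroi0 * (cum_output / cum0) ** b
```

With a 20% learning rate, every doubling of cumulative output multiplies EROI by 1/(1 - 0.2) = 1.25 under this sketch; declining resource quality would add an opposing term.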

  16. A POMDP approach to Affective Dialogue Modeling

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob; Keller, E.; Marinaro, M.; Bratanic, M.

    2007-01-01

    We propose a novel approach to developing a dialogue model that is able to take into account some aspects of the user's affective state and to act appropriately. Our dialogue model uses a Partially Observable Markov Decision Process approach with observations composed of the observed user's...
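
The core of any POMDP dialogue manager is the Bayesian belief update over hidden user states, here affective ones. A minimal sketch with a made-up two-state user model and invented probabilities (not the paper's model):

```python
def belief_update(b, a, o, T, O, states):
    """Bayesian belief update: b'(s') ~ O[(o, s', a)] * sum_s T[(s', s, a)] * b[s]."""
    unnorm = {s2: O[(o, s2, a)] * sum(T[(s2, s, a)] * b[s] for s in states)
              for s2 in states}
    z = sum(unnorm.values())
    return {s2: p / z for s2, p in unnorm.items()}

states = ("neutral", "frustrated")
# transition model: the user's affective state persists with probability 0.9
T = {(s2, s, "ask"): (0.9 if s2 == s else 0.1) for s in states for s2 in states}
# observation model: an "angry" utterance is more likely when frustrated
O = {("angry", "neutral", "ask"): 0.2, ("angry", "frustrated", "ask"): 0.7}

b1 = belief_update({"neutral": 0.5, "frustrated": 0.5}, "ask", "angry", T, O, states)
```

Observing "angry" shifts the belief mass toward the frustrated state, which the dialogue policy can then act on.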

  17. The chronic diseases modelling approach

    NARCIS (Netherlands)

    Hoogenveen RT; Hollander AEM de; Genugten MLL van; CCM

    1998-01-01

    A mathematical model structure is described that can be used to simulate the changes of the Dutch public health state over time. The model is based on the concept of demographic and epidemiologic processes (events) and is mathematically based on the lifetable method. The population is divided over s...
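
The lifetable-style event mechanics described here can be illustrated with a minimal multistate Markov step; the states and annual transition probabilities below are invented for the sketch, not taken from the report:

```python
# P[from][to]: invented annual transition probabilities; rows sum to 1
P = {
    "healthy":  {"healthy": 0.94, "diseased": 0.05, "dead": 0.01},
    "diseased": {"healthy": 0.00, "diseased": 0.90, "dead": 0.10},
    "dead":     {"healthy": 0.00, "diseased": 0.00, "dead": 1.00},
}

def step(pop):
    """Advance the population distribution by one year."""
    out = {s: 0.0 for s in P}
    for s, n in pop.items():
        for s2, p in P[s].items():
            out[s2] += n * p
    return out

pop = {"healthy": 1000.0, "diseased": 0.0, "dead": 0.0}
for _ in range(10):
    pop = step(pop)
```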

  19. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we...

  20. Szekeres models: a covariant approach

    CERN Document Server

    Apostolopoulos, Pantelis S

    2016-01-01

    We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly which satisfies a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field $E_{ab}$. In addition, the notions of the Apparent and Absolute Apparent Horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used in order to express the Sachs optical equations in a covariant form and analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.

  1. Matrix Model Approach to Cosmology

    CERN Document Server

    Chaney, A; Stern, A

    2015-01-01

    We perform a systematic search for rotationally invariant cosmological solutions to matrix models, or more specifically the bosonic sector of Lorentzian IKKT-type matrix models, in dimensions $d$ less than ten, specifically $d=3$ and $d=5$. After taking a continuum (or commutative) limit they yield $d-1$ dimensional space-time surfaces, with an attached Poisson structure, which can be associated with closed, open or static cosmologies. For $d=3$, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a matrix resolution of cosmological singularities. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the $d=3$ soluti...

  2. A new approach to adaptive data models

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2016-12-01

    Full Text Available Over the last decade, there has been a substantial increase in the volume and complexity of the data we collect, store and process. We are now aware of the increasing demand for real-time data processing in every continuous business process that evolves within the organization. We witness a shift from a traditional static data approach to a more adaptive model approach. This article aims to extend understanding in the field of data models used in information systems by examining how an adaptive data model approach for managing business processes can help organizations adapt on the fly and build dynamic capabilities to react in a dynamic environment.

  3. Modeling software behavior a craftsman's approach

    CERN Document Server

    Jorgensen, Paul C

    2009-01-01

    A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models-focusing on what the software is, rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. Based on the popular and continually evolving course on requirements specification models taught by the auth...

  4. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Full Text Available Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
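
Discrete network-dynamics models of the kind surveyed here are often illustrated with Boolean networks, in which each gene is on or off and is updated by a logic rule. A toy three-gene example with invented regulatory logic (not the paper's yeast network):

```python
# Toy Boolean gene regulatory network: A activates B, B activates C,
# and C represses A.
def update(state):
    a, b, c = state
    return (not c, a, b)

# Iterate synchronous updates until the trajectory revisits a state;
# the visited states form the attractor cycle.
state, seen = (True, False, False), []
while state not in seen:
    seen.append(state)
    state = update(state)
```

For this logic the trajectory cycles through all six reachable states before returning to the start, a simple limit cycle.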

  5. Model Oriented Approach for Industrial Software Development

    Directory of Open Access Journals (Sweden)

    P. D. Drobintsev

    2015-01-01

    Full Text Available The article considers the specifics of a model oriented approach to software development based on the usage of Model Driven Architecture (MDA, Model Driven Software Development (MDSD and Model Driven Development (MDD technologies. Benefits of this approach usage in the software development industry are described. The main emphasis is put on the system design, automated code generation for large systems, verification, proof of system properties and reduction of bug density. Drawbacks of the approach are also considered. The approach proposed in the article is specific for industrial software systems development. These systems are characterized by different levels of abstraction, which is used on modeling and code development phases. The approach allows to detail the model to the level of the system code, at the same time store the verified model semantics and provide the checking of the whole detailed model. Steps of translating abstract data structures (including transactions, signals and their parameters into data structures used in detailed system implementation are presented. Also the grammar of a language for specifying rules of abstract model data structures transformation into real system detailed data structures is described. The results of applying the proposed method in the industrial technology are shown.The article is published in the authors’ wording.

  6. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  7. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models ch...

  8. Modeling diffuse pollution with a distributed approach.

    Science.gov (United States)

    León, L F; Soulis, E D; Kouwen, N; Farquhar, G J

    2002-01-01

    The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. Integrating the model with data extracted using GIS technology (Geographical Information Systems) for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale level.
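
The grouped response unit idea (computing a response per land-cover class, weighting by areal fraction, then summing before routing) can be sketched as follows; the land-cover classes and unit runoff values are invented for illustration:

```python
def gru_runoff(fractions, unit_runoff):
    """Area-weighted grid-cell response from per-land-cover responses.
    Both arguments are dicts keyed by land-cover class."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9  # fractions must sum to 1
    return sum(fractions[c] * unit_runoff[c] for c in fractions)

# invented areal fractions and per-class runoff (mm/day)
q = gru_runoff({"forest": 0.6, "crop": 0.3, "urban": 0.1},
               {"forest": 2.0, "crop": 5.0, "urban": 12.0})
```

Sediment yield and nutrient concentrations would be aggregated the same way before being routed downstream.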

  9. MODULAR APPROACH WITH ROUGH DECISION MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed T. Shawky

    2012-09-01

    Full Text Available Decision models which adopt rough set theory have been used effectively in many real world applications. However, rough decision models suffer high computational complexity when dealing with datasets of huge size. In this research we propose a new rough decision model that allows making decisions based on a modularity mechanism. According to the proposed approach, large-size datasets can be divided into arbitrary moderate-size datasets, then a group of rough decision models can be built as separate decision modules. The overall model decision is computed as the consensus decision of all decision modules through some aggregation technique. This approach provides a flexible and quick way for extracting decision rules from large size information tables using rough decision models.
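
One simple choice for the aggregation technique mentioned above is a majority vote across the decision modules. A sketch (the module outputs are invented; the paper leaves the aggregation technique open):

```python
from collections import Counter

def consensus(decisions):
    """Majority-vote aggregation over per-module decisions."""
    return Counter(decisions).most_common(1)[0][0]

# e.g. three rough decision modules, each built on one data chunk
overall = consensus(["approve", "reject", "approve"])
```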

  11. Modeling approach suitable for energy system

    Energy Technology Data Exchange (ETDEWEB)

    Goetschel, D. V.

    1979-01-01

    Recently increased attention has been placed on optimization problems related to the determination and analysis of operating strategies for energy systems. Presented in this paper is a nonlinear model that can be used in the formulation of certain energy-conversion systems-modeling problems. The model lends itself nicely to solution approaches based on nonlinear-programming algorithms and, in particular, to those methods falling into the class of variable metric algorithms for nonlinearly constrained optimization.

  12. Stormwater infiltration trenches: a conceptual modelling approach.

    Science.gov (United States)

    Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare

    2009-01-01

    In recent years, limitations linked to traditional urban drainage schemes have been pointed out and new approaches are developing introducing more natural methods for retaining and/or disposing of stormwater. These mitigation measures are generally called Best Management Practices or Sustainable Urban Drainage Systems and they include practices such as infiltration and storage tanks in order to reduce the peak flow and retain part of the polluting components. The introduction of such practices in urban drainage systems entails an upgrade of existing modelling frameworks in order to evaluate their efficiency in mitigating the impact of urban drainage systems on receiving water bodies. While storage tank modelling approaches are quite well documented in literature, some gaps are still present about infiltration facilities, mainly due to the complexity of the involved physical processes. In this study, a simplified conceptual modelling approach for the simulation of infiltration trenches is presented. The model enables assessment of the performance of infiltration trenches. The main goal is to develop a model that can be employed for the assessment of the mitigation efficiency of infiltration trenches in an integrated urban drainage context. Particular care was given to the simulation of infiltration structures considering the performance reduction due to clogging phenomena. The proposed model has been compared with other simplified modelling approaches and with a physically based model adopted as benchmark. The model performed better compared to other approaches considering both unclogged facilities and the effect of clogging. On the basis of a long-term simulation of six years of rain data, the performance and the effectiveness of an infiltration trench measure are assessed. The study confirmed the important role played by the clogging phenomenon in such infiltration structures.
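
A conceptual trench model of this kind can be sketched as a single storage reservoir whose infiltration capacity decays with the cumulative volume infiltrated, mimicking clogging. Everything below (structure and parameter values) is an illustrative guess, not the authors' model:

```python
def simulate_trench(inflow, k0=5.0, clog=0.002, storage_max=100.0):
    """Conceptual trench: storage drains at capacity k per step, and k decays
    with cumulative infiltrated volume to mimic clogging. Arbitrary units."""
    storage, cum_infiltrated, overflow = 0.0, 0.0, 0.0
    k = k0
    for q in inflow:
        storage += q
        if storage > storage_max:          # trench full: excess overflows
            overflow += storage - storage_max
            storage = storage_max
        out = min(storage, k)              # infiltration this step
        storage -= out
        cum_infiltrated += out
        k = k0 / (1.0 + clog * cum_infiltrated)  # clogging cuts capacity
    return cum_infiltrated, overflow
```

Comparing runs with and without the clogging term shows the efficiency loss the study emphasizes.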

  13. Challenges in structural approaches to cell modeling.

    Science.gov (United States)

    Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A

    2016-07-31

    Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Building Water Models, A Different Approach

    CERN Document Server

    Izadi, Saeed; Onufriev, Alexey V

    2014-01-01

    Simplified, classical models of water are an integral part of atomistic molecular simulations, especially in biology and chemistry where hydration effects are critical. Yet, despite several decades of effort, these models are still far from perfect. Presented here is an alternative approach to constructing point charge water models - currently, the most commonly used type. In contrast to the conventional approach, we do not impose any geometry constraints on the model other than symmetry. Instead, we optimize the distribution of point charges to best describe the "electrostatics" of the water molecule, which is key to many unusual properties of liquid water. The search for the optimal charge distribution is performed in 2D parameter space of key lowest multipole moments of the model, to find best fit to a small set of bulk water properties at room temperature. A virtually exhaustive search is enabled via analytical equations that relate the charge distribution to the multipole moments. The resulting "optimal"...
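
The lowest multipole moment being fitted, the dipole, is just the charge-weighted sum of site positions. A sketch using TIP3P-like charges and geometry purely for illustration (these are not the paper's optimized values):

```python
# Charges (e) and positions (angstrom); TIP3P-like numbers, illustration only
charges = [
    (-0.834, (0.0, 0.0, 0.0)),        # oxygen site
    (0.417, (0.9572, 0.0, 0.0)),      # hydrogen 1
    (0.417, (-0.2399, 0.9266, 0.0)),  # hydrogen 2 (104.52 degree HOH angle)
]

# dipole vector mu = sum_i q_i * r_i, magnitude in e*angstrom
mu = [sum(q * pos[i] for q, pos in charges) for i in range(3)]
dipole = sum(m * m for m in mu) ** 0.5
```

Multiplying by 4.803 converts e·angstrom to debye, giving roughly 2.35 D for this geometry; the paper's search varies such moments to fit bulk properties.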

  15. Towards new approaches in phenological modelling

    Science.gov (United States)

    Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas

    2014-05-01

    Modelling of phenological stages has been based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. Now, there is a growing body of opinion that asks for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised against observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). The limiting factor for a fundamental improvement of these models is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution for this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop this kind of model, changes of metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) and high-resolution (weekly probes between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. Our suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
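
The growing degree-day concept attributed to Reaumur remains the core of most temperature-sum models. A minimal daily accumulation sketch (the base temperature and daily values below are arbitrary):

```python
def gdd(t_min, t_max, base=5.0):
    """Daily growing degree-days: mean temperature above a base threshold."""
    return max(0.0, (t_min + t_max) / 2.0 - base)

# Forcing accumulates over the season; leafing or flowering is predicted
# when the sum crosses a species-specific threshold.
season_sum = sum(gdd(lo, hi) for lo, hi in [(2, 10), (4, 14), (8, 18)])
```

Semi-mechanistic models add a chilling sum for dormancy release before this forcing sum starts, which is exactly the part the metabolomic work aims to ground mechanistically.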

  16. Modelling Coagulation Systems: A Stochastic Approach

    CERN Document Server

    Ryazanov, V V

    2011-01-01

    A general stochastic approach to the description of coagulating aerosol systems is developed. As the object of description one can consider arbitrary mesoscopic values (the number of aerosol clusters, their size, etc.). The birth-and-death formalism for the number of clusters can be regarded as a special case of the generalized storage model. An application of the storage model to the number of monomers in a cluster is discussed.
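
A birth-and-death process for the number of clusters can be simulated exactly with the Gillespie algorithm; the sketch below uses invented per-cluster birth and death rates and is not tied to the paper's formalism:

```python
import random

def birth_death(n0=10, birth=1.0, death=0.5, t_end=5.0, seed=1):
    """Gillespie simulation of a linear birth-death process for the
    number of clusters; per-cluster rates are invented."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > 0:
        t += rng.expovariate(n * (birth + death))  # time to next event
        if t >= t_end:
            break
        if rng.random() < birth / (birth + death):  # pick event type
            n += 1
        else:
            n -= 1
    return n
```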

  17. A Multiple Model Approach to Modeling Based on LPF Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on model-on-demand tactics, a multiple model approach to modeling nonlinear systems is presented. The basic idea is to find, in a vast historical set of system input-output data, the data sets that match the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which together realize accurate modeling of the global system. Compared with other methods, the simulation results show good performance owing to simple, effective and reliable estimation.
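
The model-on-demand idea can be sketched as follows: given a query (working point), retrieve the nearest historical input-output pairs and fit a local polynomial by kernel-weighted least squares. Function and parameter names here are illustrative, not from the paper:

```python
import numpy as np

def local_poly_predict(X, y, x_query, k=20, degree=1, bandwidth=1.0):
    """Model-on-demand sketch: fit a local polynomial to the k historical
    samples nearest x_query, weighted by a Gaussian kernel."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    d = np.abs(X - x_query)
    idx = np.argsort(d)[:k]                        # nearest operating data
    w = np.exp(-0.5 * (d[idx] / bandwidth) ** 2)   # kernel weights
    # Design matrix in (x - x_query): the intercept is the local prediction.
    A = np.vander(X[idx] - x_query, degree + 1, increasing=True)
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
    return beta[0]

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 200)
pred = local_poly_predict(X, np.sin(X), x_query=1.0)
```

A new local fit is computed for each query, so the "global model" is never stored explicitly; it exists only as the family of local fits.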

  18. Towards a Multiscale Approach to Cybersecurity Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.

    2013-11-12

    We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under active attack. We focus on the theoretical and algorithmic foundations of multiscale graphs from an algorithmic perspective, with cyber system defense as the motivating use case. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm that solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets are reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
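
As a single-scale point of reference, the classical all-pairs shortest-path computation that the multiscale analog generalizes can be sketched with the Floyd-Warshall algorithm (the example network is hypothetical):

```python
def floyd_warshall(n, edges):
    """Single-scale all-pairs shortest paths on an undirected weighted
    graph; the paper develops a multiscale analog of this metric."""
    INF = float("inf")
    dist = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)   # undirected network
    for k in range(n):                     # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Toy network: node 0 = attacker's position, node 3 = sensitive machine.
d = floyd_warshall(4, [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 10.0)])
```

In the cyber-defense reading, `d[0][3]` would be the attacker-to-target distance; the multiscale version computes such distances without expanding the full flat graph.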

  19. Post-16 Biology--Some Model Approaches?

    Science.gov (United States)

    Lock, Roger

    1997-01-01

    Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)

  20. Decomposition approach to model smart suspension struts

    Science.gov (United States)

    Song, Xubin

    2008-10-01

    Model and simulation study is the starting point for engineering design and development, especially for developing vehicle control systems. This paper presents a methodology to build models for application of smart struts for vehicle suspension control development. The modeling approach is based on decomposition of the testing data. Per the strut functions, the data is dissected according to both control and physical variables. Then the data sets are characterized to represent different aspects of the strut working behaviors. Next different mathematical equations can be built and optimized to best fit the corresponding data sets, respectively. In this way, the model optimization can be facilitated in comparison to a traditional approach to find out a global optimum set of model parameters for a complicated nonlinear model from a series of testing data. Finally, two struts are introduced as examples for this modeling study: magneto-rheological (MR) dampers and compressible fluid (CF) based struts. The model validation shows that this methodology can truly capture macro-behaviors of these struts.

  1. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp of models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...

  2. A Bayesian Shrinkage Approach for AMMI Models.

    Science.gov (United States)

    da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio

    2015-01-01

    Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criterion for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible intervals for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices with more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained on the first two components; the models selected were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by leave-one-out cross-validation. The model chosen by the posterior distribution of the singular values was also similar to that produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of AMMI model based on direct posterior
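
For orientation, the classical (non-Bayesian) AMMI decomposition that the Bayesian shrinkage variant builds on can be sketched as: remove the grand mean and the main effects, then keep the first k multiplicative terms of the SVD of the interaction residuals (function name and toy data are illustrative):

```python
import numpy as np

def ammi_components(Y, k=2):
    """Classical AMMI sketch on a genotype-by-environment table Y:
    additive main effects plus a rank-k SVD of the GE interaction."""
    Y = np.asarray(Y, float)
    mu = Y.mean()
    g = Y.mean(axis=1) - mu                  # genotype main effects
    e = Y.mean(axis=0) - mu                  # environment main effects
    ge = Y - mu - g[:, None] - e[None, :]    # GE interaction matrix
    U, s, Vt = np.linalg.svd(ge, full_matrices=False)
    pattern = (U[:, :k] * s[:k]) @ Vt[:k]    # rank-k GE pattern (biplot)
    return mu, g, e, s, pattern

Y = np.arange(12.0).reshape(3, 4)            # purely additive toy table
mu, g, e, s, pattern = ammi_components(Y, k=2)
```

The Bayesian shrinkage approach described above effectively shrinks the singular values `s` toward zero instead of truncating them by a fixed rank.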

  3. A Bayesian Shrinkage Approach for AMMI Models.

    Directory of Open Access Journals (Sweden)

    Carlos Pereira da Silva

    Full Text Available Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criterion for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible intervals for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices with more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained on the first two components; the models selected were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by leave-one-out cross-validation. The model chosen by the posterior distribution of the singular values was also similar to that produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of AMMI model based on direct

  4. Scientific Theories, Models and the Semantic Approach

    Directory of Open Access Journals (Sweden)

    Décio Krause

    2007-12-01

    Full Text Available According to the semantic view, a theory is characterized by a class of models. In this paper, we examine critically some of the assumptions that underlie this approach. First, we recall that models are models of something. Thus we cannot leave completely aside the axiomatization of the theories under consideration, nor can we ignore the metamathematics used to elaborate these models, for changes in the metamathematics often impose restrictions on the resulting models. Second, based on a parallel between van Fraassen's modal interpretation of quantum mechanics and Skolem's relativism regarding set-theoretic concepts, we introduce a distinction between relative and absolute concepts in the context of the models of a scientific theory, and we discuss the significance of that distinction. Finally, by focusing on contemporary particle physics, we raise the question: since there is no generally accepted unification of the parts of the standard model (namely, QED and QCD), we have no theory in the usual sense of the term. This poses a difficulty: if there is no theory, how can we speak of its models? What are the latter models of? We conclude by noting that it is unclear that the semantic view can be applied to contemporary physical theories.

  5. Multiscale Model Approach for Magnetization Dynamics Simulations

    CERN Document Server

    De Lucia, Andrea; Tretiakov, Oleg A; Kläui, Mathias

    2016-01-01

    Simulations of magnetization dynamics in a multiscale environment enable rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within, in order to employ a suitable discretization and use either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate the spin wave transmission across the regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with f...

  6. Continuum modeling an approach through practical examples

    CERN Document Server

    Muntean, Adrian

    2015-01-01

    This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced with nonstandard rheological topics like those typically arising in the food industry.

  7. A Multivariate Approach to Functional Neuro Modeling

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.

    1998-01-01

    This Ph.D. thesis, A Multivariate Approach to Functional Neuro Modeling, deals with the analysis and modeling of data from functional neuro imaging experiments. A multivariate dataset description is provided which facilitates efficient representation of typical datasets and, more importantly...... and overall conditions governing the functional experiment, via associated micro- and macroscopic variables. The description facilitates an efficient microscopic re-representation, as well as a handle on the link between brain and behavior; the latter is achieved by hypothesizing variations in the micro...... a generalization theoretical framework centered around measures of model generalization error. - Only few, if any, examples of the application of generalization theory to functional neuro modeling currently exist in the literature. - Exemplification of the proposed generalization theoretical framework...

  8. Interfacial Fluid Mechanics A Mathematical Modeling Approach

    CERN Document Server

    Ajaev, Vladimir S

    2012-01-01

    Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations and their relative importance in typical microscale applications is discussed. Then,several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail.  Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also:  Discusses mathematical models in the context of actual applications such as electrowetting Includes unique material on fluid flow near structured surfaces and phase change phenomena Shows readers how to solve modeling problems related to microscale multiphase flows Interfacial Fluid Me...

  9. Systematic approach to MIS model creation

    Directory of Open Access Journals (Sweden)

    Macura Perica

    2004-01-01

    Full Text Available In this paper, by applying the basic principles of the general theory of systems (the systematic approach), we formulate a model of a marketing information system. The research was based on the main characteristics of the systematic approach and of the marketing system. The informational basis for the management of the marketing system, i.e. of the marketing instruments, is presented by listing the most important information for decision making per individual marketing-mix instrument. In the projected model of the marketing information system, the information listed in this way forms the basis for establishing databases of product, price, distribution and promotion. The paper gives the basic preconditions for the formulation and functioning of the model. The model is presented by explicating the elements of its structure (environment, database operators, information-system analysts, decision makers - managers), i.e. input, process, output, feedback and the relations between these elements that are necessary for its optimal functioning. Besides that, basic elements for the implementation of the model into a business system are given, as well as the conditions for its efficient functioning and development.

  10. Regularization of turbulence - a comprehensive modeling approach

    Science.gov (United States)

    Geurts, B. J.

    2011-12-01

    Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations, in which one aims to capture only the primary features of a flow at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, will be reviewed. Its application to multiphase turbulence will be illustrated, in which a basic regularization principle is enforced to approximate momentum and scalar transport in a physically consistent way. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling for turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred is given on the basis of homogeneous isotropic turbulence.
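
For reference, the Leray regularization mentioned here replaces the convecting velocity in the Navier-Stokes equations by a smoothed (filtered) velocity, which removes the smallest scales from the nonlinear term while leaving the transported field unfiltered:

$$
\partial_t u + (\bar{u} \cdot \nabla)\, u + \nabla p = \nu \nabla^2 u,
\qquad \nabla \cdot u = 0,
\qquad \bar{u} = L * u,
$$

where $L$ is a spatial smoothing filter. The LANS-alpha model arises from a related but distinct modification of the nonlinearity.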

  11. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The input data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and many measurements are difficult or very costly to obtain; the lack of data can then be compensated by introducing a priori estimates. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs and some of their characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
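
Under Gaussian assumptions, the core Bayesian fusion step for two height grids can be sketched as a pixelwise precision-weighted average (a simplification of the paper's full model; variable names are illustrative):

```python
import numpy as np

def bayes_merge(z1, var1, z2, var2):
    """Fuse two DSM height estimates (scalars or arrays) with Gaussian
    errors: the posterior mean is the precision-weighted average and the
    posterior variance is the inverse of the summed precisions."""
    z1, z2 = np.asarray(z1, float), np.asarray(z2, float)
    w1, w2 = 1.0 / np.asarray(var1, float), 1.0 / np.asarray(var2, float)
    z = (w1 * z1 + w2 * z2) / (w1 + w2)   # posterior mean height
    var = 1.0 / (w1 + w2)                 # posterior variance
    return z, var

# Two equally uncertain height estimates for the same pixel.
z, v = bayes_merge(10.0, 1.0, 12.0, 1.0)
```

A prior height surface (e.g. the smooth-roof assumption inferred via local entropy) can be folded in as a third term with its own variance.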

  12. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    Full Text Available In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The input data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and many measurements are difficult or very costly to obtain; the lack of data can then be compensated by introducing a priori estimates. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs and some of their characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.

  13. A new approach for Bayesian model averaging

    Institute of Scientific and Technical Information of China (English)

    TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun

    2012-01-01

    Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function so as to remove the constraint that the BMA weights must sum to one, and then use a limited-memory quasi-Newton algorithm to solve the nonlinear optimization problem, thereby formulating a new approach to BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments with three land surface models show that the performance of BMA-BFGS is similar to that of the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and almost equivalent to that for EM.
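
A minimal sketch of the described idea, assuming Gaussian member densities, a softmax reparameterization to remove the sum-to-one constraint on the weights, and SciPy's L-BFGS-B in place of the authors' specific implementation:

```python
import numpy as np
from scipy.optimize import minimize

def fit_bma(forecasts, obs):
    """BMA-BFGS-style training sketch: weights are parameterized by a
    softmax (so no explicit constraint is needed) and the negative
    log-likelihood is minimized with a limited-memory quasi-Newton method."""
    F = np.asarray(forecasts, float)   # shape (K members, T times)
    y = np.asarray(obs, float)
    K = F.shape[0]

    def nll(params):
        theta, log_sigma = params[:K], params[K]
        w = np.exp(theta - theta.max())
        w /= w.sum()                   # softmax weights, sum to 1 by construction
        sigma = np.exp(log_sigma)      # shared member std (simplification)
        dens = np.exp(-0.5 * ((y - F) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        return -np.log(w @ dens + 1e-300).sum()

    res = minimize(nll, np.zeros(K + 1), method="L-BFGS-B")
    theta = res.x[:K]
    w = np.exp(theta - theta.max())
    w /= w.sum()
    return w, np.exp(res.x[K])

rng = np.random.default_rng(1)
y = rng.normal(size=300)                               # "observations"
forecasts = np.vstack([y + 0.1 * rng.normal(size=300), # accurate member
                       y + 2.0])                       # biased member
w, sigma = fit_bma(forecasts, y)
```

The fitted weights should concentrate on the accurate ensemble member while still summing to one, without any constrained optimization.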

  14. AN AUTOMATIC APPROACH TO BOX & JENKINS MODELLING

    OpenAIRE

    MARCELO KRIEGER

    1983-01-01

    In spite of general recognition of the good forecasting ability of ARIMA models in predicting univariate time series, this approach is not widely used because of the lack of automatic, computerized procedures. In this work this problem is discussed and an algorithm is proposed.

  15. Modeling in transport phenomena a conceptual approach

    CERN Document Server

    Tosun, Ismail

    2007-01-01

    Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented; students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to

  16. Modeling for fairness: A Rawlsian approach.

    Science.gov (United States)

    Diekmann, Sven; Zwart, Sjoerd D

    2014-06-01

    In this paper we introduce the overlapping design consensus for the construction of models in design and the related value judgments. The overlapping design consensus, inspired by Rawls' overlapping consensus, is a well-informed, mutual agreement among all stakeholders based on fairness. Fairness is respected if all stakeholders' interests are given due and equal attention. For reaching such a fair agreement, we apply Rawls' original position and reflective equilibrium to modeling. We argue that by striving for the original position, stakeholders expel invalid arguments, hierarchies, unwarranted beliefs, and bargaining effects from influencing the consensus. The reflective equilibrium requires that stakeholders' beliefs cohere with the final agreement and its justification. Therefore, the overlapping design consensus is not only an agreement to decisions, as in most other stakeholder approaches; it is also an agreement to their justification, one that is consistent with each stakeholder's beliefs. In support of fairness, we argue that fairness qualifies as a maxim in modeling. We furthermore distinguish values embedded in a model from values that are implied by its context of application. Finally, we conclude that reaching an overlapping design consensus requires communication about the properties of, and values related to, a model.

  17. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    Full Text Available The paper deals with some current problems of modeling the dynamics of the development of the individual's subject features. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logic and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage are offered, together with the algorithm for developing an integrative model of it. The suggested conclusions might be of use for further theoretical research, for analyses of educational practices and for realistic prediction of pedagogical phenomena.

  18. Nuclear level density: Shell-model approach

    Science.gov (United States)

    Sen'kov, Roman; Zelevinsky, Vladimir

    2016-06-01

    Knowledge of the nuclear level density is necessary for understanding various reactions, including those in the stellar environment. Usually the combinatorics of a Fermi gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally used parameters are also compared with standard phenomenological approaches.

  19. Modeling Social Annotation: a Bayesian Approach

    CERN Document Server

    Plangprasopchok, Anon

    2008-01-01

    Collaborative tagging systems, such as del.icio.us, CiteULike, and others, allow users to annotate objects, e.g., Web pages or scientific papers, with descriptive labels called tags. The social annotations, contributed by thousands of users, can potentially be used to infer categorical knowledge, classify documents or recommend new relevant information. Traditional text inference methods do not make the best use of socially generated data, since they do not take into account variations in individual users' perspectives and vocabulary. In previous work, we introduced a simple probabilistic model that takes the interests of individual annotators into account in order to find hidden topics of annotated objects. Unfortunately, our proposed approach had a number of shortcomings, including overfitting, local maxima and the requirement to specify values for some parameters. In this paper we address these shortcomings in two ways. First, we extend the model to a fully Bayesian framework. Second, we describe an infinite ver...

  20. Multicomponent Equilibrium Models for Testing Geothermometry Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Carl D. Palmer; Robert W. Smith; Travis L. McLing

    2013-02-01

    Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlations between water temperature and composition, or on thermodynamic calculations based on a subset (typically silica, cations or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for a large number of potential reservoir minerals as a function of temperature. We have constructed several “forward” geochemical models using The Geochemist’s Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and the equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of the pressure decrease associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared with conventional geothermometers. These results can help improve the estimation of geothermal resource temperature during exploration and early development.
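
The saturation-index idea can be sketched in miniature: compute SI = log10(IAP/Ksp) for each candidate mineral at each candidate temperature, and pick the temperature at which the minerals are collectively closest to equilibrium. The input values below are hypothetical, for illustration only:

```python
import math

def saturation_index(iap, ksp):
    """SI = log10(IAP/Ksp); SI near zero means the water is close to
    equilibrium with the mineral at that temperature."""
    return math.log10(iap / ksp)

def estimate_reservoir_temperature(si_curves):
    """Multicomponent geothermometry in miniature: choose the temperature
    at which the mean |SI| over candidate reservoir minerals is smallest.
    `si_curves` maps temperature -> list of SI values, one per mineral
    (hypothetical input data)."""
    return min(si_curves,
               key=lambda t: sum(abs(s) for s in si_curves[t]) / len(si_curves[t]))

# Toy SI curves for three minerals at three candidate temperatures (degC).
si_curves = {100: [0.8, -0.5, 0.6], 150: [0.1, -0.05, 0.02], 200: [-0.7, 0.4, -0.9]}
t_est = estimate_reservoir_temperature(si_curves)
```

In practice the SI curves come from a full speciation code (such as the forward models described above), and the optimization may also adjust pCO2 or steam loss rather than temperature alone.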

  1. A semiparametric approach to physiological flow models.

    Science.gov (United States)

    Verotta, D; Sheiner, L B; Ebling, W F; Stanski, D R

    1989-08-01

    By regarding sampled tissues in a physiological model as linear subsystems, the usual advantages of flow models are preserved while mitigating two of their disadvantages: (i) the need for assumptions regarding intratissue kinetics, and (ii) the need to simultaneously fit data from several tissues. To apply the linear systems approach, both arterial blood and (the tissue of interest) drug concentrations must be measured. The body is modeled as having an arterial compartment (A) distributing drug to different linear subsystems (tissues), connected in a specific way by blood flow. The response (CA, with dimensions of concentration) of A is measured. Tissues receive input from A (and optionally from other tissues), and send output to the outside or to other parts of the body. The response (CT, the total amount of drug in tissue T divided by the volume of T) of each such tissue is also observed. From linear systems theory, CT can be expressed as the convolution of CA with a disposition function, F(t) (with dimensions 1/time). The function F(t) depends on the (unknown) structure of T, but has certain other constant properties: the integral ∫₀^∞ F(t) dt is the steady-state ratio of CT to CA, and the value F(0) is the clearance rate of drug from A to T divided by the volume of T. A formula for the clearance rate of drug from T to outside T can be derived. To estimate F(t) empirically, and thus mitigate disadvantage (i), we suggest that, first, a nonparametric (or parametric) function be fitted to the CA data, yielding predicted values ĈA, and, second, the convolution integral of ĈA with F(t) be fitted to the CT data using a deconvolution method. By so doing, each tissue's data are analyzed separately, thus mitigating disadvantage (ii). A method for system simulation is also proposed. The results of applying the approach to simulated data and to real thiopental data are reported.
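
The convolution relation CT(t) = (CA * F)(t) described above can be sketched numerically. The mono-exponential disposition function and all parameter values below are hypothetical, chosen only to make the constant properties of F(t) visible:

```python
import numpy as np

# Sketch of the linear-subsystem relation CT(t) = (CA * F)(t). The
# disposition function F(t) = (CL/VT) * exp(-k*t) and every constant here
# are illustrative, not values from the paper.
dt = 0.01                          # time step (h)
t = np.arange(0, 10, dt)           # time grid
CA = np.exp(-0.5 * t)              # assumed arterial concentration profile
k, CL_over_VT = 0.8, 0.3           # hypothetical tissue constants
F = CL_over_VT * np.exp(-k * t)    # disposition function

# Discrete convolution approximates CT(t) = integral of CA(tau)*F(t-tau)
CT = np.convolve(CA, F)[:len(t)] * dt

# F(0) is clearance from A into T divided by the tissue volume
print(F[0])             # 0.3
# The area under F(t) (= CL/(VT*k) = 0.375 here) would equal the
# steady-state CT/CA ratio under a constant arterial input
print(F.sum() * dt)
```

A deconvolution step, as suggested in the abstract, would run this mapping in reverse: given measured CA and CT, recover F(t).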

  2. Evaluating face trustworthiness: a model based approach.

    Science.gov (United States)

    Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N

    2008-06-01

    Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic: strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.

  3. Approaches and models of intercultural education

    Directory of Open Access Journals (Sweden)

    Iván Manuel Sánchez Fontalvo

    2013-10-01

    Full Text Available We must be aware of the need to build an intercultural society, and this awareness must be assumed in all social spheres, among which education plays a leading role. Its role is transcendental, since it must promote educational spaces that form people with the virtues and capacities to live together in multicultural and socially diverse (sometimes unequal) contexts in an increasingly globalized and interconnected world, and it must foster shared feelings of civic belonging to neighborhood, city, region and country, giving people both concern and critical judgement regarding marginalization, poverty, misery and the inequitable distribution of wealth, which are causes of structural violence, while also motivating them to work for the welfare and transformation of these scenarios. On these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.

  4. A Bayesian modeling approach for generalized semiparametric structural equation models.

    Science.gov (United States)

    Song, Xin-Yuan; Lu, Zhao-Hua; Cai, Jing-Heng; Ip, Edward Hak-Sing

    2013-10-01

    In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types (continuous, count, ordered, and unordered categorical). This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.
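
The "unspecified smooth function" idea can be illustrated with a minimal penalized-spline smoother. This is a frequentist analogue (a ridge-penalized truncated-line basis), not the paper's Bayesian P-splines/MCMC machinery, and every setting below is illustrative:

```python
import numpy as np

# Minimal penalized-spline smoother: fit an unspecified smooth function
# f(x) to noisy data. Basis, penalty weight and toy data are illustrative;
# the paper instead uses Bayesian P-splines estimated by MCMC.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)   # toy observations

knots = np.linspace(0, 1, 12)[1:-1]                      # interior knots
B = np.column_stack([np.ones_like(x), x] +
                    [np.maximum(x - k, 0) for k in knots])
lam = 1.0                                                # smoothing penalty
P = np.eye(B.shape[1]); P[:2, :2] = 0                    # leave the line unpenalized
beta = np.linalg.solve(B.T @ B + lam * P, B.T @ y)       # penalized least squares
fit = B @ beta

print(np.mean((fit - np.sin(2 * np.pi * x)) ** 2))       # small residual MSE
```

In the Bayesian P-spline version, `lam` would be governed by a prior and sampled along with `beta` rather than fixed by hand.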

  5. An integrated approach to permeability modeling using micro-models

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)

    2008-10-15

    An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modeling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modeling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modeling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results. There was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of McMurray oil sands. The model results were in good agreement with the experimental data. 8 refs., 17 figs.
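
The kind of relation generalized by the extended power-law formalism can be sketched with classical power-law averaging of a binary sand/shale mixture. The permeability values and the averaging exponent below are illustrative, not the paper's calibrated EPLF parameters:

```python
import numpy as np

# Power-law averaging of effective permeability for a binary sand/shale
# mixture. k_sand, k_shale and the exponent omega are hypothetical values
# chosen for illustration (omega=1 gives the arithmetic mean, omega=-1 the
# harmonic mean; intermediate values interpolate between flow regimes).
k_sand, k_shale = 2000.0, 0.01      # permeabilities (mD)
Vsh = np.linspace(0, 1, 5)          # shale volume fraction
omega = 0.5                         # averaging exponent

k_eff = ((1 - Vsh) * k_sand**omega + Vsh * k_shale**omega) ** (1 / omega)
print(k_eff)   # decreases monotonically from 2000 mD to 0.01 mD
```

The paper's extension additionally lets the clean-sand permeability itself vary with macroscopic shale content, which a fixed `k_sand` cannot capture.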

  6. Development of a computationally efficient urban modeling approach

    DEFF Research Database (Denmark)

    Wolfs, Vincent; Murla, Damian; Ntegeka, Victor

    2016-01-01

    This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be a...

  7. Connectivity of channelized reservoirs: a modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    Larue, David K. [ChevronTexaco, Bakersfield, CA (United States); Hovadik, Joseph [ChevronTexaco, San Ramon, CA (United States)

    2006-07-01

    Connectivity represents one of the fundamental properties of a reservoir that directly affects recovery. If a portion of the reservoir is not connected to a well, it cannot be drained. Geobody or sandbody connectivity is defined as the percentage of the reservoir that is connected, and reservoir connectivity is defined as the percentage of the reservoir that is connected to wells. Previous studies have mostly considered mathematical, physical and engineering aspects of connectivity. In the current study, the stratigraphy of connectivity is characterized using simple, 3D geostatistical models. Based on these modelling studies, stratigraphic connectivity is good, usually greater than 90%, if the net:gross ratio, or sand fraction, is greater than about 30%. At net:gross values less than 30%, there is a rapid diminishment of connectivity as a function of net:gross. This behaviour between net:gross and connectivity defines a characteristic 'S-curve', in which the connectivity is high for net:gross values above 30%, then diminishes rapidly and approaches 0. Well configuration factors that can influence reservoir connectivity are well density, well orientation (vertical or horizontal; horizontal parallel to channels or perpendicular) and length of completion zones. Reservoir connectivity as a function of net:gross can be improved by several factors: presence of overbank sandy facies, deposition of channels in a channel belt, deposition of channels with high width/thickness ratios, and deposition of channels during variable floodplain aggradation rates. Connectivity can be reduced substantially in two-dimensional reservoirs, in map view or in cross-section, by volume support effects and by stratigraphic heterogeneities. It is well known that in two dimensions, the cascade zone for the 'S-curve' of net:gross plotted against connectivity occurs at about 60% net:gross. Generalizing this knowledge, any time that a reservoir can be regarded as …
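
The 'S-curve' behaviour described above can be reproduced with a toy Monte-Carlo experiment: on an uncorrelated random 3D grid, the fraction of sand cells in the largest connected geobody jumps sharply once net:gross crosses the 3D site-percolation threshold (about 0.31, consistent with the ~30% figure above). This is only an illustration, not the paper's geostatistical models; grid size and realizations are kept small for speed:

```python
import numpy as np
from collections import deque

# Largest-geobody connectivity vs. net:gross on a random 3D grid with
# 6-face connectivity. Uncorrelated toy model; all settings illustrative.
rng = np.random.default_rng(1)

def largest_cluster_fraction(sand):
    """Fraction of sand cells belonging to the largest connected body."""
    seen = np.zeros_like(sand, dtype=bool)
    nx, ny, nz = sand.shape
    best = 0
    for idx in np.argwhere(sand):
        if seen[tuple(idx)]:
            continue
        size, q = 0, deque([tuple(idx)])       # breadth-first flood fill
        seen[tuple(idx)] = True
        while q:
            x, y, z = q.popleft()
            size += 1
            for dx, dy, dz in ((1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)):
                p = (x+dx, y+dy, z+dz)
                if (0 <= p[0] < nx and 0 <= p[1] < ny and 0 <= p[2] < nz
                        and sand[p] and not seen[p]):
                    seen[p] = True
                    q.append(p)
        best = max(best, size)
    return best / max(sand.sum(), 1)

for ng in (0.1, 0.2, 0.3, 0.4, 0.5):
    grid = rng.random((20, 20, 20)) < ng       # net:gross = sand probability
    print(ng, round(largest_cluster_fraction(grid), 2))
# Connectivity stays low below roughly 30% net:gross, then rises steeply:
# the 'S-curve' behaviour (3D site-percolation threshold is about 0.31).
```

Real reservoirs have correlated channel geometries, which is why the stratigraphic factors listed in the abstract shift this curve rather than follow the uncorrelated case exactly.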

  8. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in the current research into the etiology of biomedical problems and potential treatment strategies.  Computational modeling allows to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However these biomedical problems are inherently complex with a myriad of influencing factors, which strongly complicates the model building and validation process.  This book wants to address four main issues related to the building and validation of computational models of biomedical processes: Modeling establishment under uncertainty Model selection and parameter fitting Sensitivity analysis and model adaptation Model predictions under uncertainty In each of the abovementioned areas, the book discusses a number of key-techniques by means of a general theoretical description followed by one or more practical examples.  This book is intended for graduate stude...

  9. ALREST High Fidelity Modeling Program Approach

    Science.gov (United States)

    2011-05-18

    (Only fragmented presentation text is available.) Topics include: gases and mixtures of Redlich-Kwong and Peng-Robinson fluids; an assumed-PDF model based on the k-ε-g model in the NASA/LaRC Vulcan code; a level-set model; the potential attractiveness of liquid hydrocarbon engines for boost applications; the propensity of hydrocarbon engines for combustion instability; air …

  10. LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL

    National Research Council Canada - National Science Library

    Eser ÖRDEM

    2013-01-01

    Abstract This study intends to propose Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language so that this model can be used in classroom settings...

  11. A model-based multisensor data fusion knowledge management approach

    Science.gov (United States)

    Straub, Jeremy

    2014-06-01

    A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data are considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.
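
The support/refutation idea described above can be sketched as a simple weighted-evidence evaluation. The model elements, weights, observations and threshold below are all hypothetical:

```python
# Minimal sketch of model-based fusion: each sensor reading supports (+1),
# refutes (-1), or says nothing (0) about an element of a thesis-based
# model, and the thesis is evaluated from the accumulated weighted
# evidence. Elements, weights and the threshold are invented for
# illustration; the paper presents a fuller family of algorithms.
model = {                      # model element -> evidence weight
    "thermal_anomaly": 0.5,
    "spectral_signature": 0.3,
    "radar_return": 0.2,
}

def evaluate_thesis(observations, threshold=0.5):
    """observations: element -> +1 (supports), -1 (refutes), 0 (no data)."""
    score = sum(model[e] * observations.get(e, 0) for e in model)
    return score >= threshold, score

supported, score = evaluate_thesis(
    {"thermal_anomaly": +1, "spectral_signature": +1, "radar_return": -1})
print(supported, score)   # score 0.5 + 0.3 - 0.2 = 0.6 clears the threshold
```

Updating the model instead of asserting true/false, as the paper also discusses, would amount to adjusting the weights (or elements) as evidence accumulates.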

  12. Comparison of two novel approaches to model fibre reinforced concrete

    NARCIS (Netherlands)

    Radtke, F.K.F.; Simone, A.; Sluys, L.J.

    2009-01-01

    We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity f

  13. Modelling the World Wool Market: A Hybrid Approach

    OpenAIRE

    2007-01-01

    We present a model of the world wool market that merges two modelling traditions: the partialequilibrium commodity-specific approach and the computable general-equilibrium approach. The model captures the multistage nature of the wool production system, and the heterogeneous nature of raw wool, processed wool and wool garments. It also captures the important wool producing and consuming regions of the world. We illustrate the utility of the model by estimating the effects of tariff barriers o...

  14. An algebraic approach to the Hubbard model

    CERN Document Server

    de Leeuw, Marius

    2015-01-01

    We study the algebraic structure of an integrable Hubbard-Shastry type lattice model associated with the centrally extended su(2|2) superalgebra. This superalgebra underlies Beisert's AdS/CFT worldsheet R-matrix and Shastry's R-matrix. The considered model specializes to the one-dimensional Hubbard model in a certain limit. We demonstrate that Yangian symmetries of the R-matrix specialize to the Yangian symmetry of the Hubbard model found by Korepin and Uglov. Moreover, we show that the Hubbard model Hamiltonian has an algebraic interpretation as the so-called secret symmetry. We also discuss Yangian symmetries of the A and B models introduced by Frolov and Quinn.

  15. Numerical modelling approach for mine backfill

    Indian Academy of Sciences (India)

    MUHAMMAD ZAKA EMAD

    2017-09-01

    Numerical modelling is broadly used for assessing complex scenarios in underground mines, including mining sequence and blast-induced vibrations from production blasting. Sublevel stoping mining methods with delayed backfill are extensively used to exploit steeply dipping ore bodies by Canadian hard-rock metal mines. Mine backfill is an important constituent of the mining process. Numerical modelling of mine backfill material needs special attention, as the numerical model must behave realistically and in accordance with the site conditions. This paper discusses a numerical modelling strategy for modelling mine backfill material. The modelling strategy is studied using a case study mine from the Canadian mining industry. In the end, results of a numerical model parametric study are shown and discussed.

  16. Regularization of turbulence - a comprehensive modeling approach

    NARCIS (Netherlands)

    Geurts, Bernard J.

    2011-01-01

    Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fl

  17. Measuring equilibrium models: a multivariate approach

    Directory of Open Access Journals (Sweden)

    Nadji RAHMANIA

    2011-04-01

    Full Text Available This paper presents a multivariate methodology for obtaining measures of unobserved macroeconomic variables. The procedure used is the multivariate Hodrick-Prescott filter, which depends on smoothing parameters. The choice of these parameters is crucial. Our approach is based on consistent estimators of these parameters, depending only on the observed data.
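
The smoothing step underlying the procedure above can be illustrated with the univariate Hodrick-Prescott filter; the paper's contribution concerns estimating the smoothing parameters consistently, whereas here lambda is simply fixed and the series is synthetic:

```python
import numpy as np

# Univariate Hodrick-Prescott filter: the trend solves
# min sum (y_t - tau_t)^2 + lam * sum (second differences of tau)^2,
# i.e. tau = (I + lam * D'D)^{-1} y with D the second-difference matrix.
def hp_trend(y, lam=1600.0):
    T = len(y)
    D = np.diff(np.eye(T), n=2, axis=0)        # (T-2) x T second differences
    return np.linalg.solve(np.eye(T) + lam * D.T @ D, y)

t = np.arange(120)
y = 0.05 * t + np.sin(t / 6) + np.random.default_rng(0).normal(0, 0.1, t.size)
trend = hp_trend(y)                            # smooth trend component
print(trend[:3])
```

The multivariate version couples several such problems, with a smoothing parameter per series; choosing those parameters from the data is exactly what the paper's consistent estimators provide.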

  18. A graphical approach to analogue behavioural modelling

    OpenAIRE

    Moser, Vincent; Nussbaum, Pascal; Amann, Hans-Peter; Astier, Luc; Pellandini, Fausto

    2007-01-01

    In order to master the growing complexity of analogue electronic systems, modelling and simulation of analogue hardware at various levels is absolutely necessary. This paper presents an original modelling method based on the graphical description of analogue electronic functional blocks. This method is intended to be automated and integrated into a design framework: specialists create behavioural models of existing functional blocks, that can then be used through high-level selection and spec...

  19. A geometrical approach to structural change modeling

    OpenAIRE

    Stijepic, Denis

    2013-01-01

    We propose a model for studying the dynamics of economic structures. The model is based on qualitative information regarding structural dynamics, in particular, (a) the information on the geometrical properties of trajectories (and their domains) which are studied in structural change theory and (b) the empirical information from stylized facts of structural change. We show that structural change is path-dependent in this model and use this fact to restrict the number of future structural cha...

  20. Consumer preference models: fuzzy theory approach

    Science.gov (United States)

    Turksen, I. B.; Wilson, I. A.

    1993-12-01

    Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
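
A linguistic variable of the kind the fuzzy preference model represents can be sketched with a single membership function. The attribute, linguistic term and breakpoints below are hypothetical:

```python
# Representing the linguistic term "inexpensive" for a price attribute as a
# fuzzy set, in the spirit of the fuzzy preference model above. The
# breakpoints (2, 3, 6) and the example price are invented for illustration.
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which a $4.00 price counts as "inexpensive" for this consumer
print(triangular(4.0, a=2.0, b=3.0, c=6.0))   # 2/3
```

An individual-level model would combine such membership degrees across attributes, in parallel with the conjoint part-worths mentioned in the abstract.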

  1. A New Approach for Magneto-Static Hysteresis Behavioral Modeling

    DEFF Research Database (Denmark)

    Astorino, Antonio; Swaminathan, Madhavan; Antonini, Giulio

    2016-01-01

    In this paper, a new behavioral modeling approach for magneto-static hysteresis is presented. Many accurate models are currently available, but none of them seems to be able to correctly reproduce all the possible B-H paths with low computational cost. By contrast, the approach proposed...... achieved when comparing the measured and simulated results....

  2. Nucleon Spin Content in a Relativistic Quark Potential Model Approach

    Institute of Scientific and Technical Information of China (English)

    DONG YuBing; FENG QingGuo

    2002-01-01

    Based on a relativistic quark model approach with an effective potential U(r) = (ac/2)(1 + γ0)r2, the spin content of the nucleon is investigated. Pseudo-scalar interaction between quarks and Goldstone bosons is employed to calculate the couplings between the Goldstone bosons and the nucleon. Different approaches to deal with the center of mass correction in the relativistic quark potential model approach are discussed.

  3. A simple approach to modeling ductile failure.

    Energy Technology Data Exchange (ETDEWEB)

    Wellman, Gerald William

    2012-06-01

    Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.

  4. An approach for activity-based DEVS model specification

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2016-01-01

    activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...

  5. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  6. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  7. Random matrix model approach to chiral symmetry

    CERN Document Server

    Verbaarschot, J J M

    1996-01-01

    We review the application of random matrix theory (RMT) to chiral symmetry in QCD. Starting from the general philosophy of RMT we introduce a chiral random matrix model with the global symmetries of QCD. Exact results are obtained for universal properties of the Dirac spectrum: i) finite volume corrections to valence quark mass dependence of the chiral condensate, and ii) microscopic fluctuations of Dirac spectra. Comparisons with lattice QCD simulations are made. Most notably, the variance of the number of levels in an interval containing $n$ levels on average is suppressed by a factor $(\log n)/\pi^2 n$. An extension of the random matrix model to nonzero temperatures and chemical potential provides us with a schematic model of the chiral phase transition. In particular, this elucidates the nature of the quenched approximation at nonzero chemical potential.
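
The chiral (block off-diagonal) structure of such a random matrix model is easy to exhibit numerically. This checks only the basic ±λ pairing of the Dirac-like spectrum, not the universality results of the review; matrix size and ensemble are illustrative:

```python
import numpy as np

# Chiral random matrix sketch: with W an n x n complex Gaussian block, the
# Hermitian 'Dirac operator' D = [[0, W], [W^dagger, 0]] has eigenvalues in
# +/-lambda pairs, the chiral symmetry exploited by the model above.
rng = np.random.default_rng(0)
n = 50
W = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
D = np.block([[np.zeros((n, n)), W], [W.conj().T, np.zeros((n, n))]])

ev = np.sort(np.linalg.eigvalsh(D))
print(np.allclose(ev, -ev[::-1]))   # True: spectrum symmetric about zero
```

The universal quantities in the abstract (microscopic spectral fluctuations, number variance) are statistics of exactly such spectra, averaged over the ensemble.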

  8. Machine Learning Approaches for Modeling Spammer Behavior

    CERN Document Server

    Islam, Md Saiful; Islam, Md Rafiqul

    2010-01-01

    Spam is commonly known as unsolicited or unwanted email messages in the Internet causing potential threat to Internet security. Users spend a valuable amount of time deleting spam emails. More importantly, ever increasing spam emails occupy server storage space and consume network bandwidth. Keyword-based spam email filtering strategies will eventually be less successful to model spammer behavior as the spammer constantly changes their tricks to circumvent these filters. The evasive tactics that the spammer uses are patterns and these patterns can be modeled to combat spam. This paper investigates the possibilities of modeling spammer behavioral patterns by well-known classification algorithms such as the Naïve Bayesian classifier (Naïve Bayes), Decision Tree Induction (DTI) and Support Vector Machines (SVMs). Preliminary experimental results demonstrate a promising detection rate of around 92%, a considerable performance enhancement compared to similar spammer behavior modeling research.
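
One of the classifiers named above, Naive Bayes, can be sketched from scratch on a toy corpus. The six training messages are invented; the paper's experiments use real email data and also evaluate DTI and SVMs:

```python
import math
from collections import Counter

# Toy multinomial Naive Bayes spam classifier with Laplace smoothing.
# The tiny corpus is invented for illustration only.
train = [
    ("win cash prize now", 1), ("cheap pills win money", 1),
    ("free prize claim now", 1), ("meeting agenda attached", 0),
    ("lunch tomorrow with team", 0), ("project status report", 0),
]
counts = {0: Counter(), 1: Counter()}
priors = {0: 0, 1: 0}
for text, label in train:
    counts[label].update(text.split())
    priors[label] += 1

vocab = set(counts[0]) | set(counts[1])

def predict(text):
    scores = {}
    for c in (0, 1):
        total = sum(counts[c].values())
        score = math.log(priors[c] / len(train))
        for w in text.split():                      # Laplace-smoothed likelihoods
            score += math.log((counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)              # 1 = spam, 0 = ham

print(predict("claim your cash prize"))   # 1
print(predict("team meeting report"))     # 0
```

The behavioral-pattern argument in the abstract is that features like these word patterns shift more slowly than individual keywords, so a statistical model generalizes where a keyword blocklist fails.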

  9. Infectious disease modeling a hybrid system approach

    CERN Document Server

    Liu, Xinzhi

    2017-01-01

    This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.

  10. Second Quantization Approach to Stochastic Epidemic Models

    CERN Document Server

    Mondaini, Leonardo

    2015-01-01

    We show how the standard field theoretical language based on creation and annihilation operators may be used for a straightforward derivation of closed master equations describing the population dynamics of multivariate stochastic epidemic models. In order to do that, we introduce an SIR-inspired stochastic model for hepatitis C virus epidemic, from which we obtain the time evolution of the mean number of susceptible, infected, recovered and chronically infected individuals in a population whose total size is allowed to change.
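
The mean population dynamics that such master equations yield can be sketched with a plain SIR mean-field integration. The rates and initial conditions below are illustrative, and this omits the chronically infected compartment of the paper's hepatitis C model:

```python
# Euler-integrated mean-field SIR sketch: the time evolution of the mean
# numbers of susceptible (S), infected (I) and recovered (R) individuals.
# beta, gamma and the initial state are illustrative values only.
beta, gamma = 0.3, 0.1          # infection and recovery rates (per day)
S, I, R = 990.0, 10.0, 0.0
N = S + I + R                   # total population (constant here)
dt = 0.1
for _ in range(int(200 / dt)):  # 200 days
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(round(S), round(I), round(R))   # most of the population ends recovered
```

The second-quantization formalism in the paper recovers exactly such equations for the means, while also giving access to the full stochastic description; allowing the total size to change would add birth/death terms.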

  11. Dispersion modeling approaches for near road

    Science.gov (United States)

    Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of applications. For example, such models can be useful for evaluating the mitigation potential of roadside barriers in reducing near-road exposures and their associated adverse health effects. Two databases, a tracer field study and a wind tunnel study, provide measurements used in the development and/or validation of algorithms to simulate dispersion in the presence of noise barriers. The tracer field study was performed in Idaho Falls, ID, USA with a 6-m noise barrier and a finite line source in a variety of atmospheric conditions. The second study was performed in the meteorological wind tunnel at the US EPA and simulated line sources at different distances from a model noise barrier to capture the effect on emissions from individual lanes of traffic. In both cases, velocity and concentration measurements characterized the effect of the barrier on dispersion.This paper presents comparisons with the two datasets of the barrier algorithms implemented in two different dispersion models: US EPA’s R-LINE (a research dispersion modelling tool under development by the US EPA’s Office of Research and Development) and CERC’s ADMS model (ADMS-Urban). In R-LINE the physical features reveal

  12. Flipped models in Trinification: A Comprehensive Approach

    CERN Document Server

    Rodríguez, Oscar; Ponce, William A; Rojas, Eduardo

    2016-01-01

    By considering the 3-3-1 and the left-right symmetric models as low energy effective theories of the trinification group, alternative versions of these models are found. The new neutral gauge bosons in the universal 3-3-1 model and its flipped versions are considered; the left-right symmetric model and its two flipped variants are also studied. For these models, the couplings of the $Z'$ bosons to the standard model fermions are reported. The explicit form of the null space of the vector boson mass matrix for an arbitrary Higgs tensor and gauge group is also presented. In the general framework of the trinification gauge group, and by using the LHC experimental results and EW precision data, limits on the $Z'$ mass and the mixing angle between $Z$ and the new gauge bosons $Z'$ are imposed. The general results call for very small mixing angles, in the range of $10^{-3}$ radians, and $M_{Z'}$ > 2.5 TeV.

  13. Lightweight approach to model traceability in a CASE tool

    Science.gov (United States)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    The term "model-driven" is by no means a new buzzword within the system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers to develop new and more effective approaches to system development. With this increasing complexity, model traceability and model management as a whole become indispensable activities of the model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.

  14. Approaching models of nursing from a postmodernist perspective.

    Science.gov (United States)

    Lister, P

    1991-02-01

    This paper explores some questions about the use of models of nursing. These questions make various assumptions about the nature of models of nursing, in general and in particular. Underlying these assumptions are various philosophical positions which are explored through an introduction to postmodernist approaches in philosophical criticism. To illustrate these approaches, a critique of the Roper et al. model is developed, and more general attitudes towards models of nursing are examined. It is suggested that postmodernism offers a challenge to many of the assumptions implicit in models of nursing, and that a greater awareness of these assumptions should lead to nursing care being better informed where such models are in use.

  15. Manufacturing Excellence Approach to Business Performance Model

    Directory of Open Access Journals (Sweden)

    Jesus Cruz Alvarez

    2015-03-01

    Full Text Available Six Sigma, lean manufacturing, total quality management, quality control, and quality function deployment are the fundamental set of tools to enhance productivity in organizations. Some research outlines the benefit of each tool in the particular context of a firm's productivity, but not in the broader context of a firm's competitiveness, which is achieved through business performance. The aim of this theoretical research paper is to contribute to this end and propose a manufacturing excellence approach that links productivity tools to the broader context of business performance.

  16. A Bayesian Model Committee Approach to Forecasting Global Solar Radiation

    CERN Document Server

    Lauret, Philippe; Muselli, Marc; David, Mathieu; Diagne, Hadja; Voyant, Cyril

    2012-01-01

    This paper proposes a rather new modelling approach in the realm of solar radiation forecasting. In this work, two forecasting models, an Autoregressive Moving Average (ARMA) model and a Neural Network (NN) model, are combined to form a model committee. Bayesian inference is used to assign a probability to each model in the committee; each model's predictions are then weighted by its respective probability. The models are fitted to one year of hourly Global Horizontal Irradiance (GHI) measurements. Another year (the test set) is used for making genuine one-hour-ahead (h+1) out-of-sample forecast comparisons. The proposed approach is benchmarked against the persistence model. Initial results show an improvement brought by this approach.
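
    The committee idea above (weight each member's forecast by a probability reflecting its past performance) can be sketched as follows. The weighting scheme here (softmax of negative MSE) is an illustrative stand-in, not necessarily the paper's exact Bayesian posterior:

    ```python
    import math

    def committee_weights(mse_per_model):
        """Assign each model a probability from its historical fit.

        Illustrative scheme (assumption, not the paper's exact one):
        weight proportional to exp(-MSE), normalised to sum to 1.
        """
        raw = [math.exp(-m) for m in mse_per_model]
        total = sum(raw)
        return [r / total for r in raw]

    def committee_forecast(forecasts, weights):
        """Probability-weighted combination of the members' forecasts."""
        return sum(f * w for f, w in zip(forecasts, weights))

    # Hypothetical example: ARMA and NN members with different historical errors.
    w = committee_weights([0.5, 1.0])      # ARMA fits better -> larger weight
    ghi = committee_forecast([420.0, 450.0], w)   # combined GHI forecast (W/m^2)
    ```

    The combined forecast always lies between the members' forecasts, pulled toward the better-performing model.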

  17. MDA based-approach for UML Models Complete Comparison

    CERN Document Server

    Chaouni, Samia Benabdellah; Mouline, Salma

    2011-01-01

    If a modeling task is distributed, it will frequently be necessary to integrate models developed by different team members. Problems occur in the model integration step and particularly in the comparison phase of the integration. This issue has been discussed in several domains and for various kinds of models. However, previous approaches have not correctly handled semantic comparison. In the current paper, we provide an MDA-based approach for model comparison which aims at comparing UML models. We develop a hybrid approach which takes syntactic, semantic and structural comparison aspects into account. For this purpose, we use the domain ontology as well as other resources such as dictionaries. We propose a decision support system which permits the user to validate (or not) the correspondences extracted in the comparison phase. For the implementation, we propose an extension of the generic correspondence metamodel AMW in order to transform UML models into the correspondence model.

  18. A consortium approach to glass furnace modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Chang, S.-L.; Golchert, B.; Petrick, M.

    1999-04-20

    Using computational fluid dynamics to model a glass furnace is a difficult task for any one glass company, laboratory, or university to accomplish. The task of building a computational model of the furnace requires knowledge and experience in modeling two dissimilar regimes (the combustion space and the liquid glass bath), along with the skill necessary to couple these two regimes. Also, a detailed set of experimental data is needed in order to evaluate the output of the code to ensure that the code is providing proper results. Since all these diverse skills are not present in any one research institution, a consortium was formed between Argonne National Laboratory, Purdue University, Mississippi State University, and five glass companies in order to marshal these skills into one three-year program. The objective of this program is to develop a fully coupled, validated simulation of a glass melting furnace that may be used by industry to optimize the performance of existing furnaces.

  19. Mixture modeling approach to flow cytometry data.

    Science.gov (United States)

    Boedigheimer, Michael J; Ferbas, John

    2008-05-01

    Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.
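
    The non-gating idea above replaces operator-drawn gates with a statistical mixture fit to the event distribution. A minimal sketch of that ingredient, assuming a one-dimensional two-component Gaussian mixture fitted by expectation-maximization (real cytometry data are multivariate and need more components):

    ```python
    import math
    import random

    def em_gmm(data, iters=50):
        """Fit a two-component 1-D Gaussian mixture by EM (illustrative sketch)."""
        mu = [min(data), max(data)]   # crude initialisation at the extremes
        var = [1.0, 1.0]
        pi = [0.5, 0.5]
        for _ in range(iters):
            # E-step: responsibility of each component for each event
            resp = []
            for x in data:
                p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                     * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
                s = sum(p)
                resp.append([pk / s for pk in p])
            # M-step: re-estimate means, variances and mixing proportions
            for k in range(2):
                nk = sum(r[k] for r in resp)
                mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
                var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                                 for r, x in zip(resp, data)) / nk, 1e-6)
                pi[k] = nk / len(data)
        return mu, var, pi

    # Synthetic "fluorescence" data: two well-separated cell populations.
    random.seed(0)
    data = ([random.gauss(0.0, 1.0) for _ in range(300)]
            + [random.gauss(6.0, 1.0) for _ in range(300)])
    mu, var, pi = em_gmm(data)
    ```

    Each event then gets a soft population membership (the responsibilities) instead of a hard in/out gate, which removes the analyst-to-analyst variability the abstract describes.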

  20. BUSINESS MODEL IN ELECTRICITY INDUSTRY USING BUSINESS MODEL CANVAS APPROACH; THE CASE OF PT. XYZ

    National Research Council Canada - National Science Library

    Wicaksono, Achmad Arief; Syarief, Rizal; Suparno, Ono

    2017-01-01

    .... This study aims to identify company's business model using Business Model Canvas approach, formulate business development strategy alternatives, and determine the prioritized business development...

  1. Dispersion modeling approaches for near road

    Science.gov (United States)

    Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of app...

  2. and Models: A Self-Similar Approach

    Directory of Open Access Journals (Sweden)

    José Antonio Belinchón

    2013-01-01

    equations (FEs) admit self-similar solutions. The methods employed allow us to obtain general results that are valid not only for the FRW metric, but also for all the Bianchi types as well as for the Kantowski-Sachs model (under the self-similarity hypothesis and the power-law hypothesis for the scale factors).

  3. Nonperturbative approach to the modified statistical model

    Energy Technology Data Exchange (ETDEWEB)

    Magdy, M.A.; Bekmezci, A.; Sever, R. [Middle East Technical Univ., Ankara (Turkey)

    1993-12-01

    The modified form of the statistical model is used without making any perturbation. The mass spectra of the lowest S, P and D levels of the (Q{bar Q}) and the non-self-conjugate (Q{bar q}) mesons are studied with the Song-Lin potential. The authors' results are in good agreement with the experimental and theoretical findings.

  4. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01

    Mandana Vaziri, and Frank Tip. 2007. “Finding Bugs Efficiently with a SAT Solver.” In European Software Engineering Conference and the ACM SIGSOFT...Van Gorp. 2005. “A Taxonomy of Model Transformation.” Electronic Notes in Theoretical Computer Science 152: 125–142. Miyazawa, Alvaro, and Ana

  5. A moving approach for the Vector Hysteron Model

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)

    2016-04-01

    A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. Using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, yielding a real improvement with respect to the previous approach.

  6. Integration models: multicultural and liberal approaches confronted

    Science.gov (United States)

    Janicki, Wojciech

    2012-01-01

    European societies have been shaped by their Christian past, the upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to the individual rights growing out of the liberal tradition and to a reinforced concept of integration of immigrants into host societies. This paper discusses the rise of identity politics in the context of both individual rights and the integration of European societies.

  7. ISM Approach to Model Offshore Outsourcing Risks

    Directory of Open Access Journals (Sweden)

    Sunand Kumar

    2014-07-01

    Full Text Available In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks which affect the performance of offshore outsourcing. To this effect, the authors have identified various risks through an extant review of the literature. From this information, an integrated model using interpretive structural modelling (ISM) for risks affecting offshore outsourcing is developed and the structural relationships between these risks are modeled. Further, MICMAC analysis is done to analyze the driving power and dependence of the risks, which shall help managers identify and classify important criteria and reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.
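
    The ISM/MICMAC machinery mentioned above reduces to two matrix operations: a transitive closure of the direct-influence matrix (the reachability matrix), then row and column sums to get each risk's driving power and dependence. A minimal sketch over a hypothetical four-risk example:

    ```python
    def reachability(adj):
        """Transitive closure (Warshall's algorithm) of a binary
        direct-influence matrix, including self-reachability, as used in ISM."""
        n = len(adj)
        r = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    r[i][j] = r[i][j] or (r[i][k] and r[k][j])
        return [[int(v) for v in row] for row in r]

    def micmac(reach):
        """MICMAC: driving power = row sums, dependence = column sums."""
        driving = [sum(row) for row in reach]
        dependence = [sum(col) for col in zip(*reach)]
        return driving, dependence

    # Hypothetical direct influences among 4 risks: 0 -> 1 -> 2, and 0 -> 3.
    adj = [[0, 1, 0, 1],
           [0, 0, 1, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
    reach = reachability(adj)
    driving, dependence = micmac(reach)
    ```

    Risk 0 (high driving power, low dependence) would land in the "driver" quadrant of the MICMAC plot, mirroring the paper's finding for political and cultural risks.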

  8. Quantum Machine and SR Approach: a Unified Model

    CERN Document Server

    Garola, C; Sozzo, S; Garola, Claudio; Pykacz, Jaroslav; Sozzo, Sandro

    2005-01-01

    The Geneva-Brussels approach to quantum mechanics (QM) and the semantic realism (SR) nonstandard interpretation of QM exhibit some common features and some deep conceptual differences. We discuss in this paper two elementary models provided in the two approaches as intuitive supports to general reasonings and as a proof of consistency of general assumptions, and show that Aerts' quantum machine can be embodied into a macroscopic version of the microscopic SR model, overcoming the seeming incompatibility between the two models. This result provides some hints for the construction of a unified perspective in which the two approaches can be properly placed.

  9. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  10. A modular approach to numerical human body modeling

    NARCIS (Netherlands)

    Forbes, P.A.; Griotto, G.; Rooij, L. van

    2007-01-01

    The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body mod

  11. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    NARCIS (Netherlands)

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both re

  13. A market model for stochastic smile: a conditional density approach

    NARCIS (Netherlands)

    Zilber, A.

    2005-01-01

    The purpose of this paper is to introduce a new approach that allows us to construct no-arbitrage market models for implied volatility surfaces (in other words, stochastic smile models). That is to say, the idea presented here allows us to model prices of liquidly traded vanilla options as separate

  14. Thermoplasmonics modeling: A Green's function approach

    Science.gov (United States)

    Baffou, Guillaume; Quidant, Romain; Girard, Christian

    2010-10-01

    We extend the discrete dipole approximation (DDA) and the Green’s dyadic tensor (GDT) methods—previously dedicated to all-optical simulations—to investigate the thermodynamics of illuminated plasmonic nanostructures. This extension is based on the use of the thermal Green’s function and an original algorithm that we named Laplace matrix inversion. It allows for the computation of the steady-state temperature distribution throughout plasmonic systems. This hybrid photothermal numerical method is suited to investigate arbitrarily complex structures. It can take into account the presence of a dielectric planar substrate and is simple to implement in any DDA or GDT code. Using this numerical framework, different applications are discussed such as thermal collective effects in nanoparticle assemblies, the influence of a substrate on the temperature distribution and the heat generation in a plasmonic nanoantenna. This numerical approach appears particularly suited for new applications in physics, chemistry, and biology such as plasmon-induced nanochemistry and catalysis, nanofluidics, photothermal cancer therapy, or phase-transition control at the nanoscale.
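
    The thermal Green's function underlying the method above is, in a homogeneous medium, G = 1/(4·pi·kappa·|r − r'|), so the steady-state temperature rise is a superposition over heat sources. A bare-bones sketch of that superposition only (the paper's actual method handles extended structures and substrates via the Laplace matrix inversion, which is not reproduced here; the kappa value is an assumed water-like conductivity):

    ```python
    import math

    def delta_T(r, sources, kappa=0.6):
        """Steady-state temperature rise (K) at point r from point heat
        sources, via the free-space thermal Green's function
        G = 1 / (4 * pi * kappa * |r - r'|).

        r       : (x, y, z) observation point in metres
        sources : list of (x, y, z, q) with dissipated power q in watts
        kappa   : thermal conductivity in W/(m K) (0.6 ~ water, assumed)
        """
        total = 0.0
        for (x, y, z, q) in sources:
            d = math.dist(r, (x, y, z))   # distance to the source
            total += q / (4 * math.pi * kappa * d)
        return total

    # One nanoparticle dissipating 1 microwatt, probed 1 micron away.
    t1 = delta_T((1e-6, 0.0, 0.0), [(0.0, 0.0, 0.0, 1e-6)])
    ```

    Because the Green's function is linear, collective heating in a nanoparticle assembly is just the sum of single-particle contributions, which is the "thermal collective effect" the abstract refers to.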

  15. Agribusiness model approach to territorial food development

    Directory of Open Access Journals (Sweden)

    Murcia Hector Horacio

    2011-04-01

    Full Text Available

    Several research efforts have linked the academic program of Agricultural Business Management at the University De La Salle (Bogota D.C.) to the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered as a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and productivity of agricultural items. It takes into account the guidelines of the United Nations “Millennium Development Goals” and the concept of sustainable food and agriculture development, including food security and nutrition in an integrated interdisciplinary context, with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the “new rurality”.

  16. Coupling approaches used in atmospheric entry models

    Science.gov (United States)

    Gritsevich, M. I.

    2012-09-01

    While a planet orbits the Sun, it is subject to impact by smaller objects, ranging from tiny dust particles and space debris to much larger asteroids and comets. Such collisions have taken place frequently over geological time and played an important role in the evolution of planets and the development of life on the Earth. Though the search for near-Earth objects addresses one of the main points of the Asteroid and Comet Hazard, one should not underestimate the useful information to be gleaned from smaller atmospheric encounters, known as meteors or fireballs. Not only do these events help determine the linkages between meteorites and their parent bodies; due to their relative regularity they provide a good statistical basis for analysis. For successful cases with found meteorites, the detailed atmospheric path record is an excellent tool to test and improve existing entry models, assuring the robustness of their implementation. There are many more important scientific questions meteoroids help us to answer, among them: Where do these objects come from, what are their origins, physical properties and chemical composition? What are the shapes and bulk densities of the space objects which fully ablate in an atmosphere and do not reach the planetary surface? Which values are directly measured and which are initially assumed as input to various models? How can both fragmentation and ablation effects be coupled in the model, taking the real size distribution of fragments into account? How can the recovery of recently fallen meteorites be specified and sped up, without letting weathering affect the samples too much? How big is the pre-atmospheric projectile to terminal body ratio in terms of their mass/volume? Which exact parameters besides initial mass define this ratio? More generally, how does an entering object affect the Earth's atmosphere and (if applicable) the Earth's surface? How can these impact consequences be predicted based on atmospheric trajectory data? How to describe atmospheric entry

  17. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a

  18. Bayesian Approach to Neuro-Rough Models for Modelling HIV

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes a new neuro-rough model for modelling the risk of HIV from demographic data. The model is formulated using a Bayesian framework and trained using the Markov Chain Monte Carlo method and the Metropolis criterion. When the model was tested to estimate the risk of HIV infection given the demographic data, it was found to give an accuracy of 62%, as opposed to 58% obtained from a Bayesian formulated rough set model trained using the Markov chain Monte Carlo method and 62% obtained from a Bayesian formulated multi-layered perceptron (MLP) model trained using hybrid Monte Carlo. The proposed model is able to combine the accuracy of the Bayesian MLP model and the transparency of the Bayesian rough set model.

  19. Development of a computationally efficient urban modeling approach

    DEFF Research Database (Denmark)

    Wolfs, Vincent; Murla, Damian; Ntegeka, Victor;

    2016-01-01

    This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high resolution (0.5-meter) digital terrain model in GIS. To illustrate the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is of the order of 10^6 times shorter than that of the original highly

  20. Implicit moral evaluations: A multinomial modeling approach.

    Science.gov (United States)

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
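
    The decomposition above can be illustrated with a generic multinomial processing-tree equation. The tree below is a hedged sketch of how such parameters combine, not the paper's exact model: with probability I the target is judged intentionally; failing that, with probability U the prime drives an unintentional judgment; otherwise a response bias B toward "wrong" applies.

    ```python
    def p_wrong(I, U, B, target_wrong, prime_wrong):
        """Predicted probability of a 'morally wrong' response in an
        illustrative multinomial processing tree (assumed parameterisation):

        I            : prob. of intentional judgment of the target
        U            : prob. the prime drives the response, given I fails
        B            : bias toward responding 'wrong' when both fail
        target_wrong : 1 if the target action is wrong, else 0
        prime_wrong  : 1 if the prime is a moral transgression, else 0
        """
        return (I * target_wrong
                + (1 - I) * U * prime_wrong
                + (1 - I) * (1 - U) * B)
    ```

    Fitting such a model means choosing I, U and B so the predicted probabilities match observed response frequencies across prime/target conditions; a speeded deadline would then show up as a drop in the fitted I, as in Experiment 1.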

  1. Continuous Molecular Fields Approach Applied to Structure-Activity Modeling

    CERN Document Server

    Baskin, Igor I

    2013-01-01

    The Method of Continuous Molecular Fields is a universal approach to predict various properties of chemical compounds, in which molecules are represented by means of continuous fields (such as electrostatic, steric, electron density functions, etc). The essence of the proposed approach consists in performing statistical analysis of functional molecular data by means of joint application of kernel machine learning methods and special kernels which compare molecules by computing overlap integrals of their molecular fields. This approach is an alternative to traditional methods of building 3D structure-activity and structure-property models based on the use of fixed sets of molecular descriptors. The methodology of the approach is described in this chapter, followed by its application to building regression 3D-QSAR models and conducting virtual screening based on one-class classification models. The main directions of the further development of this approach are outlined at the end of the chapter.

  2. A forward modeling approach for interpreting impeller flow logs.

    Science.gov (United States)

    Parker, Alison H; West, L Jared; Odling, Noelle E; Bown, Richard T

    2010-01-01

    A rigorous and practical approach for interpretation of impeller flow log data to determine vertical variations in hydraulic conductivity is presented and applied to two well logs from a Chalk aquifer in England. Impeller flow logging involves measuring vertical flow speed in a pumped well and using changes in flow with depth to infer the locations and magnitudes of inflows into the well. However, the measured flow logs are typically noisy, which leads to spurious hydraulic conductivity values where simplistic interpretation approaches are applied. In this study, a new method for interpretation is presented, which first defines a series of physical models for hydraulic conductivity variation with depth and then fits the models to the data, using a regression technique. Some of the models will be rejected as they are physically unrealistic. The best model is then selected from the remaining models using a maximum likelihood approach. This balances model complexity against fit, for example, using Akaike's Information Criterion.
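
    The model-selection step above (fit several physical models, reject unrealistic ones, pick the best by maximum likelihood, e.g. via AIC) can be sketched on a toy flow log. The two candidate models here, constant flow versus a linear trend with depth, are illustrative stand-ins for the paper's hydraulic-conductivity models:

    ```python
    import math

    def ols_line(x, y):
        """Closed-form least-squares fit y = a + b*x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        return my - b * mx, b

    def aic(rss, n, k):
        """Akaike's Information Criterion for Gaussian residuals:
        n*ln(RSS/n) + 2k, balancing fit against model complexity."""
        return n * math.log(rss / n) + 2 * k

    # Hypothetical impeller log: flow speed vs depth with a genuine trend
    # plus small alternating "noise".
    depth = [float(i) for i in range(10)]
    flow = [2.0 + 0.5 * d + (0.05 if d % 2 == 0 else -0.05) for d in depth]

    # Candidate 1: constant flow (k = 1 parameter)
    mean = sum(flow) / len(flow)
    rss1 = sum((f - mean) ** 2 for f in flow)
    # Candidate 2: linear trend (k = 2 parameters)
    a, b = ols_line(depth, flow)
    rss2 = sum((f - (a + b * d)) ** 2 for f, d in zip(flow, depth))

    best = "linear" if aic(rss2, len(flow), 2) < aic(rss1, len(flow), 1) else "constant"
    ```

    The extra parameter of the linear model is only accepted because it reduces the residual sum of squares by far more than the 2k complexity penalty, which is exactly how noisy logs avoid producing spurious conductivity structure.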

  3. An Adaptive Approach to Schema Classification for Data Warehouse Modeling

    Institute of Scientific and Technical Information of China (English)

    Hong-Ding Wang; Yun-Hai Tong; Shao-Hua Tan; Shi-Wei Tang; Dong-Qing Yang; Guo-Hui Sun

    2007-01-01

    Data warehouse (DW) modeling is a complicated task, involving both knowledge of business processes and familiarity with operational information systems structure and behavior. Existing DW modeling techniques suffer from the following major drawbacks: the data-driven approach requires high levels of expertise and neglects the requirements of end users, while the demand-driven approach lacks enterprise-wide vision and disregards existing models of the underlying operational systems. In order to make up for those shortcomings, a method of classification of schema elements for DW modeling is proposed in this paper. We first put forward the vector space models for subjects and schema elements, then present an adaptive approach with self-tuning theory to construct context vectors of subjects, and finally classify the source schema elements into different subjects of the DW automatically.
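
    The vector-space classification step above can be sketched as follows: each subject has a context vector, each source schema element a term-weight vector, and the element is assigned to the most similar subject by cosine similarity. The subjects and weights are hypothetical, and the paper's self-tuning of the context vectors is omitted:

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity of two equal-length vectors."""
        num = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return num / (nu * nv)

    def classify(element_vec, subject_vecs):
        """Assign a schema element to the subject with the most
        similar context vector (vector-space model sketch)."""
        return max(subject_vecs, key=lambda s: cosine(element_vec, subject_vecs[s]))

    # Hypothetical context vectors over a 3-term vocabulary.
    subjects = {"sales": [1.0, 0.2, 0.0], "inventory": [0.0, 0.3, 1.0]}
    elem = [0.9, 0.1, 0.1]   # term weights for some source column
    label = classify(elem, subjects)
    ```

    In the adaptive version described in the abstract, misclassifications would feed back into the subjects' context vectors instead of leaving them fixed.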

  4. A Networks Approach to Modeling Enzymatic Reactions.

    Science.gov (United States)

    Imhof, P

    2016-01-01

    Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved and the complex, chemical, and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes.
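
    Identifying the energetically most favorable pathway in such a transition network is a shortest-path problem on a weighted graph. A minimal sketch using Dijkstra's algorithm, with hypothetical states and barrier heights (the paper's networks are built from actual intermediate structures):

    ```python
    import heapq

    def min_barrier_path(graph, start, goal):
        """Dijkstra search on a transition network: nodes are intermediate
        states, edge weights are positive barrier heights; returns the path
        minimising the summed barriers, plus its total cost."""
        dist = {start: 0.0}
        prev = {}
        heap = [(0.0, start)]
        seen = set()
        while heap:
            d, u = heapq.heappop(heap)
            if u in seen:
                continue
            seen.add(u)
            if u == goal:
                break
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(heap, (nd, v))
        # Walk back from the goal to recover the path.
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1], dist[goal]

    # Hypothetical network: reactant R, intermediates I1/I2, product P,
    # edge weights = barrier heights in kcal/mol (illustrative numbers).
    net = {"R": [("I1", 12.0), ("I2", 8.0)],
           "I1": [("P", 3.0)],
           "I2": [("P", 9.0)]}
    path, cost = min_barrier_path(net, "R", "P")
    ```

    Note the greedy first step (R to I2, barrier 8) is not on the best route; the graph view is exactly what exposes such competing mechanisms.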

  5. Genetic Algorithm Approaches to Prebiotic Chemistry Modeling

    Science.gov (United States)

    Lohn, Jason; Colombano, Silvano

    1997-01-01

    We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
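
    The search described above can be sketched with a standard genetic algorithm over bit strings, where each bit marks whether a reversible reaction is included in the set. The fitness function here is a toy stand-in (agreement with a target mask); the paper's fitness would come from integrating the polymer kinetics:

    ```python
    import random

    random.seed(1)

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # stand-in for "desired dynamics"

    def fitness(reaction_set):
        """Toy objective: agreement of the chosen reaction subset with a
        target mask (assumption; a real run would score simulated dynamics)."""
        return sum(int(a == b) for a, b in zip(reaction_set, TARGET))

    def evolve(pop_size=40, generations=60, p_mut=0.05):
        n = len(TARGET)
        pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]          # elitist truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n)        # one-point crossover
                child = a[:cut] + b[cut:]
                # bit-flip mutation
                child = [g ^ 1 if random.random() < p_mut else g for g in child]
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    ```

    As in the paper, the interesting output is not the score itself but the evolved reaction set, which can then be analyzed for the mechanism that produces the desired dynamics.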

  6. Modeling Approaches for Describing Microbial Population Heterogeneity

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita

    Although microbial populations are typically described by averaged properties, individual cells present a certain degree of variability. Indeed, initially clonal microbial populations develop into heterogeneous populations, even when growing in a homogeneous environment. A heterogeneous microbial … to predict distributions of certain population properties including particle size, mass or volume, and molecular weight. Similarly, PBM allow for a mathematical description of distributed cell properties within microbial populations. Cell total protein content distributions (a measure of cell mass) have been … ethanol and biomass throughout the reactor. This work has proven that the integration of CFD and population balance models, for describing the growth of a microbial population in a spatially heterogeneous reactor, is feasible, and that valuable insight on the interplay between flow and the dynamics …

  7. Hamiltonian approach to hybrid plasma models

    CERN Document Server

    Tronci, Cesare

    2010-01-01

    The Hamiltonian structures of several hybrid kinetic-fluid models are identified explicitly, upon considering collisionless Vlasov dynamics for the hot particles interacting with a bulk fluid. After presenting different pressure-coupling schemes for an ordinary fluid interacting with a hot gas, the paper extends the treatment to account for a fluid plasma interacting with an energetic ion species. Both current-coupling and pressure-coupling MHD schemes are treated extensively. In particular, pressure-coupling schemes are shown to require a transport-like term in the Vlasov kinetic equation, in order for the Hamiltonian structure to be preserved. The last part of the paper is devoted to studying the more general case of an energetic ion species interacting with a neutralizing electron background (hybrid Hall-MHD). Circulation laws and Casimir functionals are presented explicitly in each case.

  8. Modeling of phase equilibria with CPA using the homomorph approach

    DEFF Research Database (Denmark)

    Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios

    2011-01-01

    For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...

  9. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  11. Modular Modelling and Simulation Approach - Applied to Refrigeration Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær; Stoustrup, Jakob

    2008-01-01

    This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...

  12. Pattern-based approach for logical traffic isolation forensic modelling

    CSIR Research Space (South Africa)

    Dlamini, I

    2009-08-01

Full Text Available The use of design patterns usually changes the approach to software design and makes software development relatively easy. This paper extends work on a forensic model for Logical Traffic Isolation (LTI) based on Differentiated Services (Diff...

  13. A semantic-web approach for modeling computing infrastructures

    NARCIS (Netherlands)

    M. Ghijsen; J. van der Ham; P. Grosso; C. Dumitru; H. Zhu; Z. Zhao; C. de Laat

    2013-01-01

    This paper describes our approach to modeling computing infrastructures. Our main contribution is the Infrastructure and Network Description Language (INDL) ontology. The aim of INDL is to provide technology independent descriptions of computing infrastructures, including the physical resources as w

  14. Bayesian approach to decompression sickness model parameter estimation.

    Science.gov (United States)

    Howle, L E; Weber, P W; Nichols, J M

    2017-03-01

    We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
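The contrast the abstract draws can be illustrated on a deliberately simple stand-in model, a single binomial DCS probability, rather than the paper's actual decompression model; the counts are invented. The MLE is one number, while the Bayesian route yields a whole posterior and a credible interval:

```python
# Illustrative contrast (not the paper's model): estimating a DCS probability
# p from k events in n exposures.
import math

k, n = 7, 100  # hypothetical: 7 DCS cases in 100 exposures

# Maximum likelihood: a single best value.
p_mle = k / n

# Bayesian: flat prior, posterior evaluated on a grid (proportional to the
# binomial likelihood), then normalized.
grid = [i / 1000 for i in range(1, 1000)]
post = [p ** k * (1 - p) ** (n - k) for p in grid]
z = sum(post)
post = [w / z for w in post]

# Posterior mean and a central 95% credible interval read off the grid CDF.
post_mean = sum(p * w for p, w in zip(grid, post))
cdf, lo, hi = 0.0, None, None
for p, w in zip(grid, post):
    cdf += w
    if lo is None and cdf >= 0.025:
        lo = p
    if hi is None and cdf >= 0.975:
        hi = p
print(p_mle, round(post_mean, 4), (lo, hi))
```

The credible interval (lo, hi) directly answers "with what probability does p lie in this range", which is the kind of statement the abstract says the Bayesian approach makes more natural.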

  15. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
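The AIC-based selection step can be sketched on invented monthly counts. The two candidates below (a constant level versus a level with monthly seasonal means) are simplified stand-ins for the structural time series specifications the paper compares:

```python
# Hedged sketch of the model-selection step only: comparing candidate models
# of a monthly accident series by AIC (smaller is better). Data are invented.
import math

# Hypothetical monthly counts with a bump every 12th month plus a small cycle.
y = [100 + (15 if t % 12 == 0 else 0) + (t % 5 - 2) for t in range(48)]
n = len(y)

def aic(rss, n_params):
    """Gaussian-likelihood AIC up to a constant."""
    return n * math.log(rss / n) + 2 * n_params

# Model 1: constant level only.
mean = sum(y) / n
rss1 = sum((v - mean) ** 2 for v in y)

# Model 2: level plus a monthly seasonal component (12 monthly means).
month_means = [0.0] * 12
for m in range(12):
    vals = [v for t, v in enumerate(y) if t % 12 == m]
    month_means[m] = sum(vals) / len(vals)
rss2 = sum((v - month_means[t % 12]) ** 2 for t, v in enumerate(y))

aic1, aic2 = aic(rss1, 1), aic(rss2, 12)
best = "level+seasonal" if aic2 < aic1 else "level"
print(round(aic1, 1), round(aic2, 1), best)
```

As in the paper's result, the seasonal specification wins here because the seasonal pattern reduces the residual sum of squares by far more than the extra parameters cost in the AIC penalty.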

  16. Functional state modelling approach validation for yeast and bacteria cultivations

    Science.gov (United States)

    Roeva, Olympia; Pencheva, Tania

    2014-01-01

In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely the primary product synthesis state, the mixed oxidative state and the secondary product synthesis state. Parameter identification procedures for the different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This constitutes a strong structural verification of the functional state modelling theory, not only for a set of yeast cultivations but also for a bacterial cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778

  17. An optimization approach to kinetic model reduction for combustion chemistry

    CERN Document Server

    Lebiedz, Dirk

    2013-01-01

Model reduction methods are relevant when the computation time of a full convection-diffusion-reaction simulation based on detailed chemical reaction mechanisms is too large. In this article, we review a model reduction approach based on optimization of trajectories and show its applicability to realistic combustion models. Like most model reduction methods, it identifies points on a slow invariant manifold based on time scale separation in the dynamics of the reaction system. The numerical approximation of points on the manifold is achieved by solving a semi-infinite optimization problem, where the dynamics enter the problem as constraints. The proof of existence of a solution for an arbitrarily chosen dimension of the reduced model (slow manifold) is extended to the case of realistic combustion models including thermochemistry by considering the properties of proper maps. The model reduction approach is finally applied to three models based on realistic reaction mechanisms: 1. ozone decomposition as a small t...

  19. Molecular Modeling Approach to Cardiovascular Disease Targetting

    Directory of Open Access Journals (Sweden)

    Chandra Sekhar Akula,

    2010-05-01

Full Text Available Cardiovascular disease, including stroke, is the leading cause of illness and death in India. A number of studies have shown that inflammation of blood vessels is one of the major factors that increase the incidence of heart disease, including arteriosclerosis (clogging of the arteries), stroke and myocardial infarction or heart attack. Studies have associated obesity and other components of the metabolic syndrome, cardiovascular risk factors, with low-grade inflammation. Furthermore, some findings suggest that drugs commonly prescribed to lower cholesterol also reduce this inflammation, suggesting an additional beneficial effect of the statins. The recent development of angiotensin II (AngII) receptor antagonists has made it possible to significantly improve the tolerability profile of this group of drugs while maintaining high clinical efficacy. ACE2 is expressed predominantly in the endothelium and in renal tubular epithelium, and it thus may be an important new cardiovascular target. In the present study we modeled the structure of ACE and designed an inhibitor using ARGUS lab; the drug molecule was validated on the basis of QSAR properties computed in Cache for this protein, within a CADD framework.

  20. Virtuous organization: A structural equation modeling approach

    Directory of Open Access Journals (Sweden)

    Majid Zamahani

    2013-02-01

Full Text Available For years the idea of virtue was out of favor among researchers: virtues were traditionally considered culture-specific and relativistic, and were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Recently, however, virtue and virtuousness have been taken seriously by organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resources, structure and processes, care for community, and the virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on the other components. The data used in this study consist of questionnaire responses from employees of Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and 211 valid responses were received. Our results reveal that all five variables have positive and significant impacts on the virtuous organization. Among the five variables, organizational culture has the largest direct impact (0.80) and human resources have the largest total impact (0.844) on the virtuous organization.
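The distinction between a variable's direct impact and its total impact, which the abstract reports as 0.80 versus 0.844, is a standard path-analysis computation: the total effect adds the indirect paths through mediators to the direct path coefficient. The sketch below uses hypothetical coefficients for a single mediator, not the study's estimates:

```python
# Hypothetical path coefficients (not the paper's data) for a chain where
# culture affects virtuousness both directly and through human resources.
culture_to_virtue = 0.50   # direct path: culture -> virtuous organization
culture_to_hr = 0.60       # path to the mediator: culture -> human resources
hr_to_virtue = 0.40        # mediator's path: human resources -> virtuousness

indirect = culture_to_hr * hr_to_virtue   # effect routed through the mediator
total = culture_to_virtue + indirect      # total effect of culture
print(total)
```

This is why a variable with a smaller direct coefficient can still have the largest total impact once its indirect paths are summed in.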

  1. Data Analysis A Model Comparison Approach, Second Edition

    CERN Document Server

    Judd, Charles M; Ryan, Carey S

    2008-01-01

This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques.

  2. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a Bayesian approach based on stochastic differential equations, and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that beyond this the specific choice of algorithm shows only minor performance differences.
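One of the test models, the immigration-death process, can be sketched end to end under assumed rates: simulate one realization exactly with Gillespie's algorithm, then fit it naively by least squares against the deterministic mean n(t) = (k/d)(1 - exp(-d t)). A coarse grid search stands in for a real optimizer here:

```python
# Sketch of the comparison's test bed (assumed form, not the paper's code):
# an immigration-death process fitted by naive least squares.
import math
import random

random.seed(0)

K_TRUE, D_TRUE = 10.0, 0.5  # immigration and death rates (illustrative)

def gillespie(k, d, t_end=20.0):
    """Exact stochastic simulation of immigration (rate k) and death (rate d*n)."""
    t, n, traj = 0.0, 0, []
    while t < t_end:
        rate = k + d * n
        t += random.expovariate(rate)
        if random.random() < k / rate:
            n += 1          # immigration event
        else:
            n -= 1          # death event
        traj.append((t, n))
    return traj

def mean_n(k, d, t):
    """Deterministic (ODE) mean of the process."""
    return (k / d) * (1.0 - math.exp(-d * t))

traj = gillespie(K_TRUE, D_TRUE)

def sse(k, d):
    """Least-squares objective against the deterministic mean."""
    return sum((n - mean_n(k, d, t)) ** 2 for t, n in traj)

candidates = [(k, d) for k in [5, 8, 10, 12, 15] for d in [0.25, 0.5, 1.0]]
k_hat, d_hat = min(candidates, key=lambda kd: sse(*kd))
print(k_hat, d_hat)
```

The specialized methods in the paper instead account for the realization-to-realization fluctuations that this least-squares objective treats as mere noise around the mean.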

  3. Modelling and Generating Ajax Applications: A Model-Driven Approach

    NARCIS (Netherlands)

    Gharavi, V.; Mesbah, A.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008 AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction betw

  5. A novel approach to modeling and diagnosing the cardiovascular system

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)

    1995-07-01

    A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  6. Mathematical models for therapeutic approaches to control HIV disease transmission

    CERN Document Server

    Roy, Priti Kumar

    2015-01-01

The book discusses different therapeutic approaches based on different mathematical models to control HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence, and also examines the same issues from different angles, moving from various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models of HIV/AIDS. It also guides research scientists working on the periphery of mathematical modeling, helping them to explore hypothetical methods by examining their consequences in the form of a mathematical model and making scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...

  7. Asteroid modeling for testing spacecraft approach and landing.

    Science.gov (United States)

    Martin, Iain; Parkes, Steve; Dunstan, Martin; Rowell, Nick

    2014-01-01

    Spacecraft exploration of asteroids presents autonomous-navigation challenges that can be aided by virtual models to test and develop guidance and hazard-avoidance systems. Researchers have extended and applied graphics techniques to create high-resolution asteroid models to simulate cameras and other spacecraft sensors approaching and descending toward asteroids. A scalable model structure with evenly spaced vertices simplifies terrain modeling, avoids distortion at the poles, and enables triangle-strip definition for efficient rendering. To create the base asteroid models, this approach uses two-phase Poisson faulting and Perlin noise. It creates realistic asteroid surfaces by adding both crater models adapted from lunar terrain simulation and multiresolution boulders. The researchers evaluated the virtual asteroids by comparing them with real asteroid images, examining the slope distributions, and applying a surface-relative feature-tracking algorithm to the models.
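The faulting idea mentioned in the abstract can be shown in a one-dimensional analogue (our illustration, not the authors' code): repeatedly cut an evenly spaced grid at a random point and shift everything on one side, with displacements shrinking so that later faults add progressively finer detail.

```python
# Minimal 1-D faulting sketch: each fault raises or lowers one side of a
# random cut, building fractal-like terrain. Parameters are illustrative.
import random

random.seed(7)

def fault_terrain(n_points=256, n_faults=500, step=1.0):
    height = [0.0] * n_points
    for i in range(n_faults):
        cut = random.randrange(n_points)
        delta = step * (0.99 ** i)   # progressively smaller displacements
        sign = 1.0 if random.random() < 0.5 else -1.0
        for j in range(cut, n_points):
            height[j] += sign * delta
    return height

terrain = fault_terrain()
print(round(min(terrain), 2), round(max(terrain), 2))
```

The paper's two-phase Poisson faulting operates on a sphere's vertices rather than a line, and layers crater and boulder models on top, but the step-displacement principle is the same.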

  8. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems, conform the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves the base for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  9. Heuristic approaches to models and modeling in systems biology

    NARCIS (Netherlands)

    MacLeod, Miles

    2016-01-01

    Prediction and control sufficient for reliable medical and other interventions are prominent aims of modeling in systems biology. The short-term attainment of these goals has played a strong role in projecting the importance and value of the field. In this paper I identify the standard models must m

  10. A Model Management Approach for Co-Simulation Model Evaluation

    NARCIS (Netherlands)

    Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2011-01-01

Simulating formal models is a common means for validating the correctness of a system design and reducing the time-to-market. In most embedded control system designs, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software

  11. A New Detection Approach Based on the Maximum Entropy Model

    Institute of Scientific and Technical Information of China (English)

    DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua

    2006-01-01

    The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data presentation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results were shown. The receiver operating characteristic(ROC) curve analysis approach was utilized to analyze the experimental results. The analysis results show that the proposed approach is comparable to those based on support vector machine(SVM) and outperforms those based on C4.5 and Naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.
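A maximum-entropy model for two classes coincides with logistic regression, so the core of such a detector can be sketched as below, trained by stochastic gradient ascent on invented "connection records". The KDD CUP data, the minimal-entropy discretization, and the ROC analysis from the abstract are not reproduced here:

```python
# Sketch only: a binary maximum-entropy (logistic) classifier on toy records.
import math

# Toy records: (feature vector, label) with 1 = intrusion (invented data).
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.8, 0.9], 1),
        ([0.15, 0.25], 0), ([0.85, 0.75], 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=1.0, epochs=500):
    """Per-sample gradient ascent on the log-likelihood."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = y - p
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

w, b = train(data)
predict = lambda x: sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5
print([predict(x) for x, _ in data])
```

On this separable toy set the classifier recovers all labels; the paper's contribution lies in the feature construction and the empirical comparison against SVM, C4.5 and Naive Bayes, none of which this sketch addresses.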

  12. LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL

    Directory of Open Access Journals (Sweden)

    Eser ÖRDEM

    2013-06-01

Full Text Available Abstract This study proposes the Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language, so that the model can be used in classroom settings. The model was created by the researcher as a result of studies carried out in applied linguistics (Hill, 2000) and memory (Murphy, 2004). Since one of the main problems of foreign language learners is retrieving what they have learnt, Lewis (1998) and Wray (2008) assume that the lexical approach is an alternative explanation to solve this problem. Unlike the grammar translation method, this approach supports the idea that language is composed not of general grammar but of strings of words and word combinations. In addition, the lexical approach posits that each word has its own grammatical properties, and therefore each dictionary is a potential grammar book. Foreign language learners can learn to use collocations, a basic principle of the Lexical Approach, and thus increase their level of retention. The concept of the retrieval clue (Murphy, 2004) is considered the main element in this collocational study model, because the main purpose of the model is to boost fluency and help learners gain native-like accuracy while producing the target language. Keywords: Foreign language teaching, lexical approach, collocations, retrieval clue
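One common computational way to surface the collocations such an approach teaches is to score adjacent word pairs by pointwise mutual information (PMI); the tiny corpus and the scoring below are our illustration, not anything from the article:

```python
# Illustrative only: PMI of adjacent word pairs in an invented mini-corpus.
# A genuine collocation ("strong tea") should outscore a chance pairing.
import math
from collections import Counter

corpus = ("strong tea strong coffee heavy rain heavy rain strong tea "
          "light rain strong coffee heavy traffic").split()

words = Counter(corpus)                 # unigram counts
pairs = Counter(zip(corpus, corpus[1:]))  # adjacent bigram counts
n = len(corpus)

def pmi(w1, w2):
    """log2 of observed bigram probability over the independence baseline."""
    p_pair = pairs[(w1, w2)] / (n - 1)
    return math.log2(p_pair / ((words[w1] / n) * (words[w2] / n)))

print(round(pmi("strong", "tea"), 2), round(pmi("rain", "heavy"), 2))
```

In classroom material one would filter by raw frequency as well, since PMI alone over-rewards pairs of rare words.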

  13. A Model-Driven Approach for Telecommunications Network Services Definition

    Science.gov (United States)

    Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.

The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstractions Layers (NALs). In the future, we will investigate approaches to ensure support for collaborative work and for checking properties on models.

  14. Child human model development: a hybrid validation approach

    NARCIS (Netherlands)

    Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.

    2008-01-01

    The current study presents a development and validation approach of a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of fundamental child biomechanical data needed to fully develop such models a

  15. Modeling Alaska boreal forests with a controlled trend surface approach

    Science.gov (United States)

    Mo Zhou; Jingjing Liang

    2012-01-01

    An approach of Controlled Trend Surface was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...

  16. Teaching Service Modelling to a Mixed Class: An Integrated Approach

    Science.gov (United States)

    Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

    Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…

  17. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new promising methodology for giving an on-line state description of sewer systems...

  19. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of models.

  1. Hybrid continuum-atomistic approach to model electrokinetics in nanofluidics

    Energy Technology Data Exchange (ETDEWEB)

    Amani, Ehsan, E-mail: eamani@aut.ac.ir; Movahed, Saeid, E-mail: smovahed@aut.ac.ir

    2016-06-07

In this study, for the first time, a hybrid continuum-atomistic model is proposed for electrokinetics, electroosmosis and electrophoresis, through nanochannels. Although continuum based methods are accurate enough to model fluid flow and electric potential in nanofluidics (in dimensions larger than 4 nm), ionic concentration in nanochannels is too low for the continuum assumption to be valid. On the other hand, non-continuum based approaches are too time-consuming and are therefore limited to simple geometries in practice. Here, to model the electrokinetics in nanochannels efficiently, the fluid flow and electric potential are computed based on the continuum hypothesis, coupled with an atomistic Lagrangian approach for the ionic transport. The results of the model are compared to and validated against results of the molecular dynamics technique for a couple of case studies. Then, the influences of bulk ionic concentration, external electric field, size of nanochannel, and surface electric charge on the electrokinetic flow and ionic mass transfer are investigated carefully. The hybrid continuum-atomistic method is a promising approach to model more complicated geometries and investigate more details of the electrokinetics in nanofluidics. - Highlights: • A hybrid continuum-atomistic model is proposed for electrokinetics in nanochannels. • The model is validated by molecular dynamics. • This is a promising approach to model more complicated geometries and physics.

  2. Modelling diversity in building occupant behaviour: a novel statistical approach

    DEFF Research Database (Denmark)

    Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm

    2016-01-01

    We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...

  3. Asteroid fragmentation approaches for modeling atmospheric energy deposition

    Science.gov (United States)

    Register, Paul J.; Mathias, Donovan L.; Wheeler, Lorien F.

    2017-03-01

    During asteroid entry, energy is deposited in the atmosphere through thermal ablation and momentum-loss due to aerodynamic drag. Analytic models of asteroid entry and breakup physics are used to compute the energy deposition, which can then be compared against measured light curves and used to estimate ground damage due to airburst events. This work assesses and compares energy deposition results from four existing approaches to asteroid breakup modeling, and presents a new model that combines key elements of those approaches. The existing approaches considered include a liquid drop or "pancake" model where the object is treated as a single deforming body, and a set of discrete fragment models where the object breaks progressively into individual fragments. The new model incorporates both independent fragments and aggregate debris clouds to represent a broader range of fragmentation behaviors and reproduce more detailed light curve features. All five models are used to estimate the energy deposition rate versus altitude for the Chelyabinsk meteor impact, and results are compared with an observationally derived energy deposition curve. Comparisons show that four of the five approaches are able to match the overall observed energy deposition profile, but the features of the combined model are needed to better replicate both the primary and secondary peaks of the Chelyabinsk curve.

  4. A Bayesian Approach for Analyzing Longitudinal Structural Equation Models

    Science.gov (United States)

    Song, Xin-Yuan; Lu, Zhao-Hua; Hser, Yih-Ing; Lee, Sik-Yum

    2011-01-01

    This article considers a Bayesian approach for analyzing a longitudinal 2-level nonlinear structural equation model with covariates, and mixed continuous and ordered categorical variables. The first-level model is formulated for measures taken at each time point nested within individuals for investigating their characteristics that are dynamically…

  5. An Empirical-Mathematical Modelling Approach to Upper Secondary Physics

    Science.gov (United States)

    Angell, Carl; Kind, Per Morten; Henriksen, Ellen K.; Guttersrud, Oystein

    2008-01-01

    In this paper we describe a teaching approach focusing on modelling in physics, emphasizing scientific reasoning based on empirical data and using the notion of multiple representations of physical phenomena as a framework. We describe modelling activities from a project (PHYS 21) and relate some experiences from implementation of the modelling…

  6. An Alternative Approach for Nonlinear Latent Variable Models

    Science.gov (United States)

    Mooijaart, Ab; Bentler, Peter M.

    2010-01-01

    In the last decades there has been an increasing interest in nonlinear latent variable models. Since the seminal paper of Kenny and Judd, several methods have been proposed for dealing with these kinds of models. This article introduces an alternative approach. The methodology involves fitting some third-order moments in addition to the means and…

  7. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment's response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of mode

  9. A multilevel approach to modeling of porous bioceramics

    Science.gov (United States)

    Mikushina, Valentina A.; Sidorenko, Yury N.

    2015-10-01

    The paper discusses multiscale models of heterogeneous materials. A specific feature of the approach considered is the use of a geometrical model of the composite's representative volume, which must be generated taking the reinforcement structure of the material into account. Within such a model, different physical processes that influence the effective mechanical properties of the composite can be considered, in particular the process of damage accumulation. It is shown that this approach can be used to predict the macroscopic ultimate strength of the composite. As an example, the particular problem of studying the mechanical properties of a biocomposite, namely a porous ceramic matrix filled with cortical bone tissue, is discussed.

  10. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control and implemented....... The model in the present paper provides on-line information on overflow volumes, pumping capacities, and remaining storage capacities. A linear overflow relation is found, differing significantly from the traditional deterministic modeling approach. The linearity of the formulas is explained by the inertia...

  11. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Various approaches have thus been proposed to overcome this difficulty. This study surveys and compares approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a review of the state of the art in multidimensional modeling design.

  12. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo

  13. Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Gregory H. [Univ. of California, Davis, CA (United States); Forest, Gregory [Univ. of California, Davis, CA (United States)

    2014-05-01

    We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
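
    As a much-simplified illustration of the microscopic level, one can integrate a Hookean bead-spring dumbbell (a common textbook stand-in for the constrained bead-rod chain, which is substantially harder to simulate) by Euler-Maruyama and recover the shear component of the polymer stress. Everything here is a sketch in dimensionless, relaxation-scaled units, not the authors' multiscale scheme.

```python
import math, random

# Euler-Maruyama sketch of a Hookean bead-spring dumbbell in steady shear,
# a simpler stand-in for the constrained bead-rod model. Units are
# dimensionless; parameter values are illustrative only.
def simulate_dumbbell(shear=1.0, n=400, dt=0.005, steps=2400, seed=0):
    """Evolve the connector vector Q of n dumbbells under flow, spring force
    and Brownian noise; return a time/ensemble estimate of <Qx*Qy>, which is
    proportional to the polymer shear stress (and equals `shear` in these units)."""
    rng = random.Random(seed)
    qx = [rng.gauss(0, 1) for _ in range(n)]   # start from equilibrium
    qy = [rng.gauss(0, 1) for _ in range(n)]
    sdt = math.sqrt(dt)
    acc = count = 0
    for step in range(steps):
        for i in range(n):
            x, y = qx[i], qy[i]
            qx[i] = x + (shear * y - 0.5 * x) * dt + sdt * rng.gauss(0, 1)
            qy[i] = y - 0.5 * y * dt + sdt * rng.gauss(0, 1)
        if step >= steps // 2:                 # discard the start-up transient
            acc += sum(x * y for x, y in zip(qx, qy))
            count += n
    return acc / count
```

    For shear=1.0 the estimate should come out near 1, the analytic steady-state value of the cross moment in these units.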

  14. Metamodelling Approach and Software Tools for Physical Modelling and Simulation

    Directory of Open Access Journals (Sweden)

    Vitaliy Mezhuyev

    2015-02-01

    In computer science, the metamodelling approach is becoming more and more popular for the purpose of software systems development. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, an analysis of physical models is carried out. The result of this analysis reveals the invariant physical structures that we propose to use as the basic abstractions of the physical metamodel. This is a system of geometrical objects, allowing one to build a spatial structure of physical models and to set a distribution of physical properties. To such a geometry of distributed physical properties, different mathematical methods can be applied. To validate the proposed metamodelling approach, we consider developed prototypes of software tools.

  15. Social learning in Models and Cases - an Interdisciplinary Approach

    Science.gov (United States)

    Buhl, Johannes; De Cian, Enrica; Carrara, Samuel; Monetti, Silvia; Berg, Holger

    2016-04-01

    Our paper follows an interdisciplinary understanding of social learning. We contribute to the literature on social learning in transition research by bridging case-oriented research and modelling-oriented transition research. We start by describing selected theories on social learning in innovation, diffusion and transition research, and present theoretical understandings of social learning in techno-economic and agent-based modelling. We then elaborate on empirical research on social learning in transition case studies, identifying and synthesizing key dimensions of social learning in these studies. Next, we bridge between the more formal, generalizing modelling approaches to social learning processes and the more descriptive, individualizing case study approaches by translating the case study analysis into a visual guide to the functional forms of social learning typically identified in the cases. We then vary, by way of example, functional forms of social learning in integrated assessment models. We conclude by drawing the lessons learned from the interdisciplinary approach, both methodologically and empirically.

  16. Learning the Task Management Space of an Aircraft Approach Model

    Science.gov (United States)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.

  17. Building enterprise reuse program--A model-based approach

    Institute of Scientific and Technical Information of China (English)

    梅宏; 杨芙清

    2002-01-01

    Reuse is viewed as a realistically effective approach to solving the software crisis. For an organization that wants to build a reuse program, technical and non-technical issues must be considered in parallel. In this paper, a model-based approach to building a systematic reuse program is presented. Component-based reuse is currently a dominant approach to software reuse, and in this approach, building the right reusable component model is the first important step. In order to achieve systematic reuse, a set of component models should be built from different perspectives. Each of these models gives a specific view of the components so as to satisfy the different needs of the different persons involved in the enterprise reuse program. Some component models for reuse already exist from technical perspectives, but less attention has been paid to reusable components from a non-technical view, especially from the view of process and management. In our approach, a reusable component model--the FLP model for reusable components--is introduced. This model describes components along three dimensions (Form, Level, and Presentation) and views components and their relationships from the perspective of process and management. It determines the sphere of reusable components, the points in the development process at which components are reused, and the means needed to present components in terms of abstraction level, logical granularity and presentation media. Being the basis on which management and technical decisions are made, our model is used as the kernel model to initialize and normalize a systematic enterprise reuse program.

  18. Current approaches to model extracellular electrical neural microstimulation

    Directory of Open Access Journals (Sweden)

    Sébastien eJoucla

    2014-02-01

    Nowadays, high-density microelectrode arrays provide unprecedented possibilities to precisely activate spatially well-controlled central nervous system (CNS) areas. However, this requires optimizing stimulating devices, which in turn requires a good understanding of the effects of microstimulation on cells and tissues. In this context, modeling approaches provide flexible ways to predict the outcome of electrical stimulation in terms of CNS activation. In this paper, we present state-of-the-art modeling methods in sufficient detail to allow the reader to rapidly build numerical models of neuronal extracellular microstimulation. These include (1) the computation of the electrical potential field created by the stimulation in the tissue, and (2) the response of a target neuron to this field. Two main approaches are described. First, we describe the classical hybrid approach that combines finite element modeling of the potential field with the calculation of the neuron's response in a cable-equation framework (compartmentalized neuron models). Then, we present a whole finite element approach that allows the simultaneous calculation of the extracellular and intracellular potentials by representing the neuronal membrane with a thin-film approximation. This approach was previously introduced in the context of neural recording, but has never been implemented to determine the effect of extracellular stimulation on the neural response at a sub-compartment level. Here, we show on an example that the latter modeling scheme can reveal important sub-compartment behavior of the neural membrane that cannot be resolved using the hybrid approach. The goal of this paper is also to describe the practical implementation of these methods in detail, to allow the reader to easily build new models using standard software packages. These modeling paradigms, depending on the situation, should help build more efficient high-density neural prostheses for CNS rehabilitation.

  19. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  20. Application of the Interface Approach in Quantum Ising Models

    OpenAIRE

    Sen, Parongama

    1997-01-01

    We investigate phase transitions in the Ising model and the ANNNI model in transverse field using the interface approach. The exact result of the Ising chain in a transverse field is reproduced. We find that apart from the interfacial energy, there are two other response functions which show simple scaling behaviour. For the ANNNI model in a transverse field, the phase diagram can be fully studied in the region where a ferromagnetic to paramagnetic phase transition occurs. The other region ca...

  1. A Variable Flow Modelling Approach To Military End Strength Planning

    Science.gov (United States)

    2016-12-01

    A System Dynamics (SD) model is ideal for strategic analysis as it encompasses the behaviours of a system and how those behaviours are influenced by... Wang describes Markov chain theory as a mathematical tool used to investigate the dynamic behaviours of a system in discrete time... (Thesis by Benjamin K. Grossi, December 2016; Thesis Advisor: Kenneth Doerr.)

  2. New Approaches in Reusable Booster System Life Cycle Cost Modeling

    Science.gov (United States)

    2012-01-01

    Topics include lean NPD practices, lean production and operations practices, and the Supply Chain Operations Reference (SCOR) model best practices... The work, by Edgar Zapata (National Aeronautics and Space Administration, Kennedy Space Center), was conducted by Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL), and included the creation of a new cost estimating model and an LCC

  3. THE FAIRSHARES MODEL: AN ETHICAL APPROACH TO SOCIAL ENTERPRISE DEVELOPMENT?

    OpenAIRE

    Ridley-Duff, R.

    2015-01-01

    This paper is based on the keynote address to the 14th International Association of Public and Non-Profit Marketing (IAPNM) conference. It explores the question "What impact do ethical values in the FairShares Model have on social entrepreneurial behaviour?" In the first part, three broad approaches to social enterprise are set out: co-operative and mutual enterprises (CMEs), social and responsible businesses (SRBs) and charitable trading activities (CTAs). The ethics that guide each approach ...

  4. A computational language approach to modeling prose recall in schizophrenia.

    Science.gov (United States)

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.
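
    The two classes of language features can be illustrated with toy scoring functions: an n-gram overlap between the story and a recall, and, in place of Latent Semantic Analysis (which requires a trained semantic space), a crude bag-of-words cosine. The story and recalls below are invented, and these functions are sketches of the feature classes, not the paper's scoring pipeline.

```python
import math
from collections import Counter

# Toy versions of the two feature classes; story and recalls are invented.
def ngrams(text, n=2):
    """Multiset of word n-grams in a text."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def ngram_overlap(story, recall, n=2):
    """Fraction of the story's n-grams reproduced in the recall."""
    s, r = ngrams(story, n), ngrams(recall, n)
    return sum(min(c, r[g]) for g, c in s.items()) / sum(s.values()) if s else 0.0

def cosine_bag(story, recall):
    """Crude stand-in for a semantic score: cosine over raw word counts
    (the paper uses Latent Semantic Analysis instead)."""
    s, r = Counter(story.lower().split()), Counter(recall.lower().split())
    dot = sum(c * r[w] for w, c in s.items())
    norm = (math.sqrt(sum(c * c for c in s.values()))
            * math.sqrt(sum(c * c for c in r.values())))
    return dot / norm if norm else 0.0

story = "the boy rode his bike to the river and fished until dusk"
good_recall = "the boy rode his bike to the river and fished"
poor_recall = "a man drove a car downtown"
```

    A faithful recall scores high on both features, while an off-topic one scores near zero, which is the contrast the group-membership models exploit.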

  5. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  6. A model selection approach to analysis of variance and covariance.

    Science.gov (United States)

    Alber, Susan A; Weiss, Robert E

    2009-06-15

    An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment-by-covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.
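
    The partition-as-model idea can be sketched by enumerating every partition of the treatment labels and scoring each partition as a separate model. The sketch below uses a Gaussian BIC score instead of the paper's Bayesian priors, and the data are invented; it shows only the "every clustering is a model" structure.

```python
import math

# Partition-as-model sketch: every clustering of treatment means is a model.
# BIC scoring stands in for the paper's Bayesian priors; data are invented.
def partitions(items):
    """Yield every partition of a list into non-empty clusters."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def bic_for_partition(data, part):
    """Gaussian BIC where treatments in the same cluster share one mean."""
    n = sum(len(v) for v in data.values())
    rss = 0.0
    for cluster in part:
        ys = [y for t in cluster for y in data[t]]
        mu = sum(ys) / len(ys)
        rss += sum((y - mu) ** 2 for y in ys)
    k = len(part) + 1  # one mean per cluster plus a common variance
    return n * math.log(rss / n) + k * math.log(n)

data = {"A": [1.0, 1.2, 0.9], "B": [1.1, 1.0, 1.3], "C": [3.0, 3.2, 2.9]}
best = min(partitions(list(data)), key=lambda p: bic_for_partition(data, p))
# with these data, the winning partition clusters A with B and isolates C
```

    The null-hypothesis model is the single-cluster partition [["A", "B", "C"]]; model selection here prefers the partition that merges only the genuinely similar treatments.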

  7. Towards a whole-cell modeling approach for synthetic biology

    Science.gov (United States)

    Purcell, Oliver; Jain, Bonny; Karr, Jonathan R.; Covert, Markus W.; Lu, Timothy K.

    2013-06-01

    Despite rapid advances over the last decade, synthetic biology lacks the predictive tools needed to enable rational design. Unlike established engineering disciplines, the engineering of synthetic gene circuits still relies heavily on experimental trial-and-error, a time-consuming and inefficient process that slows down the biological design cycle. This reliance on experimental tuning is because current modeling approaches are unable to make reliable predictions about the in vivo behavior of synthetic circuits. A major reason for this lack of predictability is that current models view circuits in isolation, ignoring the vast number of complex cellular processes that impinge on the dynamics of the synthetic circuit and vice versa. To address this problem, we present a modeling approach for the design of synthetic circuits in the context of cellular networks. Using the recently published whole-cell model of Mycoplasma genitalium, we examined the effect of adding genes into the host genome. We also investigated how codon usage correlates with gene expression and find agreement with existing experimental results. Finally, we successfully implemented a synthetic Goodwin oscillator in the whole-cell model. We provide an updated software framework for the whole-cell model that lays the foundation for the integration of whole-cell models with synthetic gene circuit models. This software framework is made freely available to the community to enable future extensions. We envision that this approach will be critical to transforming the field of synthetic biology into a rational and predictive engineering discipline.
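
    The Goodwin oscillator mentioned above can be illustrated in isolation: a three-stage negative-feedback loop (X drives Y, Y drives Z, Z represses X's production) that oscillates only when the Hill coefficient is large enough (n > 8 when all three degradation rates are equal). The rate constants below are arbitrary illustrative values, not those used with the M. genitalium whole-cell model.

```python
# Minimal Goodwin loop: X -> Y -> Z, with Z repressing X's production.
# Rate constants are arbitrary illustrative values; n > 8 is needed for
# sustained oscillation when all three degradation rates are equal.
def simulate_goodwin(n=12, k=0.2, dt=0.01, steps=200000):
    """Euler-integrate the three-variable Goodwin oscillator; return the Z trace."""
    x = y = z = 0.0
    trace = []
    for _ in range(steps):
        dx = 1.0 / (1.0 + z ** n) - k * x  # repressed synthesis, linear decay
        dy = x - k * y
        dz = y - k * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        trace.append(z)
    return trace
```

    Embedding the same loop in a whole-cell model, as the authors do, couples these equations to the host's transcription and translation machinery rather than to fixed rate constants.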

  8. A transformation approach for collaboration based requirement models

    CERN Document Server

    Harbouche, Ahmed; Mokhtari, Aicha

    2012-01-01

    Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such a transformation process and suggests an approach to derive the behavior of given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML activity diagrams (extended with collaborations) to system role behaviors represented as UML finite state machines. The approach is implemented using the Atlas Transformation Language (ATL).

  9. A TRANSFORMATION APPROACH FOR COLLABORATION BASED REQUIREMENT MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed Harbouche

    2012-02-01

    Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such a transformation process and suggests an approach to derive the behavior of given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML activity diagrams (extended with collaborations) to system role behaviors represented as UML finite state machines. The approach is implemented using the Atlas Transformation Language (ATL).

  10. An algebraic approach to modeling in software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Loegel, G.J. [Superconducting Super Collider Lab., Dallas, TX (United States)]|[Michigan Univ., Ann Arbor, MI (United States); Ravishankar, C.V. [Michigan Univ., Ann Arbor, MI (United States)

    1993-09-01

    Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.

  11. DISTRIBUTED APPROACH to WEB PAGE CATEGORIZATION USING MAPREDUCE PROGRAMMING MODEL

    Directory of Open Access Journals (Sweden)

    P.Malarvizhi

    2011-12-01

    The web is a large repository of information, and to facilitate the search and retrieval of pages from it, categorization of web documents is essential. An effective means to handle the complexity of information retrieval from the internet is through automatic classification of web pages. Although many automatic classification algorithms and systems have been presented, most of the existing approaches are computationally challenging. In order to overcome this challenge, we have proposed a parallel algorithm based on the MapReduce programming model to automatically categorize web pages. This approach incorporates three components: a web crawler, the MapReduce programming model, and the proposed web page categorization approach. Initially, we utilize the web crawler to mine the World Wide Web, and the crawled web pages are then given directly as input to the MapReduce programming model. Here the MapReduce programming model, adapted to our proposed web page categorization approach, finds the appropriate category of each web page according to its content. The experimental results show that our proposed parallel web page categorization approach achieves satisfactory results in finding the right category for any given web page.
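
    The map/shuffle/reduce flow of such a categorizer can be imitated in-process: a map phase emits (category, url) pairs from simple keyword matching, and a reduce phase groups them by category. This is a toy stand-in for the paper's approach; the keyword lists, pages, and matching rule are invented, and a real deployment would run the same two functions on a MapReduce framework rather than in a single process.

```python
from collections import defaultdict

# Toy in-process imitation of the map/shuffle/reduce flow; keyword lists
# and pages are invented for illustration.
KEYWORDS = {
    "sports": {"match", "league", "goal"},
    "tech": {"software", "cloud", "algorithm"},
}

def map_phase(url, text):
    """Emit one (category, url) pair per category whose keywords appear."""
    words = set(text.lower().split())
    for category, kws in KEYWORDS.items():
        if words & kws:
            yield category, url

def reduce_phase(pairs):
    """Group the mapped pairs by category, as the shuffle/reduce step would."""
    groups = defaultdict(list)
    for category, url in pairs:
        groups[category].append(url)
    return dict(groups)

pages = [
    ("http://example.org/a", "The league match ended with a late goal"),
    ("http://example.org/b", "New cloud software uses a ranking algorithm"),
]
pairs = [kv for url, text in pages for kv in map_phase(url, text)]
index = reduce_phase(pairs)
```

    On a cluster, `map_phase` would run on crawled pages in parallel and the framework's shuffle would deliver each category's pairs to one reducer; the content-based classifier would replace the keyword lookup.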

  12. Teaching Service Modelling to a Mixed Class: An Integrated Approach

    Directory of Open Access Journals (Sweden)

    Jeremiah D. DENG

    2015-04-01

    Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of students from both telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching with strategies such as problem-solving, visualization, and the use of examples and simulations has been developed. Assessment of student learning outcomes indicates that the proposed course delivery approach succeeded in eliciting comparable and satisfactory performance from students of different educational backgrounds.

  13. A Spatial Clustering Approach for Stochastic Fracture Network Modelling

    Science.gov (United States)

    Seifollahi, S.; Dowd, P. A.; Xu, C.; Fadakar, A. Y.

    2014-07-01

    Fracture network modelling plays an important role in many application areas in which the behaviour of a rock mass is of interest. These areas include mining, civil, petroleum, water and environmental engineering and geothermal systems modelling. The aim is to model the fractured rock to assess fluid flow or the stability of rock blocks. One important step in fracture network modelling is to estimate the number of fractures and the properties of individual fractures such as their size and orientation. Due to the lack of data and the complexity of the problem, there are significant uncertainties associated with fracture network modelling in practice. Our primary interest is the modelling of fracture networks in geothermal systems and, in this paper, we propose a general stochastic approach to fracture network modelling for this application. We focus on using the seismic point cloud detected during the fracture stimulation of a hot dry rock reservoir to create an enhanced geothermal system; these seismic points are the conditioning data in the modelling process. The seismic points can be used to estimate the geographical extent of the reservoir, the amount of fracturing and the detailed geometries of fractures within the reservoir. The objective is to determine a fracture model from the conditioning data by minimizing the sum of the distances of the points from the fitted fracture model. Fractures are represented as line segments connecting two points in two-dimensional applications or as ellipses in three-dimensional (3D) cases. The novelty of our model is twofold: (1) it comprises a comprehensive fracture modification scheme based on simulated annealing and (2) it introduces new spatial approaches, a goodness-of-fit measure for the fitted fracture model, a measure for fracture similarity and a clustering technique for proposing a locally optimal solution for fracture parameters. We use a simulated dataset to demonstrate the application of the proposed approach
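The core fitting step this record describes, minimizing the sum of distances from the conditioning points to the fitted fracture, can be illustrated for the 2D line-segment case with a generic simulated annealing loop. The proposal moves and cooling schedule below are common textbook choices, not the paper's specific modification scheme:

```python
import math, random

random.seed(1)

def seg_dist(p, a, b):
    """Euclidean distance from point p to the segment with endpoints a, b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay
    L2 = vx * vx + vy * vy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / L2))
    cx, cy = ax + t * vx, ay + t * vy
    return math.hypot(px - cx, py - cy)

def misfit(points, a, b):
    """Objective: sum of point-to-segment distances."""
    return sum(seg_dist(p, a, b) for p in points)

# Synthetic "seismic" point cloud scattered around the segment (0,0)-(4,2).
points = [(4 * t + random.gauss(0, 0.1), 2 * t + random.gauss(0, 0.1))
          for t in [random.random() for _ in range(60)]]

# Simulated annealing over the two endpoints of the fracture segment.
a, b = (0.0, 1.0), (3.0, 0.0)            # arbitrary initial guess
cost = misfit(points, a, b)
T = 1.0
for step in range(4000):
    cand_a = (a[0] + random.gauss(0, 0.1), a[1] + random.gauss(0, 0.1))
    cand_b = (b[0] + random.gauss(0, 0.1), b[1] + random.gauss(0, 0.1))
    c = misfit(points, cand_a, cand_b)
    if c < cost or random.random() < math.exp((cost - c) / T):
        a, b, cost = cand_a, cand_b, c
    T *= 0.999                           # geometric cooling schedule

print(round(cost / len(points), 2))      # mean point-to-segment distance
```

The 3D case in the paper replaces segments with ellipses and adds the clustering and similarity measures, but the accept/reject structure of the annealing search is the same.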

  14. Joint Modeling of Multiple Crimes: A Bayesian Spatial Approach

    Directory of Open Access Journals (Sweden)

    Hongqiang Liu

    2017-01-01

    Full Text Available A multivariate Bayesian spatial modeling approach was used to jointly model the counts of two types of crime, i.e., burglary and non-motor vehicle theft, and explore the geographic pattern of crime risks and relevant risk factors. In contrast to the univariate model, which assumes independence across outcomes, the multivariate approach takes into account potential correlations between crimes. Six independent variables are included in the model as potential risk factors. In order to fully present this method, both the multivariate model and its univariate counterpart are examined. We fitted the two models to the data and assessed them using the deviance information criterion. A comparison of the results from the two models indicates that the multivariate model was superior to the univariate model. Our results show that population density and bar density are clearly associated with both burglary and non-motor vehicle theft risks and indicate a close relationship between these two types of crime. The posterior means and 2.5% percentile of type-specific crime risks estimated by the multivariate model were mapped to uncover the geographic patterns. The implications, limitations and future work of the study are discussed in the concluding section.

  15. A Multiple Model Approach to Modeling Based on Fuzzy Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 张艳珠; 宋春林; 邵惠鹤

    2003-01-01

    A new multiple models (MM) approach was proposed to model complex industrial processes by using Fuzzy Support Vector Machines (F-SVMs). By applying the proposed approach to a pH neutralization titration experiment, the F-SVM MM not only provides satisfactory approximation and generalization properties, but also achieves performance superior to the USOCPN multiple modelling method and to single modelling based on standard SVMs.

  16. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box models may contain terms which can be estimated on-line by use of the models and measurements. In this paper, it is demonstrated that many storage basins in sewer systems can be used as an on-line flow measurement provided that the basin is monitored on-line with a level transmitter and that a grey-box...
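The software-sensor idea, filtering an on-line level measurement through a model of the basin to estimate flow, can be sketched with a two-state Kalman filter, one common way to realize the stochastic part of a grey-box model. All numbers (basin area, time step, noise levels) are invented for the example:

```python
import random

random.seed(0)
dt, A = 60.0, 500.0        # time step [s] and basin surface area [m^2] (assumed)

# Truth: a constant net inflow of 0.8 m^3/s filling the basin.
q_true = 0.8
levels, L = [], 0.0
for _ in range(200):
    L += dt * q_true / A
    levels.append(L + random.gauss(0, 0.02))   # noisy level transmitter [m]

# Two-state Kalman filter, x = (level, net inflow), 2x2 algebra written out.
x = [0.0, 0.0]
P = [[1.0, 0.0], [0.0, 1.0]]
q_lvl, q_flow, r = 1e-6, 1e-6, 0.02 ** 2       # process/measurement variances (tuning)
f = dt / A
for z in levels:
    # Predict: level integrates the flow state, flow is a random walk.
    x = [x[0] + f * x[1], x[1]]
    P = [[P[0][0] + f * (P[0][1] + P[1][0]) + f * f * P[1][1] + q_lvl,
          P[0][1] + f * P[1][1]],
         [P[1][0] + f * P[1][1], P[1][1] + q_flow]]
    # Update with the level measurement z = level + noise.
    S = P[0][0] + r
    K = [P[0][0] / S, P[1][0] / S]
    y = z - x[0]
    x = [x[0] + K[0] * y, x[1] + K[1] * y]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]

print(round(x[1], 2))   # estimated net inflow [m^3/s], recovered from level alone
```

The filter both smooths the sensor noise (first feature above) and estimates an unmeasured term, the flow, on-line (second feature).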

  17. Environmental Radiation Effects on Mammals A Dynamical Modeling Approach

    CERN Document Server

    Smirnova, Olga A

    2010-01-01

    This text is devoted to the theoretical studies of radiation effects on mammals. It uses the framework of developed deterministic mathematical models to investigate the effects of both acute and chronic irradiation in a wide range of doses and dose rates on vital body systems including hematopoiesis, small intestine and humoral immunity, as well as on the development of autoimmune diseases. Thus, these models can contribute to the development of the system and quantitative approaches in radiation biology and ecology. This text is also of practical use. Its modeling studies of the dynamics of granulocytopoiesis and thrombocytopoiesis in humans testify to the efficiency of employment of the developed models in the investigation and prediction of radiation effects on these hematopoietic lines. These models, as well as the properly identified models of other vital body systems, could provide a better understanding of the radiation risks to health. The modeling predictions will enable the implementation of more ef...

  18. The standard data model approach to patient record transfer.

    Science.gov (United States)

    Canfield, K; Silva, M; Petrucci, K

    1994-01-01

    This paper develops an approach to electronic data exchange of patient records from Ambulatory Encounter Systems (AESs). This approach assumes that the AES is based upon a standard data model. The data modeling standard used here is IDEF1X for Entity/Relationship (E/R) modeling. Each site that uses a relational database implementation of this standard data model (or a subset of it) can exchange very detailed patient data with other such sites using industry standard tools and without excessive programming effort. This design is detailed below for a demonstration project between the research-oriented geriatric clinic at the Baltimore Veterans Affairs Medical Center (BVAMC) and the Laboratory for Healthcare Informatics (LHI) at the University of Maryland.

  19. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, which are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
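For least-squares fits with Gaussian errors, AIC reduces to the standard form n·ln(RSS/n) + 2k, which makes a minimal model-selection example easy to run. The data below are synthetic and the model pair (constant mean versus straight line) is chosen only to show the mechanics:

```python
import math

def aic(rss, n, k):
    """AIC for least squares with Gaussian errors: n*ln(RSS/n) + 2k.
    k counts the fitted coefficients plus the error variance."""
    return n * math.log(rss / n) + 2 * k

# Toy data, roughly linear with slope 2.
x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [1.1, 2.9, 5.2, 7.1, 8.8, 11.2, 12.9, 15.1]
n = len(x)

# Model 1: y = c (one coefficient + variance -> k = 2).
c = sum(y) / n
rss1 = sum((yi - c) ** 2 for yi in y)

# Model 2: y = a + b*x (two coefficients + variance -> k = 3), closed-form OLS.
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
rss2 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

# The model with the lower AIC is preferred; here the line wins decisively.
print(round(aic(rss1, n, 2), 1), round(aic(rss2, n, 3), 1))
```

The 2k term is the bias correction mentioned above: it penalizes the extra parameter so that an improved fit must "pay for" its added complexity.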

  20. Real-space renormalization group approach to the Anderson model

    Science.gov (United States)

    Campbell, Eamonn

    Many of the most interesting electronic behaviours currently being studied are associated with strong correlations. In addition, many of these materials are disordered either intrinsically or due to doping. Solving interacting systems exactly is extremely computationally expensive, and approximate techniques developed for strongly correlated systems are not easily adapted to include disorder. As a non-interacting disordered model, it makes sense to consider the Anderson model as a first step in developing an approximate method of solution to the interacting and disordered Anderson-Hubbard model. Our renormalization group (RG) approach is modeled on that proposed by Johri and Bhatt [23]. We found an error in their work which we have corrected in our procedure. After testing the execution of the RG, we benchmarked the density of states and inverse participation ratio results against exact diagonalization. Our approach is significantly faster than exact diagonalization and is most accurate in the limit of strong disorder.

  1. Model Convolution: A Computational Approach to Digital Image Interpretation

    Science.gov (United States)

    Gardner, Melissa K.; Sprague, Brian L.; Pearson, Chad G.; Cosgrove, Benjamin D.; Bicek, Andrew D.; Bloom, Kerry; Salmon, E. D.

    2010-01-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called “model-convolution,” which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory. PMID:20461132
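A minimal 1D sketch of the model-convolution idea: blur a hypothesized fluorophore distribution with the experimentally measured PSF and add noise of the measured magnitude, so the simulated image can be compared statistically with the experimental one. The Gaussian PSF shape and the noise level here are assumptions for illustration:

```python
import math, random

random.seed(2)

def gaussian_psf(sigma, half_width):
    """Discrete PSF sampled from a Gaussian of width sigma (pixels), normalized."""
    kernel = [math.exp(-0.5 * (i / sigma) ** 2)
              for i in range(-half_width, half_width + 1)]
    s = sum(kernel)
    return [k / s for k in kernel]

def model_convolution(truth, psf, noise_sd):
    """Blur the model fluorophore distribution with the measured PSF,
    then add noise of the experimentally measured magnitude."""
    half = len(psf) // 2
    blurred = []
    for i in range(len(truth)):
        acc = 0.0
        for j, w in enumerate(psf):
            k = i + j - half
            if 0 <= k < len(truth):
                acc += w * truth[k]
        blurred.append(acc + random.gauss(0, noise_sd))
    return blurred

# Model: two point-like fluorophore clusters on a 40-pixel line scan.
truth = [0.0] * 40
truth[12], truth[27] = 100.0, 60.0

simulated = model_convolution(truth, gaussian_psf(sigma=2.0, half_width=6),
                              noise_sd=1.0)
print(max(simulated) < max(truth))  # → True: blur spreads the peak intensity
```

The simulated image, not a deconvolved experimental image, is then what gets compared against the data, which is the reversal of direction the record describes.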

  2. MULTI MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION

    Directory of Open Access Journals (Sweden)

    Priyanka H U

    2016-09-01

    Full Text Available Developing predictive modelling solutions for risk estimation is extremely challenging in health-care informatics. Risk estimation involves integrating heterogeneous clinical sources that use different representations across health-care providers, making the task increasingly complex. Such sources are typically voluminous and diverse, and change significantly over time. Therefore, distributed and parallel computing tools, collectively termed big data tools, are needed to synthesize this information and assist the physician in making the right clinical decisions. In this work we propose a multi-model predictive architecture, a novel approach for combining the predictive ability of multiple models for better prediction accuracy. We demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study. Results show that the proposed multi-model predictive architecture provides better accuracy than the best single-model approach. By modelling the error of the predictive models we are able to choose the subset of models which yields accurate results. More information was modelled into the system by multi-level mining, which has resulted in enhanced predictive accuracy.
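One simple way to combine the predictive ability of multiple models, sketched below, is to weight each model by its accuracy on held-out validation data and take a weighted vote. The base models and data are toy stand-ins, not the learners or the Framingham data used in the paper:

```python
import random

random.seed(3)

# Toy binary risk data: two features, label driven by their (noisy) sum.
def sample(n):
    data = []
    for _ in range(n):
        x1, x2 = random.random(), random.random()
        y = 1 if x1 + x2 + random.gauss(0, 0.2) > 1.0 else 0
        data.append((x1, x2, y))
    return data

train, valid, test = sample(300), sample(100), sample(100)

# Three deliberately simple base models (stand-ins for trained learners).
def model_sum(x1, x2):  return 1.0 if x1 + x2 > 1.0 else 0.0
def model_x1(x1, x2):   return 1.0 if x1 > 0.5 else 0.0
def model_x2(x1, x2):   return 1.0 if x2 > 0.5 else 0.0

models = [model_sum, model_x1, model_x2]

def accuracy(model, data):
    return sum(model(x1, x2) == y for x1, x2, y in data) / len(data)

# Weight each model by validation accuracy (a proxy for modelling its error).
weights = [accuracy(m, valid) for m in models]

def ensemble(x1, x2):
    score = sum(w * m(x1, x2) for w, m in zip(weights, models)) / sum(weights)
    return 1 if score >= 0.5 else 0

acc_best = max(accuracy(m, test) for m in models)
acc_ens = accuracy(ensemble, test)
print(round(acc_best, 2), round(acc_ens, 2))
```

Whether the ensemble actually beats the best single model depends on how diverse and complementary the base models are; the paper's error-modelling step is precisely about selecting a subset for which it does.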

  3. A new approach of high speed cutting modelling: SPH method

    OpenAIRE

    LIMIDO, Jérôme; Espinosa, Christine; Salaün, Michel; Lacome, Jean-Luc

    2006-01-01

    The purpose of this study is to introduce a new approach to high speed cutting numerical modelling. A Lagrangian Smoothed Particle Hydrodynamics (SPH) based model is carried out using the Ls-Dyna software. SPH is a meshless method, thus large material distortions that occur in the cutting problem are easily managed, and SPH contact control permits a “natural” workpiece/chip separation. Estimated chip morphology and cutting forces are compared to machining dedicated code results and experimenta...

  4. Schwinger boson approach to the fully screened Kondo model.

    Science.gov (United States)

    Rech, J; Coleman, P; Zarand, G; Parcollet, O

    2006-01-13

    We apply the Schwinger boson scheme to the fully screened Kondo model and generalize the method to include antiferromagnetic interactions between ions. Our approach captures the Kondo crossover from local moment behavior to a Fermi liquid with a nontrivial Wilson ratio. When applied to the two-impurity model, the mean-field theory describes the "Varma-Jones" quantum phase transition between a valence bond state and a heavy Fermi liquid.

  5. Kallen Lehman approach to 3D Ising model

    Science.gov (United States)

    Canfora, F.

    2007-03-01

    A “Kallen-Lehman” approach to the Ising model, inspired by quantum field theory à la Regge, is proposed. The analogy with the Kallen-Lehman representation leads to a formula for the free energy of the 3D model with a few free parameters which could be matched with the numerical data. The possible application of this scheme to the spin glass case is briefly discussed.

  6. Modelling approaches in sedimentology: Introduction to the thematic issue

    Science.gov (United States)

    Joseph, Philippe; Teles, Vanessa; Weill, Pierre

    2016-09-01

    As an introduction to this thematic issue on "Modelling approaches in sedimentology", this paper gives an overview of the workshop held in Paris on 7 November 2013 during the 14th Congress of the French Association of Sedimentologists. A synthesis of the workshop in terms of concepts, spatial and temporal scales, constraining data, and scientific challenges is first presented, then a discussion on the possibility of coupling different models, the industrial needs, and the new potential domains of research is exposed.

  7. Modeling Electronic Circular Dichroism within the Polarizable Embedding Approach

    DEFF Research Database (Denmark)

    Nørby, Morten S; Olsen, Jógvan Magnus Haugaard; Steinmann, Casper

    2017-01-01

    We present a systematic investigation of the key components needed to model single chromophore electronic circular dichroism (ECD) within the polarizable embedding (PE) approach. By relying on accurate forms of the embedding potential, where especially the inclusion of local field effects...... sampling. We show that a significant number of snapshots are needed to avoid artifacts in the calculated electronic circular dichroism parameters due to insufficient configurational sampling, thus highlighting the efficiency of the PE model....

  8. Computational Models of Spreadsheet Development: Basis for Educational Approaches

    CERN Document Server

    Hodnigg, Karin; Mittermeir, Roland T

    2008-01-01

    Among the multiple causes of high error rates in spreadsheets, lack of proper training and of deep understanding of the computational model upon which spreadsheet computations rest might not be the least issue. The paper addresses this problem by presenting a didactical model focussing on cell interaction, thus exceeding the atomicity of cell computations. The approach is motivated by an investigation how different spreadsheet systems handle certain computational issues implied from moving cells, copy-paste operations, or recursion.

  9. Modeling Water Shortage Management Using an Object-Oriented Approach

    Science.gov (United States)

    Wang, J.; Senarath, S.; Brion, L.; Niedzialek, J.; Novoa, R.; Obeysekera, J.

    2007-12-01

    As a result of the increasing global population and the resulting urbanization, water shortage issues have received increased attention throughout the world. Water supply has not been able to keep up with increased demand for water, especially during times of drought. The use of an object-oriented (OO) approach coupled with efficient mathematical models is an effective tool in addressing discrepancies between water supply and demand. Object-oriented modeling has proven powerful and efficient in simulating natural behavior. This research presents a way to model water shortage management using the OO approach. Three groups of conceptual components using the OO approach are designed for the management model. The first group encompasses evaluation of natural behaviors and possible related management options. This evaluation includes assessing any discrepancy that might exist between water demand and supply. The second group is for decision making, which includes the determination of water use cutback amount and duration using established criteria. The third group is for implementation of the management options, which are restrictions of water usage at a local or regional scale. The loop is closed through a feedback mechanism where continuity in the time domain is established. As in many other regions, drought management is very important in south Florida. The Regional Simulation Model (RSM) is a finite volume, fully integrated hydrologic model used by the South Florida Water Management District to evaluate regional response to various planning alternatives including drought management. A trigger module was developed for RSM that encapsulates the OO approach to water shortage management. Rigorous testing of the module was performed using historical south Florida conditions. Keywords: Object-oriented, modeling, water shortage management, trigger module, Regional Simulation Model
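The three component groups (evaluation, decision, implementation with feedback) can be sketched as a toy OO loop. The class names and cutback tiers below are hypothetical, chosen only to mirror the structure described, and are not the actual design of the RSM trigger module:

```python
class Evaluation:
    """Group 1: assess the discrepancy between supply and demand."""
    def shortage(self, supply, demand):
        return max(0.0, demand - supply)

class Decision:
    """Group 2: turn a shortage into a cutback fraction via simple tiers."""
    def cutback(self, shortage, demand):
        ratio = shortage / demand if demand else 0.0
        if ratio > 0.3:
            return 0.30        # severe restriction phase (illustrative tiers)
        if ratio > 0.1:
            return 0.15
        return 0.0

class Implementation:
    """Group 3: apply the restriction; the reduced demand feeds back."""
    def apply(self, demand, cutback):
        return demand * (1.0 - cutback)

# The evaluate -> decide -> implement feedback loop over a few time steps.
ev, dec, imp = Evaluation(), Decision(), Implementation()
supply, demand = 72.0, 100.0
for step in range(3):
    gap = ev.shortage(supply, demand)
    demand = imp.apply(demand, dec.cutback(gap, demand))
print(round(demand, 2))  # → 72.25: demand is cut back toward available supply
```

The feedback closure is the point: each iteration's restricted demand becomes the input to the next evaluation, giving continuity in the time domain.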

  10. Urban Modelling with Typological Approach. Case Study: Merida, Yucatan, Mexico

    Science.gov (United States)

    Rodriguez, A.

    2017-08-01

    In three-dimensional models of urban historical reconstruction, lost contextual architecture is difficult to represent because, in contrast to the most important monuments, it has few written references. This is the case of Merida, Yucatan, Mexico during the Colonial Era (1542-1810), which has lost much of its heritage. An alternative that offers a hypothetical view of these elements is a typological-parametric definition that allows a 3D modeling approach to the most common features of this heritage evidence.

  11. Comparative flood damage model assessment: towards a European approach

    Directory of Open Access Journals (Sweden)

    B. Jongman

    2012-12-01

    Full Text Available There is a wide variety of flood damage models in use internationally, differing substantially in their approaches and economic estimates. Since these models are being used more and more as a basis for investment and planning decisions on an increasingly large scale, there is a need to reduce the uncertainties involved and develop a harmonised European approach, in particular with respect to the EU Flood Risks Directive. In this paper we present a qualitative and quantitative assessment of seven flood damage models, using two case studies of past flood events in Germany and the United Kingdom. The qualitative analysis shows that modelling approaches vary strongly, and that current methodologies for estimating infrastructural damage are not as well developed as methodologies for the estimation of damage to buildings. The quantitative results show that the model outcomes are very sensitive to uncertainty in both vulnerability (i.e. depth–damage functions) and exposure (i.e. asset values), whereby the former has a larger effect than the latter. We conclude that care needs to be taken when using aggregated land use data for flood risk assessment, and that it is essential to adjust asset values to the regional economic situation and property characteristics. We call for the development of a flexible but consistent European framework that applies best practice from existing models while providing room for including necessary regional adjustments.
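The sensitivity to depth-damage functions and asset values noted in this record is easy to see in a minimal calculation of the kind such models perform: damage = damage fraction(depth) × asset value, summed over exposed buildings. The curve points and property values below are invented for the example:

```python
def interpolate(curve, depth):
    """Piecewise-linear depth-damage curve: depth [m] -> damage fraction."""
    pts = sorted(curve)
    if depth <= pts[0][0]:
        return pts[0][1]
    for (d0, f0), (d1, f1) in zip(pts, pts[1:]):
        if depth <= d1:
            return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)
    return pts[-1][1]          # cap at the last point of the curve

# Illustrative residential depth-damage curve (fractions are made up).
curve = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.35), (2.0, 0.55), (4.0, 0.8)]

# Exposed buildings as (inundation depth [m], asset value [EUR]).
buildings = [(0.8, 250_000), (1.5, 180_000), (0.3, 300_000)]
total = sum(interpolate(curve, d) * v for d, v in buildings)
print(round(total))  # → 189500
```

Shifting either the curve or the asset values rescales the result directly, which is why the paper finds the outcome dominated by these two inputs.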

  12. Similarity transformation approach to identifiability analysis of nonlinear compartmental models.

    Science.gov (United States)

    Vajda, S; Godfrey, K R; Rabitz, H

    1989-04-01

    Through use of the local state isomorphism theorem instead of the algebraic equivalence theorem of linear systems theory, the similarity transformation approach is extended to nonlinear models, resulting in finitely verifiable sufficient and necessary conditions for global and local identifiability. The approach requires testing of certain controllability and observability conditions, but in many practical examples these conditions prove very easy to verify. In principle the method also involves nonlinear state variable transformations, but in all of the examples presented in the paper the transformations turn out to be linear. The method is applied to an unidentifiable nonlinear model and a locally identifiable nonlinear model, and these are the first nonlinear models other than bilinear models where the reason for lack of global identifiability is nontrivial. The method is also applied to two models with Michaelis-Menten elimination kinetics, both of considerable importance in pharmacokinetics, and for both of which the complicated nature of the algebraic equations arising from the Taylor series approach has hitherto defeated attempts to establish identifiability results for specific input functions.

  13. a Study of Urban Stormwater Modeling Approach in Singapore Catchment

    Science.gov (United States)

    Liew, S. C.; Liong, S. Y.; Vu, M. T.

    2011-07-01

    Urbanization has the direct effect of increasing the amount of surface runoff to be discharged through man-made drainage systems. Thus, Singapore's rapid urbanization has drawn great attention to flooding issues. In view of this, a proper stormwater modelling approach is necessary for the assessment, planning, design, and control of the storm and combined sewerage systems. Impacts of urbanization on surface runoff and catchment flooding in Singapore are studied in this paper. In this study, the application of SOBEK-urban 1D is introduced on model catchments, and a hypothetical catchment model is created for simulation purposes. The stormwater modelling approach using SOBEK-urban offers a comprehensive modelling tool for simple or extensive urban drainage systems consisting of sewers and open channels, regardless of the size and complexity of the network. The findings from the present study show that stormwater modelling is able to identify flood-prone areas and the impact of the anticipated sea level on the urban drainage network. Consequently, the performance of the urban drainage system can be improved and early prevention approaches can be carried out.
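The urbanization effect that motivates such studies, more impervious surface means more peak runoff, can be illustrated with the standard rational method, Q = C·i·A (a back-of-envelope formula, not SOBEK itself). The coefficients and storm values below are illustrative:

```python
# Rational method: Q = C * i * A, with runoff coefficient C (dimensionless),
# rainfall intensity i and catchment area A giving peak flow Q [m^3/s].
def peak_flow(c, intensity_mm_per_hr, area_ha):
    i = intensity_mm_per_hr / 1000.0 / 3600.0   # mm/h -> m/s
    a = area_ha * 10_000.0                      # ha -> m^2
    return c * i * a

# Same 50 ha catchment and 80 mm/h design storm, before and after development:
pre_urban = peak_flow(c=0.3, intensity_mm_per_hr=80.0, area_ha=50.0)
urbanised = peak_flow(c=0.8, intensity_mm_per_hr=80.0, area_ha=50.0)
print(round(pre_urban, 2), round(urbanised, 2))  # → 3.33 8.89
```

Raising the runoff coefficient from 0.3 to 0.8 scales the design peak flow by the same factor, which is the extra load the drainage network model has to convey.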

  14. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Klos, Richard

    2008-03-15

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment

  15. A vector relational data modeling approach to Insider threat intelligence

    Science.gov (United States)

    Kelly, Ryan F.; Anderson, Thomas S.

    2016-05-01

    We address the problem of detecting insider threats before they can do harm. In many cases, co-workers notice indications of suspicious activity prior to insider threat attacks. A partial solution to this problem requires an understanding of how information can better traverse the communication network between human intelligence and insider threat analysts. Our approach employs modern mobile communications technology and scale free network architecture to reduce the network distance between human sensors and analysts. In order to solve this problem, we propose a Vector Relational Data Modeling approach to integrate human "sensors," geo-location, and existing visual analytics tools. This integration problem is known to be difficult due to quadratic increases in cost associated with complex integration solutions. A scale free network integration approach using vector relational data modeling is proposed as a method for reducing network distance without increasing cost.

  16. A discrete Lagrangian based direct approach to macroscopic modelling

    Science.gov (United States)

    Sarkar, Saikat; Nowruzpour, Mohsen; Reddy, J. N.; Srinivasa, A. R.

    2017-01-01

    A direct discrete Lagrangian based approach, designed at a length scale of interest, to characterize the response of a body is proposed. The main idea is to understand the dynamics of a deformable body via a Lagrangian corresponding to a coupled interaction of rigid particles in the reduced dimension. We argue that the usual practice of describing the laws of a deformable body in the continuum limit is redundant, because for most practical problems analytical solutions are not available. Since the continuum limit is not taken, the framework automatically relaxes the requirement of differentiability of field variables. The discrete Lagrangian based approach is illustrated by deriving an equivalent of the Euler-Bernoulli beam model. A few test examples are solved, which demonstrate that the derived non-local model predicts lower deflections in comparison to classical Euler-Bernoulli beam solutions. We have also included crack propagation in thin structures for isotropic and anisotropic cases using the Lagrangian based approach.

  17. Reconciliation with oneself and with others: From approach to model

    Directory of Open Access Journals (Sweden)

    Nikolić-Ristanović Vesna

    2010-01-01

    Full Text Available This paper presents the approach to dealing with war and its consequences that was developed within the Victimology Society of Serbia over the last five years, in the framework of the Association Joint Action for Truth and Reconciliation (ZAIP). First, a short review of the Association and of the process through which the ZAIP approach to dealing with the past was developed is presented. Then, a detailed description of the approach itself follows, identifying its most important specificities. In the conclusion, next steps are suggested, aimed at developing a model of reconciliation that is grounded in the ZAIP approach and appropriate to the social context of Serbia and its surroundings.

  18. EXTENDED MODEL OF COMPETITIVENESS THROUGH APPLICATION OF NEW APPROACH DIRECTIVES

    Directory of Open Access Journals (Sweden)

    Slavko Arsovski

    2009-03-01

    Full Text Available The basic subject of this work is a model of the impact of the New Approach directives on product quality and safety and on the competitiveness of our companies. The work presents a working hypothesis based on experts' experience, since the relevant infrastructure has not been examined until now: it is not known which products or industries of Serbia are covered by the New Approach directives and the CE mark, nor what the effects of using the CE mark are. This work should indicate the existing reserves in quality and product safety, the level of possible improvement in competitiveness, and the potential for increased profit from discharging the New Approach directive requirements.

  19. Vibro-acoustics of porous materials - waveguide modeling approach

    DEFF Research Database (Denmark)

    Darula, Radoslav; Sorokin, Sergey V.

    2016-01-01

    The porous material is considered as a compound multi-layered waveguide (i.e. a fluid layer surrounded with elastic layers) with traction-free boundary conditions. The attenuation of the vibro-acoustic waves in such a material is assessed. This approach is compared with a conventional Biot's model ... in porous materials.

  20. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.

  1. Teaching Modeling with Partial Differential Equations: Several Successful Approaches

    Science.gov (United States)

    Myers, Joseph; Trubatch, David; Winkel, Brian

    2008-01-01

    We discuss the introduction and teaching of partial differential equations (heat and wave equations) via modeling physical phenomena, using a new approach that encompasses constructing difference equations and implementing these in a spreadsheet, numerically solving the partial differential equations using the numerical differential equation…

  2. A Behavioral Decision Making Modeling Approach Towards Hedging Services

    NARCIS (Netherlands)

    Pennings, J.M.E.; Candel, M.J.J.M.; Egelkraut, T.M.

    2003-01-01

    This paper takes a behavioral approach toward the market for hedging services. A behavioral decision-making model is developed that provides insight into how and why owner-managers decide the way they do regarding hedging services. Insight into those choice processes reveals information needed by fi

  3. A fuzzy approach to the Weighted Overlap Dominance model

    DEFF Research Database (Denmark)

    Franco de los Rios, Camilo Andres; Hougaard, Jens Leth; Nielsen, Kurt

    2013-01-01

    in an interactive way, where input data can take the form of uniquely-graded or interval-valued information. Here we explore the Weighted Overlap Dominance (WOD) model from a fuzzy perspective and its outranking approach to decision support and multidimensional interval analysis. Firstly, imprecision measures...

  4. Methodological Approach for Modeling of Multienzyme in-pot Processes

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia; Roman Martinez, Alicia; Sin, Gürkan;

    2011-01-01

    This paper presents a methodological approach for modeling multi-enzyme in-pot processes. The methodology is exemplified stepwise through the bi-enzymatic production of N-acetyl-D-neuraminic acid (Neu5Ac) from N-acetyl-D-glucosamine (GlcNAc). In this case study, sensitivity analysis is also used...

  5. Towards modeling future energy infrastructures - the ELECTRA system engineering approach

    DEFF Research Database (Denmark)

    Uslar, Mathias; Heussen, Kai

    2016-01-01

    Within this contribution, we provide an overview based on previous work conducted in the ELECTRA project to come up with a consistent method for modeling the ELECTRA WoC approach according to the methods established with the M/490 mandate of the European Commission. We will motivate the use of th...

  6. Pruning Chinese trees : an experimental and modelling approach

    NARCIS (Netherlands)

    Zeng, Bo

    2002-01-01

    Pruning of trees, in which some branches are removed from the lower crown of a tree, has been extensively used in China in silvicultural management for many purposes. With an experimental and modelling approach, the effects of pruning on tree growth and on the harvest of plant material were studied.

  7. Evaluating Interventions with Multimethod Data: A Structural Equation Modeling Approach

    Science.gov (United States)

    Crayen, Claudia; Geiser, Christian; Scheithauer, Herbert; Eid, Michael

    2011-01-01

    In many intervention and evaluation studies, outcome variables are assessed using a multimethod approach comparing multiple groups over time. In this article, we show how evaluation data obtained from a complex multitrait-multimethod-multioccasion-multigroup design can be analyzed with structural equation models. In particular, we show how the…

  8. Teaching Modeling with Partial Differential Equations: Several Successful Approaches

    Science.gov (United States)

    Myers, Joseph; Trubatch, David; Winkel, Brian

    2008-01-01

    We discuss the introduction and teaching of partial differential equations (heat and wave equations) via modeling physical phenomena, using a new approach that encompasses constructing difference equations and implementing these in a spreadsheet, numerically solving the partial differential equations using the numerical differential equation…

  9. A Metacognitive-Motivational Model of Surface Approach to Studying

    Science.gov (United States)

    Spada, Marcantonio M.; Moneta, Giovanni B.

    2012-01-01

    In this study, we put forward and tested a model of how surface approach to studying during examination preparation is influenced by the trait variables of motivation and metacognition and the state variables of avoidance coping and evaluation anxiety. A sample of 528 university students completed, one week before examinations, the following…

  10. A New Approach for Testing the Rasch Model

    Science.gov (United States)

    Kubinger, Klaus D.; Rasch, Dieter; Yanagida, Takuya

    2011-01-01

    Though calibration of an achievement test within psychological and educational context is very often carried out by the Rasch model, data sampling is hardly designed according to statistical foundations. However, Kubinger, Rasch, and Yanagida (2009) recently suggested an approach for the determination of sample size according to a given Type I and…

  11. Comparing State SAT Scores Using a Mixture Modeling Approach

    Science.gov (United States)

    Kim, YoungKoung Rachel

    2009-01-01

    Presented at the national conference for AERA (American Educational Research Association) in April 2009. The large variability of SAT taker population across states makes state-by-state comparisons of the SAT scores challenging. Using a mixture modeling approach, therefore, the current study presents a method of identifying subpopulations in terms…

  12. The Bipolar Approach: A Model for Interdisciplinary Art History Courses.

    Science.gov (United States)

    Calabrese, John A.

    1993-01-01

    Describes a college level art history course based on the opposing concepts of Classicism and Romanticism. Contends that all creative work, such as film or architecture, can be categorized according to this bipolar model. Includes suggestions for objects to study and recommends this approach for art education at all education levels. (CFR)

  13. Non-frontal model based approach to forensic face recognition

    NARCIS (Netherlands)

    Dutta, Abhishek; Veldhuis, Raymond; Spreeuwers, Luuk

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance vie

  14. Smeared crack modelling approach for corrosion-induced concrete damage

    DEFF Research Database (Denmark)

    Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik

    2017-01-01

    compared to experimental data obtained by digital image correlation and published in the literature. Excellent agreements between experimentally observed and numerically predicted crack patterns at the micro and macro scale indicate the capability of the modelling approach to accurately capture corrosion...

  15. Towards modeling future energy infrastructures - the ELECTRA system engineering approach

    DEFF Research Database (Denmark)

    Uslar, Mathias; Heussen, Kai

    2016-01-01

    Within this contribution, we provide an overview based on previous work conducted in the ELECTRA project to come up with a consistent method for modeling the ELECTRA WoC approach according to the methods established with the M/490 mandate of the European Commission. We will motivate the use...

  16. Atomistic approach for modeling metal-semiconductor interfaces

    DEFF Research Database (Denmark)

    Stradi, Daniele; Martinez, Umberto; Blom, Anders

    2016-01-01

    realistic metal-semiconductor interfaces and allows for a direct comparison between theory and experiments via the I–V curve. In particular, it will be demonstrated how doping — and bias — modifies the Schottky barrier, and how finite size models (the slab approach) are unable to describe these interfaces...

  17. CFD Approaches for Modelling Bubble Entrainment by an Impinging Jet

    Directory of Open Access Journals (Sweden)

    Martin Schmidtke

    2009-01-01

    Full Text Available This contribution presents different approaches for the modeling of gas entrainment under water by a plunging jet. Since the generation of bubbles happens on a scale which is smaller than the bubbles, this process cannot be resolved in meso-scale simulations, which include the full length of the jet and its environment. This is why the gas entrainment has to be modeled in meso-scale simulations. In the frame of a Euler-Euler simulation, the local morphology of the phases has to be considered in the drag model. For example, the gas is a continuous phase above the water level but bubbly below the water level. Various drag models are tested and their influence on the gas void fraction below the water level is discussed. The algebraic interface area density (AIAD) model applies a drag coefficient for bubbles and a different drag coefficient for the free surface. If the AIAD model is used for the simulation of impinging jets, the gas entrainment depends on the free parameters included in this model. The calculated gas entrainment can be adapted via these parameters. Therefore, an advanced AIAD approach could be used in future for the implementation of models (e.g., correlations) for the gas entrainment.
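
    The blending idea behind such morphology-aware drag closures can be sketched in a few lines. The coefficients, void-fraction breakpoints, and the linear blending function below are illustrative assumptions, not the actual AIAD closure:

```python
# Illustrative sketch of morphology-dependent drag: the local drag
# coefficient interpolates between a bubble value (bubbly regime) and a
# free-surface value as the local gas void fraction rises. All numbers
# here are made up for illustration.

def blended_drag(alpha_gas, cd_bubble=0.44, cd_surface=0.01,
                 lo=0.3, hi=0.7):
    """Linearly blend drag coefficients by local gas void fraction."""
    if alpha_gas <= lo:
        return cd_bubble
    if alpha_gas >= hi:
        return cd_surface
    w = (alpha_gas - lo) / (hi - lo)   # weight of the free-surface regime
    return (1 - w) * cd_bubble + w * cd_surface

print(blended_drag(0.1), blended_drag(0.5), blended_drag(0.9))  # 0.44 0.225 0.01
```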

  18. Approach for workflow modeling using π-calculus

    Institute of Scientific and Technical Information of China (English)

    杨东; 张申生

    2003-01-01

    As a variant of process algebra, π-calculus can describe the interactions between evolving processes. By modeling activity as a process interacting with other processes through ports, this paper presents a new approach: representing workflow models using π-calculus. As a result, the model can characterize the dynamic behaviors of the workflow process in terms of the LTS (Labeled Transition Semantics) semantics of π-calculus. The main advantage of the workflow model's formal semantic is that it allows for verification of the model's properties, such as deadlock-free and normal termination. Moreover, the equivalence of workflow models can be checked through weak bisimulation theorem in the π-calculus, thus facilitating the optimization of business processes.

  19. Approach for workflow modeling using π-calculus

    Institute of Scientific and Technical Information of China (English)

    杨东; 张申生

    2003-01-01

    As a variant of process algebra, π-calculus can describe the interactions between evolving processes. By modeling activity as a process interacting with other processes through ports, this paper presents a new approach: representing workflow models using π-calculus. As a result, the model can characterize the dynamic behaviors of the workflow process in terms of the LTS (Labeled Transition Semantics) semantics of π-calculus. The main advantage of the workflow model's formal semantic is that it allows for verification of the model's properties, such as deadlock-free and normal termination. Moreover, the equivalence of workflow models can be checked through weak bisimulation theorem in the π-calculus, thus facilitating the optimization of business processes.
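
    The verification step the abstract describes (checking deadlock-freedom over the model's labeled transition semantics) can be sketched, far below the generality of π-calculus, as a reachability check on a finite labeled transition system. The workflow, labels, and final states below are hypothetical:

```python
# Illustrative sketch (not full pi-calculus): a workflow as a labeled
# transition system (LTS); deadlock-freedom means every reachable
# non-final state has at least one outgoing transition.

def reachable(transitions, start):
    """Return all states reachable from `start` via labeled transitions."""
    seen, stack = {start}, [start]
    while stack:
        state = stack.pop()
        for (src, _label, dst) in transitions:
            if src == state and dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return seen

def deadlock_free(transitions, start, final_states):
    """True if every reachable state is final or has an outgoing move."""
    outgoing = {src for (src, _lbl, _dst) in transitions}
    return all(s in final_states or s in outgoing
               for s in reachable(transitions, start))

# Hypothetical order-handling workflow: receive -> check -> (approve|reject)
wf = [("start", "receive", "checking"),
      ("checking", "approve", "done"),
      ("checking", "reject", "done")]
print(deadlock_free(wf, "start", {"done"}))  # True
```

Equivalence checking via weak bisimulation would require comparing two such systems up to internal moves, which is beyond this sketch.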

  20. Multiphysics modeling using COMSOL a first principles approach

    CERN Document Server

    Pryor, Roger W

    2011-01-01

    Multiphysics Modeling Using COMSOL rapidly introduces the senior level undergraduate, graduate or professional scientist or engineer to the art and science of computerized modeling for physical systems and devices. It offers a step-by-step modeling methodology through examples that are linked to the Fundamental Laws of Physics through a First Principles Analysis approach. The text explores a breadth of multiphysics models in coordinate systems that range from 1D to 3D and introduces the readers to the numerical analysis modeling techniques employed in the COMSOL Multiphysics software. After readers have built and run the examples, they will have a much firmer understanding of the concepts, skills, and benefits acquired from the use of computerized modeling techniques to solve their current technological problems and to explore new areas of application for their particular technological areas of interest.

  1. Evaluation of Workflow Management Systems - A Meta Model Approach

    Directory of Open Access Journals (Sweden)

    Michael Rosemann

    1998-11-01

    Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now a large number of systems, that follow different workflow paradigms, are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction on the topic of meta modeling the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. Exemplary, an organisational reference meta model is presented, which helps users in specifying their requirements for a workflow management system.

  2. A simplified GIS approach to modeling global leaf water isoscapes.

    Directory of Open Access Journals (Sweden)

    Jason B West

    Full Text Available The stable hydrogen (δ²H) and oxygen (δ¹⁸O) isotope ratios of organic and inorganic materials record biological and physical processes through the effects of substrate isotopic composition and fractionations that occur as reactions proceed. At large scales, these processes can exhibit spatial predictability because of the effects of coherent climatic patterns over the Earth's surface. Attempts to model spatial variation in the stable isotope ratios of water have been made for decades. Leaf water has a particular importance for some applications, including plant organic materials that record spatial and temporal climate variability and that may be a source of food for migrating animals. It is also an important source of the variability in the isotopic composition of atmospheric gases. Although efforts to model global-scale leaf water isotope ratio spatial variation have been made (especially of δ¹⁸O), significant uncertainty remains in models and their execution across spatial domains. We introduce here a Geographic Information System (GIS) approach to the generation of global, spatially-explicit isotope landscapes (= isoscapes) of "climate normal" leaf water isotope ratios. We evaluate the approach and the resulting products by comparison with simulation model outputs and point measurements, where obtainable, over the Earth's surface. The isoscapes were generated using biophysical models of isotope fractionation and spatially continuous precipitation isotope and climate layers as input model drivers. Leaf water δ¹⁸O isoscapes produced here generally agreed with latitudinal averages from GCM/biophysical model products, as well as mean values from point measurements. These results show global-scale spatial coherence in leaf water isotope ratios, similar to that observed for precipitation and validate the GIS approach to modeling leaf water isotopes. These results demonstrate that relatively simple models of leaf water enrichment

  3. Polynomial Chaos Expansion Approach to Interest Rate Models

    Directory of Open Access Journals (Sweden)

    Luca Di Persio

    2015-01-01

    Full Text Available The Polynomial Chaos Expansion (PCE) technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamic of certain engineering models and their related simulations. In the present paper, we use the PCE approach in order to analyze some equity and interest rate models. In particular, we take into consideration those models which are based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
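
    A minimal sketch of the PCE idea for one of the cited building blocks: a lognormal quantity Y = exp(σξ) with ξ ~ N(0,1) (the Geometric Brownian Motion ingredient) has the known Hermite-chaos coefficients c_n = exp(σ²/2)·σⁿ/n!, so a truncated expansion recovers the exact lognormal mean and variance. The order and σ below are arbitrary choices:

```python
import math

# PCE sketch: expand Y = exp(sigma*xi), xi ~ N(0,1), in probabilists'
# Hermite polynomials He_n. The PCE mean is c_0 and the PCE variance is
# sum_{n>=1} c_n^2 * n! (orthogonality of the He_n under the Gaussian).

def pce_moments(sigma, order):
    coeffs = [math.exp(sigma**2 / 2) * sigma**n / math.factorial(n)
              for n in range(order + 1)]
    mean = coeffs[0]
    var = sum(c**2 * math.factorial(n) for n, c in enumerate(coeffs) if n > 0)
    return mean, var

sigma = 0.4
mean, var = pce_moments(sigma, order=8)
exact_mean = math.exp(sigma**2 / 2)
exact_var = math.exp(sigma**2) * (math.exp(sigma**2) - 1)
print(abs(mean - exact_mean) < 1e-12, abs(var - exact_var) < 1e-6)  # True True
```

Already at order 8 the truncated variance matches the exact lognormal variance to well below 1e-6, which is the convergence behavior the UQ literature exploits.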

  4. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
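
    The forward recursion at the core of any such HMM-based approach can be sketched as follows; the states, transition and emission probabilities, and observation alphabet are made-up illustrations, not the paper's PHMM:

```python
# Minimal forward algorithm for a discrete HMM over popularity
# observations ("up"/"down" chart moves). It computes the likelihood of
# an observation sequence by summing over all hidden state paths.

def forward(obs, states, start_p, trans_p, emit_p):
    """Likelihood of an observation sequence under a discrete HMM."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

states = ("rising", "falling")
start_p = {"rising": 0.6, "falling": 0.4}
trans_p = {"rising": {"rising": 0.7, "falling": 0.3},
           "falling": {"rising": 0.4, "falling": 0.6}}
emit_p = {"rising": {"up": 0.8, "down": 0.2},
          "falling": {"up": 0.3, "down": 0.7}}
p = forward(("up", "up", "down"), states, start_p, trans_p, emit_p)
print(round(p, 4))  # 0.1399
```

In practice the parameters would be learned (e.g. by Baum–Welch) from observed ranking sequences rather than fixed by hand.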

  5. On a Markovian approach for modeling passive solar devices

    Energy Technology Data Exchange (ETDEWEB)

    Bottazzi, F.; Liebling, T.M. (Chaire de Recherche Operationelle, Ecole Polytechnique Federale de Lausanne (Switzerland)); Scartezzini, J.L.; Nygaard-Ferguson, M. (Lab. d' Energie Solaire et de Physique du Batiment, Ecole Polytechnique Federale de Lausanne (Switzerland))

    1991-01-01

    Stochastic models for the analysis of the energy and thermal comfort performances of passive solar devices have been increasingly studied for over a decade. A new approach to thermal building modeling, based on Markov chains, is proposed here to combine both the accuracy of traditional dynamic simulation with the practical advantages of simplified methods. A main difficulty of the Markovian approach is the discretization of the system variables. Efficient procedures have been developed to carry out this discretization and several numerical experiments have been performed to analyze the possibilities and limitations of the Markovian model. Despite its restrictive assumptions, it will be shown that accurate results are indeed obtained by this method. However, due to discretization, computer memory reqirements are more than inversely proportional to accuracy. (orig.).
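
    A toy version of the Markovian idea, with made-up numbers: discretize the thermal state into a few bins, specify a transition matrix, and read the long-run behavior off the stationary distribution of the chain:

```python
# Markov-chain sketch of a discretized thermal state: three temperature
# bins (cold / comfortable / hot) with an assumed row-stochastic
# transition matrix. The stationary distribution gives the long-run
# fraction of time spent in each bin.

def stationary(P, iters=200):
    """Stationary distribution by power iteration on a row-stochastic matrix."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.6, 0.4, 0.0],   # cold
     [0.2, 0.6, 0.2],   # comfortable
     [0.0, 0.5, 0.5]]   # hot
pi = stationary(P)
print([round(x, 3) for x in pi])  # [0.263, 0.526, 0.211]
```

The discretization trade-off the abstract mentions is visible here: finer bins mean a larger transition matrix, with memory growing quadratically in the number of states.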

  6. Disturbed state concept as unified constitutive modeling approach

    Directory of Open Access Journals (Sweden)

    Chandrakant S. Desai

    2016-06-01

    Full Text Available A unified constitutive modeling approach is highly desirable to characterize a wide range of engineering materials subjected simultaneously to the effect of a number of factors such as elastic, plastic and creep deformations, stress path, volume change, microcracking leading to fracture, failure and softening, stiffening, and mechanical and environmental forces. There are hardly available such unified models. The disturbed state concept (DSC) is considered to be a unified approach and is able to provide material characterization for almost all of the above factors. This paper presents a description of the DSC, and statements for determination of parameters based on triaxial, multiaxial and interface tests. Statements of DSC and validation at the specimen level and at the boundary value problem levels are also presented. An extensive list of publications by the author and others is provided at the end. The DSC is considered to be a unique and versatile procedure for modeling behaviors of engineering materials and interfaces.

  7. Disturbed state concept as unified constitutive modeling approach

    Institute of Scientific and Technical Information of China (English)

    Chandrakant S. Desai

    2016-01-01

    A unified constitutive modeling approach is highly desirable to characterize a wide range of engineering materials subjected simultaneously to the effect of a number of factors such as elastic, plastic and creep deformations, stress path, volume change, microcracking leading to fracture, failure and softening, stiffening, and mechanical and environmental forces. There are hardly available such unified models. The disturbed state concept (DSC) is considered to be a unified approach and is able to provide material characterization for almost all of the above factors. This paper presents a description of the DSC, and statements for determination of parameters based on triaxial, multiaxial and interface tests. Statements of DSC and validation at the specimen level and at the boundary value problem levels are also presented. An extensive list of publications by the author and others is provided at the end. The DSC is considered to be a unique and versatile procedure for modeling behaviors of engineering materials and interfaces.

  8. Proposal: A Hybrid Dictionary Modelling Approach for Malay Tweet Normalization

    Science.gov (United States)

    Muhamad, Nor Azlizawati Binti; Idris, Norisma; Arshi Saloot, Mohammad

    2017-02-01

    Malay Twitter messages deviate markedly from the standard language. Malay Tweets are widely used by Twitter users, especially in the Malay archipelago. Thus, it is important to build a normalization system that can translate Malay Tweet language into standard Malay. Research in natural language processing has mainly focused on normalizing English Twitter messages, while few studies have addressed Malay Tweets. This paper proposes an approach to normalize Malay Twitter messages based on hybrid dictionary modelling methods. The approach normalizes noisy Malay Twitter messages, such as colloquial language, novel words, and interjections, into standard Malay, using a language model and an N-gram model.
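
    A hedged sketch of dictionary-plus-language-model normalization; the lexicon and bigram counts below are invented stand-ins, not the paper's resources:

```python
# Each noisy token maps to candidate standard forms via a dictionary;
# a bigram score then picks the candidate that best follows the
# previously emitted word. Lexicon and counts are illustrative.

NOISY2STD = {"sy": ["saya"], "x": ["tak", "tidak"], "nk": ["nak", "hendak"]}
BIGRAM = {("saya", "tidak"): 5, ("saya", "tak"): 9, ("tak", "nak"): 7,
          ("tidak", "hendak"): 2, ("tak", "hendak"): 1, ("tidak", "nak"): 1}

def normalize(tokens):
    out = []
    for tok in tokens:
        candidates = NOISY2STD.get(tok, [tok])   # unknown tokens pass through
        prev = out[-1] if out else None
        # pick the candidate with the highest bigram count after `prev`
        best = max(candidates, key=lambda c: BIGRAM.get((prev, c), 0))
        out.append(best)
    return out

print(normalize(["sy", "x", "nk"]))  # ['saya', 'tak', 'nak']
```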

  9. Towards a CPN-Based Modelling Approach for Reconciling Verification and Implementation of Protocol Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2013-01-01

    Formal modelling of protocols is often aimed at one specific purpose such as verification or automatically generating an implementation. This leads to models that are useful for one purpose, but not for others. Being able to derive models for verification and implementation from a single model... Our approach has been developed in the context of the Coloured Petri Nets (CPNs) modelling language. We illustrate our approach by presenting a descriptive specification model of the Websocket protocol which is currently under development by the Internet Engineering Task Force (IETF), and we show...

  10. ON SOME APPROACHES TO ECONOMICMATHEMATICAL MODELING OF SMALL BUSINESS

    Directory of Open Access Journals (Sweden)

    Orlov A. I.

    2015-04-01

    Full Text Available Small business is an important part of the modern Russian economy. We give a wide panorama of possible approaches to constructing economic-mathematical models useful for describing the dynamics and management of small businesses. Since a variety of economic-mathematical and econometric model types can be used to describe particular small business problems, we found it useful to survey a fairly wide range of such models, which results in rather short descriptions of the individual models. Each model is described to a level from which an experienced professional in economic-mathematical modeling could, if necessary, develop a specific model through to design formulas and numerical results. Particular attention is paid to statistical methods for non-numeric data, currently the most pressing. We consider problems of economic-mathematical modeling in small business marketing. We have accumulated experience in applying this methodology to practical problems in small business marketing, in particular for consumer and industrial goods and educational services, as well as in the analysis and modeling of inflation, taxation and others. In marketing models of decision-making theory we apply rankings and ratings, and consider the problem of comparing averages. We present some models of the life cycle of small businesses: a flow-of-projects model, a niche-capture model, and a niche-selection model. We discuss the development of research on economic-mathematical modeling of small businesses.

  11. A validated approach for modeling collapse of steel structures

    Science.gov (United States)

    Saykin, Vitaliy Victorovich

    A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during a lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state-of-the-art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality is employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are

  12. GEOSPATIAL MODELLING APPROACH FOR 3D URBAN DENSIFICATION DEVELOPMENTS

    Directory of Open Access Journals (Sweden)

    O. Koziatek

    2016-06-01

    Full Text Available With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying developments in the vertical built dimension with mid- and high-rise buildings. The location of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units that operate in two spatial dimensions. Therefore, the objective of this research is to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D. The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE, and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present the simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI’s CityEngine software and the Computer Generated Architecture (CGA language.

  13. Geospatial Modelling Approach for 3d Urban Densification Developments

    Science.gov (United States)

    Koziatek, O.; Dragićević, S.; Li, S.

    2016-06-01

    With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying developments in the vertical built dimension with mid- and high-rise buildings. The location of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units that operate in two spatial dimensions. Therefore, the objective of this research is to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D). The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE), and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present the simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI's CityEngine software and the Computer Generated Architecture (CGA) language.
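
    The fuzzy MCE step of such a model can be sketched as simple linear membership functions combined by weighted linear combination into a suitability score. The criteria, breakpoints, and weights below are illustrative assumptions, not the study's calibration:

```python
# Fuzzy multi-criteria evaluation sketch: each criterion is rescaled to
# [0, 1] with a linear membership function, then combined by weighted
# linear combination into a site-suitability score.

def linear_membership(value, low, high):
    """0 below `low`, 1 above `high`, linear in between."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def suitability(site, weights):
    # assumed criteria: closer to transit and denser neighborhoods score higher
    transit = 1 - linear_membership(site["transit_dist_m"], 100, 2000)
    density = linear_membership(site["pop_density"], 1000, 10000)
    return weights["transit"] * transit + weights["density"] * density

site = {"transit_dist_m": 400, "pop_density": 7000}
score = suitability(site, {"transit": 0.6, "density": 0.4})
print(round(score, 3))  # 0.772
```

A full model would evaluate such scores over an irregular spatial tessellation and feed high-scoring parcels to the procedural 3D building generator.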

  14. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
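
    The variance-based first-order index such an analysis estimates can be sketched with the pick-freeze Monte Carlo estimator. The test function f = x1 + 2·x2, with known indices S1 = 0.2 and S2 = 0.8 for independent U(0,1) inputs, is an assumption chosen so the result can be checked analytically:

```python
import random
random.seed(0)

# Pick-freeze estimator of first-order Sobol indices: for coordinate i,
# re-evaluate f on a second sample that shares only coordinate i with the
# first; the covariance of the two outputs isolates Var(E[f | x_i]).

def sobol_first_order(f, dim, n=50_000):
    A = [[random.random() for _ in range(dim)] for _ in range(n)]
    B = [[random.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    mean = sum(fA) / n
    var = sum(v * v for v in fA) / n - mean * mean
    indices = []
    for i in range(dim):
        # freeze coordinate i from A, take all other coordinates from B
        ABi = [B[k][:i] + [A[k][i]] + B[k][i + 1:] for k in range(n)]
        cov = sum(fA[k] * f(ABi[k]) for k in range(n)) / n - mean * mean
        indices.append(cov / var)
    return indices

S = sobol_first_order(lambda x: x[0] + 2 * x[1], dim=2)
print([round(s, 2) for s in S])  # approx [0.2, 0.8]
```

For a black-box model like a cellular Potts simulation, `f` would wrap a full simulation run and return a scalar output measure, which is what makes the approach expensive but generic.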

  15. A systemic approach for modeling biological evolution using Parallel DEVS.

    Science.gov (United States)

    Heredia, Daniel; Sanz, Victorino; Urquia, Alfonso; Sandín, Máximo

    2015-08-01

    A new model for studying the evolution of living organisms is proposed in this manuscript. The proposed model is based on a non-neodarwinian systemic approach. The model is focused on considering several controversies and open discussions about modern evolutionary biology. Additionally, a simplification of the proposed model, named EvoDEVS, has been mathematically described using the Parallel DEVS formalism and implemented as a computer program using the DEVSLib Modelica library. EvoDEVS serves as an experimental platform to study different conditions and scenarios by means of computer simulations. Two preliminary case studies are presented to illustrate the behavior of the model and validate its results. EvoDEVS is freely available at http://www.euclides.dia.uned.es. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Kinetic equations modelling wealth redistribution: a comparison of approaches.

    Science.gov (United States)

    Düring, Bertram; Matthes, Daniel; Toscani, Giuseppe

    2008-11-01

    Kinetic equations modelling the redistribution of wealth in simple market economies are one of the major topics in the field of econophysics. We present a unifying approach to the qualitative study of a large variety of such models, which is based on a moment analysis in the related homogeneous Boltzmann equation, and on the use of suitable metrics for probability measures. As a consequence, we are able to classify the most important feature of the steady wealth distribution, namely the fatness of the Pareto tail, and the dynamical stability of the latter in terms of the model parameters. Our results apply, e.g., to the market model with risky investments [S. Cordier, L. Pareschi, and G. Toscani, J. Stat. Phys. 120, 253 (2005)], and to the model with quenched saving propensities [A. Chatterjee, B. K. Chakrabarti, and S. S. Manna, Physica A 335, 155 (2004)]. Also, we present results from numerical experiments that confirm the theoretical predictions.
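A minimal agent-based version of the saving-propensity exchange dynamics cited above can be sketched as follows. In each trade, both agents keep a fraction `saving` of their wealth and the remainder is pooled and split by a random share; the agent count, step count and saving value are illustrative choices, not taken from the paper.

```python
import random

def wealth_exchange(n_agents=500, n_steps=20000, saving=0.5, seed=7):
    # Pairwise wealth exchange with a fixed saving propensity:
    # each agent keeps `saving` of its wealth, the rest of the pair's
    # wealth is pooled and split by a uniform random share eps.
    rng = random.Random(seed)
    w = [1.0] * n_agents
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pool = (1.0 - saving) * (w[i] + w[j])
        eps = rng.random()
        w[i] = saving * w[i] + eps * pool
        w[j] = saving * w[j] + (1.0 - eps) * pool
    return w

w = wealth_exchange()
```

Each trade conserves the pair's total wealth exactly, so total wealth is an invariant of the dynamics while the distribution spreads out, which is the qualitative behavior the moment analysis studies.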

  17. A Computationally Efficient State Space Approach to Estimating Multilevel Regression Models and Multilevel Confirmatory Factor Models.

    Science.gov (United States)

    Gu, Fei; Preacher, Kristopher J; Wu, Wei; Yung, Yiu-Fai

    2014-01-01

    Although the state space approach for estimating multilevel regression models has been well established for decades in the time series literature, it has received little attention from educational and psychological researchers. In this article, we (a) introduce the state space approach for estimating multilevel regression models and (b) extend the state space approach for estimating multilevel factor models. A brief outline of the state space formulation is provided and then state space forms for univariate and multivariate multilevel regression models, and a multilevel confirmatory factor model, are illustrated. The utility of the state space approach is demonstrated with either a simulated or real example for each multilevel model. It is concluded that the results from the state space approach are essentially identical to those from specialized multilevel regression modeling and structural equation modeling software. More importantly, the state space approach offers researchers a computationally more efficient alternative to fit multilevel regression models with a large number of Level 1 units within each Level 2 unit or a large number of observations on each subject in a longitudinal study.

  18. Building Energy Modeling: A Data-Driven Approach

    Science.gov (United States)

    Cui, Can

    Buildings consume nearly 50% of the total energy in the United States, which drives the need to develop high-fidelity models for building energy systems. Extensive methods and techniques have been developed, studied, and applied to building energy simulation and forecasting, but most of this work has focused on dedicated modeling approaches for individual buildings. In this study, an integrated, computationally efficient and high-fidelity building energy modeling framework is proposed, concentrating on a generalized modeling approach for various types of buildings. First, a number of data-driven simulation models are reviewed and assessed on various types of computationally expensive simulation problems. Motivated by the conclusion that no model outperforms the others when amortized over diverse problems, a meta-learning based recommendation system for data-driven simulation modeling is proposed. To test the feasibility of the proposed framework on the building energy system, an extended application of the recommendation system for short-term building energy forecasting is deployed on various buildings. Finally, a Kalman filter-based data fusion technique is incorporated into the building recommendation system for on-line energy forecasting. Data fusion enables model calibration to update the state estimation in real time, which filters out noise and yields more accurate energy forecasts. The framework is composed of two modules: an off-line model recommendation module and an on-line model calibration module. Specifically, the off-line model recommendation module includes 6 widely used data-driven simulation models, which are ranked by the meta-learning recommendation system for off-line energy modeling on a given building scenario. Only a selective set of building physical and operational characteristic features is needed to complete the recommendation task. The on-line calibration module effectively addresses system uncertainties, where data fusion on
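The on-line calibration idea can be sketched with a scalar (local-level) Kalman filter that blends a forecast with noisy measurements. The state model, noise variances and measurement values below are all illustrative assumptions, not taken from the thesis.

```python
def kalman_update(x_est, p_est, z, q=0.01, r=0.5):
    # One predict/update cycle of a scalar Kalman filter.
    # x_est, p_est: prior state estimate and its variance
    # z: new noisy measurement; q, r: process and measurement
    # noise variances (illustrative values).
    p_pred = p_est + q                # predict: random-walk state model
    k = p_pred / (p_pred + r)         # Kalman gain
    x_new = x_est + k * (z - x_est)   # update toward the measurement
    p_new = (1.0 - k) * p_pred        # reduced posterior uncertainty
    return x_new, p_new

# Hypothetical stream of normalized energy readings near 1.0
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0, 1.05]:
    x, p = kalman_update(x, p, z)
```

After a few measurements the estimate converges toward the measured level and its variance shrinks, which is exactly the filtering-out-noise behavior the abstract describes.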

  19. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Full Text Available Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  20. A modal approach to modeling spatially distributed vibration energy dissipation.

    Energy Technology Data Exchange (ETDEWEB)

    Segalman, Daniel Joseph

    2010-08-01

    The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.

  1. Validation of models with constant bias: an applied approach

    Directory of Open Access Journals (Sweden)

    Salvador Medina-Peralta

    2014-06-01

    Full Text Available Objective. This paper presents extensions to the statistical validation method based on the procedure of Freese for cases in which a model shows constant bias (CB) in its predictions, and illustrates the method with data from a new mechanistic model that predicts weight gain in cattle. Materials and methods. The extensions were the hypothesis tests and maximum anticipated error for the alternative approach, and the confidence interval for a quantile of the distribution of errors. Results. The model evaluated showed CB; once the CB is removed, and with a confidence level of 95%, the magnitude of the error does not exceed 0.575 kg. Therefore, the validated model can be used to predict the daily weight gain of cattle, although it will require an adjustment in its structure, given the presence of CB, to increase the accuracy of its forecasts. Conclusions. The confidence interval for the 1-α quantile of the distribution of errors after correcting the constant bias allows determining the upper limit for the magnitude of the prediction error and using it to evaluate the evolution of the model in forecasting the system. The confidence interval approach to validating a model is more informative than the hypothesis tests for the same purpose.
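The constant-bias correction step can be sketched directly: estimate the bias as the mean prediction error, subtract it, and then bound the magnitude of the corrected errors with an empirical quantile. The data values below are invented, and the simple empirical quantile stands in for the paper's confidence-interval construction.

```python
import statistics

observed  = [10.5, 11.4, 12.6, 13.5]   # hypothetical measured weight gains
predicted = [10.0, 11.0, 12.0, 13.0]   # model forecasts with a constant bias

def bias_corrected_errors(obs, pred):
    # Constant bias = mean error; subtract it from every error.
    errors = [o - p for o, p in zip(obs, pred)]
    bias = statistics.fmean(errors)
    return bias, [e - bias for e in errors]

def error_limit(errors, q=0.95):
    # Simple empirical upper quantile of |error| after bias removal
    # (a stand-in for the paper's confidence-interval bound).
    s = sorted(abs(e) for e in errors)
    return s[min(int(q * len(s)), len(s) - 1)]

bias, corrected = bias_corrected_errors(observed, predicted)
limit = error_limit(corrected)
```

Here the estimated bias is 0.5 and, after removing it, the error magnitude is bounded by 0.1, mirroring the "magnitude of the error does not exceed 0.575 kg" statement in the abstract.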

  2. A database approach to information retrieval: The remarkable relationship between language models and region models

    CERN Document Server

    Hiemstra, Djoerd

    2010-01-01

    In this report, we unify two quite distinct approaches to information retrieval: region models and language models. Region models were developed for structured document retrieval. They provide a well-defined behaviour as well as a simple query language that allows application developers to rapidly develop applications. Language models are particularly useful to reason about the ranking of search results, and for developing new ranking approaches. The unified model allows application developers to define complex language modeling approaches as logical queries on a textual database. We show a remarkable one-to-one relationship between region queries and the language models they represent for a wide variety of applications: simple ad-hoc search, cross-language retrieval, video retrieval, and web search.
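A minimal example of the language-modeling side of this unification is query-likelihood ranking with Jelinek-Mercer smoothing, where each document model is mixed with the collection model. The documents, query and smoothing weight below are invented for illustration.

```python
import math
from collections import Counter

def query_likelihood(query, doc, collection, lam=0.5):
    # Jelinek-Mercer smoothing: mix the document language model with
    # the collection model so unseen terms keep nonzero probability.
    doc_tf, col_tf = Counter(doc), Counter(collection)
    score = 0.0
    for term in query:
        p_doc = doc_tf[term] / len(doc)
        p_col = col_tf[term] / len(collection)
        score += math.log(lam * p_doc + (1.0 - lam) * p_col)
    return score

d1 = "language models rank search results".split()
d2 = "region models structured document retrieval".split()
collection = d1 + d2
query = ["language", "models"]
```

In the unified framework, a score like this would be expressed as a region query over a textual database rather than as ad hoc ranking code.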

  3. Approach to Organizational Structure Modelling in Construction Companies

    Directory of Open Access Journals (Sweden)

    Ilin Igor V.

    2016-01-01

    Full Text Available Effective management system is one of the key factors of business success nowadays. Construction companies usually have a portfolio of independent projects running at the same time. Thus it is reasonable to take into account project orientation of such kind of business while designing the construction companies’ management system, which main components are business process system and organizational structure. The paper describes the management structure designing approach, based on the project-oriented nature of the construction projects, and propose a model of the organizational structure for the construction company. Application of the proposed approach will enable to assign responsibilities within the organizational structure in construction projects effectively and thus to shorten the time for projects allocation and to provide its smoother running. The practical case of using the approach also provided in the paper.

  4. An integrated modelling approach to estimate urban traffic emissions

    Science.gov (United States)

    Misra, Aarshabh; Roorda, Matthew J.; MacLean, Heather L.

    2013-07-01

    An integrated modelling approach is adopted to estimate microscale urban traffic emissions. The modelling framework consists of a traffic microsimulation model developed in PARAMICS, a microscopic emissions model (Comprehensive Modal Emissions Model), and two dispersion models, AERMOD and the Quick Urban and Industrial Complex (QUIC). This framework is applied to a traffic network in downtown Toronto, Canada to evaluate summertime morning-peak traffic emissions of carbon monoxide (CO) and nitrogen oxides (NOx) during five weekdays at a traffic intersection. The model-predicted results are validated against sensor observations, with 100% of the AERMOD modelled CO concentrations and 97.5% of the QUIC modelled NOx concentrations within a factor of two of the corresponding observed concentrations. Availability of local estimates of ambient concentration is useful for accurate comparisons of predicted concentrations with observed concentrations. Predicted and sensor measured concentrations are significantly lower than the hourly threshold Maximum Acceptable Levels for CO (31 ppm, ˜90 times lower) and NO2 (0.4 mg/m3, ˜12 times lower), within the National Ambient Air Quality Objectives established by Environment Canada.
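The "within a factor of two" validation criterion used above is the standard FAC2 metric for dispersion models, and can be sketched in a few lines (the sample values below are invented):

```python
def fac2(predicted, observed):
    # Fraction of predictions within a factor of two of observations,
    # i.e. 0.5 <= predicted/observed <= 2.0 (observed must be > 0).
    pairs = [(p, o) for p, o in zip(predicted, observed) if o > 0]
    hits = sum(1 for p, o in pairs if 0.5 <= p / o <= 2.0)
    return hits / len(pairs)
```

A FAC2 of 1.0 corresponds to the "100% of the AERMOD modelled CO concentrations" result reported in the abstract.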

  5. A Novel Approach to Implement Takagi-Sugeno Fuzzy Models.

    Science.gov (United States)

    Chang, Chia-Wen; Tao, Chin-Wang

    2017-09-01

    This paper proposes new algorithms based on the fuzzy c-regression model algorithm for Takagi-Sugeno (T-S) fuzzy modeling of complex nonlinear systems. The fuzzy c-regression state model (FCRSM) algorithm yields a T-S fuzzy model in which a functional antecedent and a state-space-model-type consequent are considered with the available input-output data. The antecedent and consequent forms of the proposed FCRSM offer two main advantages: first, the FCRSM has a low computational load because only one input variable is considered in the antecedent part; second, the unknown system can be modeled not only in polynomial form but also in state-space form. Moreover, the FCRSM can be extended to the FCRSM-ND and FCRSM-Free algorithms. The FCRSM-ND algorithm is presented to find the T-S fuzzy state-space model of a nonlinear system when the input-output data cannot be precollected and an assumed effective controller is available. In practical applications, the mathematical model of the controller may be hard to obtain. In this case, an online tuning algorithm, FCRSM-Free, is designed such that the parameters of a T-S fuzzy controller and the T-S fuzzy state model of an unknown system can be tuned online simultaneously. Four numerical simulations are given to demonstrate the effectiveness of the proposed approach.
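Independently of the FCRSM identification procedure, the basic T-S evaluation step (fire each local linear consequent, weight by antecedent membership, average) can be sketched as follows; the Gaussian membership form, rule centers and local model coefficients are illustrative choices.

```python
import math

def ts_fuzzy_eval(x, rules):
    # Single-input Takagi-Sugeno inference:
    # rules = [(center, width, a, b), ...]; each rule fires a local
    # linear model a*x + b, weighted by a Gaussian membership.
    weights = [math.exp(-((x - c) / w) ** 2) for c, w, _, _ in rules]
    outputs = [a * x + b for _, _, a, b in rules]
    total = sum(weights)
    return sum(wt * y for wt, y in zip(weights, outputs)) / total

# Two illustrative rules with local models y = x and y = -x + 2
rules = [(-1.0, 1.0, 1.0, 0.0), (1.0, 1.0, -1.0, 2.0)]
```

Near a rule center the output follows that rule's local linear model; in between, the model blends the two, which is what lets a small rule base approximate a nonlinear map.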

  6. Diagnosing Hybrid Systems: a Bayesian Model Selection Approach

    Science.gov (United States)

    McIlraith, Sheila A.

    2005-01-01

    In this paper we examine the problem of monitoring and diagnosing noisy complex dynamical systems that are modeled as hybrid systems - models of continuous behavior interleaved with discrete transitions. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial or full failure of component devices. Building on our previous work in this area (MBCG99; MBCG00), our specific focus in this paper is on the mathematical formulation of the hybrid monitoring and diagnosis task as a Bayesian model tracking algorithm. The nonlinear dynamics of many hybrid systems present challenges to probabilistic tracking. Further, probabilistic tracking of a system for the purposes of diagnosis is problematic because the models of the system corresponding to failure modes are numerous and generally very unlikely. To focus tracking on these unlikely models and to reduce the number of potential models under consideration, we exploit logic-based techniques for qualitative model-based diagnosis to conjecture a limited initial set of consistent candidate models. In this paper we discuss alternative tracking techniques that are relevant to different classes of hybrid systems, focusing specifically on a method for tracking multiple models of nonlinear behavior simultaneously using factored sampling and conditional density propagation. To illustrate and motivate the approach described in this paper we examine the problem of monitoring and diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
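The factored-sampling / conditional-density-propagation machinery mentioned above is essentially a bootstrap particle filter. The scalar random-walk sketch below (all noise levels, particle counts and the constant true state are invented) shows one diffuse-weight-resample cycle:

```python
import math
import random

def particle_filter_step(particles, z, rng, sigma_proc=0.2, sigma_meas=0.5):
    # One bootstrap-filter cycle: diffuse particles through the process
    # model, weight by the measurement likelihood, then resample.
    pred = [p + rng.gauss(0.0, sigma_proc) for p in particles]
    w = [math.exp(-0.5 * ((z - p) / sigma_meas) ** 2) for p in pred]
    total = sum(w)
    cum, c = [], 0.0
    for wi in w:
        c += wi / total
        cum.append(c)
    out = []
    for _ in pred:                      # multinomial resampling
        u = rng.random()
        idx = next((i for i, cv in enumerate(cum) if u <= cv), len(pred) - 1)
        out.append(pred[idx])
    return out

rng = random.Random(4)
particles = [rng.uniform(-5.0, 5.0) for _ in range(200)]
for _ in range(30):                     # repeatedly observe a true state of 2.0
    particles = particle_filter_step(particles, 2.0, rng)
estimate = sum(particles) / len(particles)
```

Tracking several candidate failure-mode models would run one such particle set per model, with the logic-based diagnosis step pruning which models get particles at all.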

  7. A Bayesian Approach for Structural Learning with Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cen Li

    2002-01-01

    Full Text Available Hidden Markov Models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, when one considers other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism be available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
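The K-means initialization idea can be sketched in one dimension: cluster the observations and use the cluster centers as initial emission means, one per hidden state. This is a generic K-means sketch under that assumption, not the paper's exact procedure; the data are invented.

```python
import random
import statistics

def kmeans_1d(data, k, n_iter=20, seed=3):
    # Plain 1-D k-means: assign each point to its nearest center,
    # then move each center to the mean of its cluster.
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        centers = [statistics.fmean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two well-separated groups standing in for two hidden-state outputs
centers = kmeans_1d([0.0, 0.1, 0.2, 4.9, 5.0, 5.1], 2)
```

The recovered centers (near 0.1 and 5.0) would seed the emission means before running Baum-Welch parameter estimation.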

  8. Systematic approach to verification and validation: High explosive burn models

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Laboratory; Scovel, Christina A. [Los Alamos National Laboratory

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implemented their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code.
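The header-to-input-file automation can be sketched generically. The '#'-comment, key = value header convention and the flat input-deck format below are assumed for illustration; the actual HED format is not specified in the abstract.

```python
def parse_header(lines):
    # Pull 'key = value' metadata out of commented header lines
    # (the '#' convention here is an assumed, illustrative format).
    meta = {}
    for line in lines:
        if line.startswith("#") and "=" in line:
            key, _, value = line[1:].partition("=")
            meta[key.strip()] = value.strip()
    return meta

def make_input_deck(meta):
    # Emit a minimal, hypothetical hydro-code input deck from the
    # header metadata, one 'key value' line per parameter.
    return "\n".join(f"{k} {v}" for k, v in sorted(meta.items()))

header = ["# explosive = PBX-9502", "# density = 1.890", "0.0  1.2"]
meta = parse_header(header)
deck = make_input_deck(meta)
```

With every data file carrying its experimental parameters as machine-readable metadata, the same script can drive simulation setup across an entire class of experiments.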

  9. A Nonhydrostatic Model Based On A New Approach

    Science.gov (United States)

    Janjic, Z. I.

    Considerable experience with nonhydrostatic models has been accumulated on the scales of convective clouds and storms. However, numerical weather prediction (NWP) deals with motions on a much wider range of temporal and spatial scales. Thus, difficulties that may not be significant on the small scales, may become important in NWP applications. Having in mind these considerations, a new approach has been proposed and applied in developing nonhydrostatic models intended for NWP applications. Namely, instead of extending the cloud models to synoptic scales, the hydrostatic approximation is relaxed in a hydrostatic NWP model. In this way the model validity is extended to nonhydrostatic motions, and at the same time favorable features of the hydrostatic formulation are preserved. In order to apply this approach, the system of nonhydrostatic equations is split into two parts: (a) the part that corresponds to the hydrostatic system, except for corrections due to vertical acceleration, and (b) the system of equations that allows computation of the corrections appearing in the first system. This procedure does not require any additional approximation. In the model, "isotropic" horizontal finite differencing is employed that conserves a number of basic and derived dynamical and quadratic quantities. The hybrid pressure-sigma vertical coordinate has been chosen as the primary option. The forward-backward scheme is used for horizontally propagating fast waves, and an implicit scheme is used for vertically propagating sound waves. The Adams-Bashforth scheme is applied for the advection of the basic dynamical variables and for the Coriolis terms. In real data runs, the nonhydrostatic dynamics does not require extra computational boundary conditions at the top. The philosophy of the physical package and possible future developments of physical parameterizations are also reviewed. A two-dimensional model based on the described approach successfully reproduced classical
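The forward-backward scheme for fast waves can be illustrated on the 1-D linearized gravity-wave (shallow-water) equations: height is advanced with the old velocity, then velocity with the freshly updated height. The grid, time step and initial bump below are illustrative of the scheme only, not of the NWP model.

```python
def forward_backward_step(h, u, dt=0.1, dx=1.0, g=9.81, H=1.0):
    # Forward-backward step for 1-D linearized gravity waves on a
    # periodic grid: h is updated with the old u (forward), then u
    # with the *new* h (backward), which stabilizes the fast waves.
    n = len(h)
    h_new = [h[i] - H * dt / (2 * dx) * (u[(i + 1) % n] - u[i - 1])
             for i in range(n)]
    u_new = [u[i] - g * dt / (2 * dx) * (h_new[(i + 1) % n] - h_new[i - 1])
             for i in range(n)]
    return h_new, u_new

h = [1.0 if i == 10 else 0.0 for i in range(20)]  # initial height bump
u = [0.0] * 20
mass0 = sum(h)
for _ in range(100):
    h, u = forward_backward_step(h, u)
```

The centered differences on a periodic domain conserve total height (mass) exactly up to round-off, a discrete analogue of the conservation properties the abstract attributes to the horizontal differencing.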

  10. Infiltration under snow cover: Modeling approaches and predictive uncertainty

    Science.gov (United States)

    Meeks, Jessica; Moeck, Christian; Brunner, Philip; Hunkeler, Daniel

    2017-03-01

    Groundwater recharge from snowmelt represents a temporal redistribution of precipitation. This is extremely important because the rate and timing of snowpack drainage has substantial consequences for aquifer recharge patterns, which in turn affect groundwater availability throughout the rest of the year. The modeling methods developed to estimate drainage from a snowpack, which typically rely on temporally-dense point-measurements or temporally-limited spatially-dispersed calibration data, range in complexity from the simple degree-day method to more complex and physically-based energy balance approaches. While this gamut of snowmelt models is routinely used to aid in water resource management, a comparison of snowmelt models' predictive uncertainties had not previously been done. Therefore, we established a snowmelt model calibration dataset that is both temporally dense and represents the integrated snowmelt infiltration signal for the Vers Chez le Brandt research catchment, which functions as a rather unique natural lysimeter. We then evaluated the uncertainty associated with the degree-day, a modified degree-day and energy balance snowmelt model predictions using the null-space Monte Carlo approach. All three melt models underestimate total snowpack drainage, underestimate the rate of early and midwinter drainage and overestimate spring snowmelt rates. The actual rate of snowpack water loss is more constant over the course of the entire winter season than the snowmelt models would imply, indicating that mid-winter melt can contribute as significantly as springtime snowmelt to groundwater recharge in low alpine settings. Further, actual groundwater recharge could be between 2 and 31% greater than snowmelt models suggest, over the total winter season. This study shows that snowmelt model predictions can have considerable uncertainty, which may be reduced by the inclusion of more data that allows for the use of more complex approaches such as the energy balance
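The simplest model in the gamut above, the degree-day method, computes daily melt as a degree-day factor times the positive excess of air temperature over a base temperature. The factor and base temperature below are illustrative values, not the catchment's calibrated parameters.

```python
def degree_day_melt(temps, ddf=3.0, t_base=0.0):
    # Degree-day snowmelt: melt (mm/day) = DDF * max(T - T_base, 0).
    # ddf is in mm per degree-day; both parameter values here are
    # illustrative, not calibrated.
    return [ddf * max(t - t_base, 0.0) for t in temps]
```

Its single-parameter simplicity is also the source of the mid-winter bias the study reports: any day below the base temperature produces exactly zero melt, regardless of radiation or ground heat flux.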

  11. Social model: a new approach of the disability theme.

    Science.gov (United States)

    Bampi, Luciana Neves da Silva; Guilhem, Dirce; Alves, Elioenai Dornelles

    2010-01-01

    The experience of disability is part of the daily lives of people who have a disease, lesion or corporal limitation. Disability is still understood as personal bad luck; moreover, from the social and political points of view, the disabled are seen as a minority. The aim of this study is to contribute to the knowledge about the experience of disability. The research presents a new approach to the theme: the social model. This approach appeared as an alternative to the medical model of disability, which sees the lesion as the primary cause of social inequality and of the disadvantages experienced by the disabled, ignoring the role of social structures in their oppression and marginalization. The study permits reflection on how the difficulties and barriers society imposes on people considered different make disability a reality, and portrays the social injustice and vulnerability experienced by excluded groups.

  12. Lattice percolation approach to 3D modeling of tissue aging

    Science.gov (United States)

    Gorshkov, Vyacheslav; Privman, Vladimir; Libert, Sergiy

    2016-11-01

    We describe a 3D percolation-type approach to modeling of the processes of aging and certain other properties of tissues analyzed as systems consisting of interacting cells. Lattice sites are designated as regular (healthy) cells, senescent cells, or vacancies left by dead (apoptotic) cells. The system is then studied dynamically with the ongoing processes including regular cell dividing to fill vacant sites, healthy cells becoming senescent or dying, and senescent cells dying. Statistical-mechanics description can provide patterns of time dependence and snapshots of morphological system properties. The developed theoretical modeling approach is found not only to corroborate recent experimental findings that inhibition of senescence can lead to extended lifespan, but also to confirm that, unlike 2D, in 3D senescent cells can contribute to tissue's connectivity/mechanical stability. The latter effect occurs by senescent cells forming the second infinite cluster in the regime when the regular (healthy) cell's infinite cluster still exists.
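A minimal version of such a lattice dynamics can be sketched as a stochastic sweep over an n³ periodic lattice with three site states. The transition probabilities and lattice size below are invented for illustration; the published model's rules and statistics are richer.

```python
import random

HEALTHY, SENESCENT, VACANT = 0, 1, 2

def sweep(grid, n, rng, p_senesce=0.01, p_die=0.02):
    # One update sweep (illustrative rates): a healthy cell may become
    # senescent or divide into a vacant face neighbor; a senescent
    # cell may die, leaving a vacancy.
    new = dict(grid)
    for (x, y, z), state in grid.items():
        if state == HEALTHY:
            if rng.random() < p_senesce:
                new[(x, y, z)] = SENESCENT
            else:
                nbrs = [((x + dx) % n, (y + dy) % n, (z + dz) % n)
                        for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
                vacant = [p for p in nbrs if new[p] == VACANT]
                if vacant:
                    new[rng.choice(vacant)] = HEALTHY
        elif state == SENESCENT and rng.random() < p_die:
            new[(x, y, z)] = VACANT
    return new

n = 6
rng = random.Random(11)
grid = {(x, y, z): rng.choice([HEALTHY, VACANT])
        for x in range(n) for y in range(n) for z in range(n)}
for _ in range(10):
    grid = sweep(grid, n, rng)
```

Percolation-type questions, such as whether senescent cells form a spanning cluster in 3D, are then asked of snapshots of such a grid.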

  13. Research on teacher education programs: logic model approach.

    Science.gov (United States)

    Newton, Xiaoxia A; Poon, Rebecca C; Nunes, Nicole L; Stone, Elisa M

    2013-02-01

    Teacher education programs in the United States face increasing pressure to demonstrate their effectiveness through pupils' learning gains in classrooms where program graduates teach. The link between teacher candidates' learning in teacher education programs and pupils' learning in K-12 classrooms implicit in the policy discourse suggests a one-to-one correspondence. However, the logical steps leading from what teacher candidates have learned in their programs to what they are doing in classrooms that may contribute to their pupils' learning are anything but straightforward. In this paper, we argue that the logic model approach from scholarship on evaluation can enhance research on teacher education by making explicit the logical links between program processes and intended outcomes. We demonstrate the usefulness of the logic model approach through our own work on designing a longitudinal study that focuses on examining the process and impact of an undergraduate mathematics and science teacher education program.

  14. A Variational Approach to the Modeling of MIMO Systems

    Directory of Open Access Journals (Sweden)

    Jraifi A

    2007-01-01

    Full Text Available Motivated by the study of the optimization of the quality of service for multiple input multiple output (MIMO) systems in 3G (third generation), we develop a method for modeling the MIMO channel. This method, which uses a statistical approach, is based on a variational form of the usual channel equation, formulated in terms of a scalar variable. The minimum distance between received vectors is used as the random variable to model the MIMO channel. This variable is of crucial importance for the performance of the transmission system, as it captures the degree of interference between neighboring vectors. We then use this approach to numerically compute the total probability of error with respect to the signal-to-noise ratio (SNR) and to predict the number of antennas. By fixing the SNR to a specific value, we extract information on the optimal number of MIMO antennas.
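The random variable at the heart of this approach, the minimum distance between received vectors, is straightforward to compute for a finite set of vectors; the sample points below are invented.

```python
import math

def min_distance(vectors):
    # Minimum Euclidean distance between distinct received vectors;
    # small values indicate strong interference between neighbors.
    best = math.inf
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            best = min(best, math.dist(vectors[i], vectors[j]))
    return best
```

In the paper's setting this quantity is treated statistically across channel realizations, since its distribution governs the error probability at a given SNR.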

  15. A relaxation-based approach to damage modeling

    Science.gov (United States)

    Junker, Philipp; Schwarz, Stephan; Makowski, Jerzy; Hackl, Klaus

    2017-01-01

    Material models, including softening effects due to, for example, damage and localizations, share the problem of ill-posed boundary value problems that yield mesh-dependent finite element results. It is thus necessary to apply regularization techniques that couple local behavior described, for example, by internal variables, at a spatial level. This can take account of the gradient of the internal variable to yield mesh-independent finite element results. In this paper, we present a new approach to damage modeling that does not use common field functions, inclusion of gradients or complex integration techniques: Appropriate modifications of the relaxed (condensed) energy hold the same advantage as other methods, but with much less numerical effort. We start with the theoretical derivation and then discuss the numerical treatment. Finally, we present finite element results that prove empirically how the new approach works.

  16. Coordination-theoretic approach to modelling grid service composition process

    Institute of Scientific and Technical Information of China (English)

    Meng Qian; Zhong Liu; Jing Wang; Li Yao; Weiming Zhang

    2010-01-01

    A grid service composition process is made up of complex coordinative activities. Developing an appropriate model of grid service coordinative activities is an important foundation for grid service composition. According to coordination theory, this paper elaborates the process of grid service composition using UML 2.0, and proposes an approach to modelling the grid service composition process based on coordination theory. This approach helps not only to analyze accurately the task activities and the relevant dependencies among them, but also to facilitate the adaptability of the grid service orchestration, to further realize the connectivity, timeliness, appropriateness and expansibility of the grid service composition.

  17. Innovation Networks New Approaches in Modelling and Analyzing

    CERN Document Server

    Pyka, Andreas

    2009-01-01

    The science of graphs and networks has become by now a well-established tool for modelling and analyzing a variety of systems with a large number of interacting components. Starting from the physical sciences, applications have spread rapidly to the natural and social sciences, as well as to economics, and are now further extended, in this volume, to the concept of innovations, viewed broadly. In an abstract, systems-theoretical approach, innovation can be understood as a critical event which destabilizes the current state of the system, and results in a new process of self-organization leading to a new stable state. The contributions to this anthology address different aspects of the relationship between innovation and networks. The various chapters incorporate approaches in evolutionary economics, agent-based modeling, social network analysis and econophysics and explore the epistemic tension between insights into economics and society-related processes, and the insights into new forms of complex dynamics.

  18. Understanding complex urban systems multidisciplinary approaches to modeling

    CERN Document Server

    Gurr, Jens; Schmidt, J

    2014-01-01

    Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, or in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volum...

  19. A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents a decision support system for assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertainties...

  20. CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach

    Directory of Open Access Journals (Sweden)

    S. Mimouni

    2011-01-01

    The present work is focused on the condensation heat transfer that plays a dominant role in many accident scenarios postulated to occur in the containment of nuclear reactors. The study compares a general multiphase approach implemented in NEPTUNE_CFD with a homogeneous model, of widespread use for engineering studies, implemented in Code_Saturne. The model implemented in NEPTUNE_CFD assumes that liquid droplets form along the wall within nucleation sites. Vapor condensation on droplets makes them grow. Once the droplet diameter reaches a critical value, gravitational forces overcome the surface tension force and the droplets slide over the wall and form a liquid film. This approach makes it possible to account simultaneously for the mechanical drift between the droplets and the gas, the heat and mass transfer on droplets in the core of the flow, and the condensation/evaporation phenomena on the walls. As concerns the homogeneous approach, the motion of the liquid film due to gravitational forces is neglected, as is the volume occupied by the liquid. Both condensation models and compressible procedures are validated and compared to experimental data provided by the TOSQAN ISP47 experiment (IRSN Saclay). Computational results compare favorably with the experimental data, particularly for the helium and steam volume fractions.

  1. An interdisciplinary approach for earthquake modelling and forecasting

    Science.gov (United States)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious natural disasters and may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) becomes extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, the precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate the catalog-based forecast and non-catalog-based forecast. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
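
    A minimal sketch of the combined conditional intensity described above, assuming exponential decay kernels for both the self-exciting and the external excitation terms; the kernels and all parameter values are illustrative stand-ins, not the model fitted in the paper:

```python
import math

def conditional_intensity(t, background, quake_times, external_times,
                          alpha=0.8, beta=1.2, gamma=0.5, delta=1.0):
    # Self-exciting term: contributions of past seismic events,
    # decaying exponentially with elapsed time.
    self_exciting = sum(alpha * math.exp(-beta * (t - ti))
                        for ti in quake_times if ti < t)
    # External excitation term: contributions of past non-seismic
    # observations (precursory signals), also exponentially decaying.
    external = sum(gamma * math.exp(-delta * (t - tj))
                   for tj in external_times if tj < t)
    # lambda(t) = background rate + self-exciting + external excitation
    return background + self_exciting + external

rate = conditional_intensity(10.0, background=0.1,
                             quake_times=[2.0, 7.5], external_times=[9.0])
```

    With no past events, the rate reduces to the background term; recent events of either kind raise it.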

  2. A NEW APPROACH OF DIGITAL BRIDGE SURFACE MODEL GENERATION

    OpenAIRE

    Ju, H.

    2012-01-01

    Bridge areas present difficulties for orthophoto generation, and to avoid “collapsed” bridges in the orthoimage, operator assistance is required to create a precise DBM (Digital Bridge Model), which is subsequently used for orthoimage generation. In this paper, a new approach to DBM generation, based on fusing LiDAR (Light Detection And Ranging) data and aerial imagery, is proposed. No precise exterior orientation of the aerial image is required for the DBM generation. First, a co...

  3. A Conditional Approach to Panel Data Models with Common Shocks

    Directory of Open Access Journals (Sweden)

    Giovanni Forchini

    2016-01-01

    This paper studies the effects of common shocks on the OLS estimators of the slope parameters in linear panel data models. The shocks are assumed to affect both the errors and some of the explanatory variables. In contrast to existing approaches, which rely on results for martingale difference sequences, our method relies on conditional strong laws of large numbers and conditional central limit theorems for conditionally-heterogeneous random variables.

  4. Modeling software with finite state machines a practical approach

    CERN Document Server

    Wagner, Ferdinand; Wagner, Thomas; Wolstenholme, Peter

    2006-01-01

    Modeling Software with Finite State Machines: A Practical Approach explains how to apply finite state machines to software development. It provides a critical analysis of using finite state machines as a foundation for executable specifications to reduce software development effort and improve quality. This book discusses the design of a state machine and of a system of state machines. It also presents a detailed analysis of development issues relating to behavior modeling with design examples and design rules for using finite state machines. This volume describes a coherent and well-tested fr

  5. Ionization coefficient approach to modeling breakdown in nonuniform geometries.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Nicolaysen, Scott D.

    2003-11-01

    This report summarizes the work on breakdown modeling in nonuniform geometries by the ionization coefficient approach. Included are: (1) fits to primary and secondary ionization coefficients used in the modeling; (2) analytical test cases for sphere-to-sphere, wire-to-wire, corner, coaxial, and rod-to-plane geometries; (3) a compilation of experimental data with source references; and (4) comparisons between code results, test case results, and experimental data. A simple criterion is proposed to differentiate between corona and spark. The effect of a dielectric surface on avalanche growth is examined by means of Monte Carlo simulations. The presence of a clean dry surface does not appear to enhance growth.

  6. A Data Mining Approach to Modelling of Water Supply Assets

    DEFF Research Database (Denmark)

    Babovic, V.; Drecourt, J.; Keijzer, M.

    2002-01-01

    supply assets are mainly situated underground, and therefore not visible and under the influence of various highly unpredictable forces. This paper proposes the use of advanced data mining methods in order to determine the risks of pipe bursts. For example, analysis of the database of already occurred...... with the choice of pipes to be replaced, the outlined approach opens completely new avenues in asset modelling. The condition of an asset such as a water supply network deteriorates with age. With reliable risk models, addressing the evolution of risk with aging asset, it is now possible to plan optimal...

  7. AN APPROACH IN MODELING TWO-DIMENSIONAL PARTIALLY CAVITATING FLOW

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An approach to modeling viscous, unsteady, partially cavitating flows around lifting bodies is presented. By employing a one-fluid Navier-Stokes solver, the algorithm is shown to be able to handle two-dimensional laminar cavitating flows at moderate Reynolds number. Based on the equation of state of the water-vapor mixture, constitutive relations between density and pressure are established. To numerically simulate the cavity wall, different pseudo-transition density models are assumed. The finite-volume method is adopted, and the algorithm can be extended to three-dimensional cavitating flows.

  8. A transformation approach to modelling multi-modal diffusions

    DEFF Research Database (Denmark)

    Forman, Julie Lyng; Sørensen, Michael

    2014-01-01

    when the diffusion is observed with additional measurement error. The new approach is applied to molecular dynamics data in the form of a reaction coordinate of the small Trp-zipper protein, from which the folding and unfolding rates of the protein are estimated. Because the diffusion coefficient...... is state-dependent, the new models provide a better fit to this type of protein folding data than the previous models with a constant diffusion coefficient, particularly when the effect of errors with a short time-scale is taken into account....

  9. THE SIGNAL APPROACH TO MODELLING THE BALANCE OF PAYMENT CRISIS

    Directory of Open Access Journals (Sweden)

    O. Chernyak

    2016-12-01

    The paper presents a synthesis of theoretical models of balance of payments crises and investigates the most effective ways to model such crises in Ukraine. For the mathematical formalization of a balance of payments crisis, a comparative analysis of the effectiveness of different calculation methods for the Exchange Market Pressure Index was performed. A set of indicators that signal a growing likelihood of a balance of payments crisis was defined using the signal approach. With the help of a minimization function, threshold values of the indicators were selected, the crossing of which signals an increase in the probability of a balance of payments crisis.
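
    The Exchange Market Pressure Index and the signal-approach flagging described above can be sketched as follows. The weights, sign conventions, and threshold below are illustrative assumptions; the paper compares several EMP calculation methods rather than prescribing one:

```python
def emp_index(d_exchange, d_reserves, d_interest,
              w_e=1.0, w_r=1.0, w_i=1.0):
    # Pressure rises with currency depreciation, with reserve losses,
    # and with defensive interest-rate hikes (illustrative weighting).
    return w_e * d_exchange - w_r * d_reserves + w_i * d_interest

def crisis_signals(index_series, threshold):
    # Signal approach: an indicator "signals" at period t when it
    # crosses its threshold value.
    return [t for t, x in enumerate(index_series) if x > threshold]

series = [emp_index(*obs) for obs in
          [(0.01, 0.00, 0.00),    # calm period
           (0.02, -0.01, 0.00),   # mild pressure
           (0.15, -0.10, 0.05)]]  # crisis-like episode
flagged = crisis_signals(series, threshold=0.1)
```

    In the paper, the threshold itself is chosen by minimizing a noise-to-signal-type objective over historical data.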

  10. Laser modeling a numerical approach with algebra and calculus

    CERN Document Server

    Csele, Mark Steven

    2014-01-01

    Offering a fresh take on laser engineering, Laser Modeling: A Numerical Approach with Algebra and Calculus presents algebraic models and traditional calculus-based methods in tandem to make concepts easier to digest and apply in the real world. Each technique is introduced alongside a practical, solved example based on a commercial laser. Assuming some knowledge of the nature of light, emission of radiation, and basic atomic physics, the text:Explains how to formulate an accurate gain threshold equation as well as determine small-signal gainDiscusses gain saturation and introduces a novel pass

  11. Noether symmetry approach in f(R)-tachyon model

    Energy Technology Data Exchange (ETDEWEB)

    Jamil, Mubasher, E-mail: mjamil@camp.nust.edu.pk [Center for Advanced Mathematics and Physics (CAMP), National University of Sciences and Technology (NUST), H-12, Islamabad (Pakistan); Mahomed, F.M., E-mail: Fazal.Mahomed@wits.ac.za [Centre for Differential Equations, Continuum Mechanics and Applications, School of Computational and Applied Mathematics, University of the Witwatersrand, Wits 2050 (South Africa); Momeni, D., E-mail: d.momeni@yahoo.com [Department of Physics, Faculty of Sciences, Tarbiat Moa' llem University, Tehran (Iran, Islamic Republic of)

    2011-08-26

    In this Letter, by utilizing the Noether symmetry approach in cosmology, we attempt to find the tachyon potential via the application of this kind of symmetry to a flat Friedmann-Robertson-Walker (FRW) metric. We reduce the system of equations to simpler ones and obtain the general class of tachyon potential and f(R) functions. We find that the Noether symmetric model results in a power-law f(R) and an inverse fourth-power potential for the tachyonic field. Further, we investigate numerically the cosmological evolution of our model and show explicitly the behavior of the equation of state crossing the cosmological constant boundary.

  12. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
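
    As a rough sketch of the Markov Chain Monte Carlo update mentioned above, a Metropolis sampler can calibrate an uncertain coefficient against a cheap surrogate in place of the full ocean model. The polynomial surrogate, the observed value, and all tuning parameters below are invented for illustration:

```python
import math
import random

random.seed(1)

def surrogate(c):
    # Stand-in polynomial surrogate of the model response as a
    # function of an uncertain drag coefficient c.
    return 2.0 * c + 0.5 * c ** 2

def log_post(c, observed, sigma_obs):
    # Gaussian misfit between the observation and the surrogate
    # prediction (flat prior assumed for simplicity).
    return -0.5 * ((observed - surrogate(c)) / sigma_obs) ** 2

def metropolis(observed, n_steps=2000, sigma_obs=0.1, step=0.05):
    c, samples = 0.5, []
    for _ in range(n_steps):
        proposal = c + random.gauss(0.0, step)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < (log_post(proposal, observed, sigma_obs)
                                        - log_post(c, observed, sigma_obs)):
            c = proposal
        samples.append(c)
    return samples

samples = metropolis(observed=2.5)   # surrogate(1.0) == 2.5
posterior_mean = sum(samples[1000:]) / 1000
```

    Each posterior evaluation calls only the surrogate, which is the point of the approach: the expensive model is sampled once to build the surrogate, not at every MCMC step.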

  13. Modeling fabrication of nuclear components: An integrative approach

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.

    1996-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
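
    The first stage casts the layout problem as a quadratic assignment problem. A minimal sketch of the QAP objective follows, with tiny made-up flow and distance matrices standing in for material movement and exposure data, and brute-force enumeration standing in for the paper's evolutionary heuristic:

```python
import itertools

def qap_cost(assignment, flow, dist):
    # Quadratic-assignment objective: sum over all operation pairs of
    # (flow between operations) x (distance between their assigned
    # locations); here this stands in for exposure/movement cost.
    n = len(assignment)
    return sum(flow[i][j] * dist[assignment[i]][assignment[j]]
               for i in range(n) for j in range(n))

flow = [[0, 3, 1],
        [3, 0, 2],
        [1, 2, 0]]
dist = [[0, 1, 2],
        [1, 0, 1],
        [2, 1, 0]]

# Exhaustive search is feasible only for toy sizes; the paper uses
# an evolutionary heuristic for realistic layouts.
best = min(itertools.permutations(range(3)),
           key=lambda a: qap_cost(a, flow, dist))
best_cost = qap_cost(best, flow, dist)
```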

  14. Injury prevention risk communication: A mental models approach

    DEFF Research Database (Denmark)

    Austin, Laurel Cecelia; Fischhoff, Baruch

    2012-01-01

    Individuals' decisions and behaviour can play a critical role in determining both the probability and severity of injury. Behavioural decision research studies peoples' decision-making processes in terms comparable to scientific models of optimal choices, providing a basis for focusing...... interventions on the most critical opportunities to reduce risks. That research often seeks to identify the ‘mental models’ that underlie individuals' interpretations of their circumstances and the outcomes of possible actions. In the context of injury prevention, a mental models approach would ask why people...... and create an expert model of the risk situation, interviewing lay people to elicit their comparable mental models, and developing and evaluating communication interventions designed to close the gaps between lay people and experts. This paper reviews the theory and method behind this research stream...

  15. An Integrated Approach to Flexible Modelling and Animated Simulation

    Institute of Scientific and Technical Information of China (English)

    Li Shuliang; Wu Zhenye

    1994-01-01

    Based on the software support of SIMAN/CINEMA, this paper presents an integrated approach to flexible modelling and simulation with animation. The methodology provides a structured way of integrating mathematical and logical models, statistical experimentation, and statistical analysis with computer animation. Within this methodology, an animated simulation study is separated into six different activities: simulation objectives identification, system model development, simulation experiment specification, animation layout construction, real-time simulation and animation run, and output data analysis. These six activities are objectives-driven, relatively independent, and integrated through software organization and simulation files. The key ideas behind this methodology are objectives orientation, modelling flexibility, simulation and animation integration, and application tailorability. Though the methodology is closely related to SIMAN/CINEMA, it can be extended to other software environments.

  16. A reservoir simulation approach for modeling of naturally fractured reservoirs

    Directory of Open Access Journals (Sweden)

    H. Mohammadi

    2012-12-01

    In this investigation, the Warren and Root model proposed for the simulation of naturally fractured reservoirs was improved. A reservoir simulation approach was used to develop a 2D model of a synthetic oil reservoir. The main rock properties of each gridblock were defined for two different types of gridblocks, called matrix and fracture gridblocks. These two gridblock types differed in porosity and permeability values, which were higher for fracture gridblocks than for matrix gridblocks. The model was solved using the implicit finite difference method. Results showed an improvement in the Warren and Root model, especially in region 2 of the semilog plot of pressure drop versus time, which indicated a linear transition zone with no inflection point, as predicted by other investigators. The effects of fracture spacing, fracture permeability, fracture porosity, matrix permeability and matrix porosity on the behavior of a typical naturally fractured reservoir were also presented.

  17. Model-based approach for elevator performance estimation

    Science.gov (United States)

    Esteban, E.; Salgado, O.; Iturrospe, A.; Isasa, I.

    2016-02-01

    In this paper, a dynamic model for an elevator installation is presented in the state space domain. The model comprises both the mechanical and the electrical subsystems, including the electrical machine and a closed-loop field oriented control. The proposed model is employed for monitoring the condition of the elevator installation. The adopted model-based approach employs a Kalman filter as an observer. The Kalman observer estimates the elevator car acceleration, which determines the elevator ride quality, based solely on the machine control signature and the encoder signal. Five elevator key performance indicators are then calculated from the estimated car acceleration. The proposed procedure is experimentally evaluated by comparing the key performance indicators calculated from the estimated car acceleration with the values obtained from actual acceleration measurements in a test bench. Finally, the proposed procedure is compared with a sliding mode observer.
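
    The observer idea can be sketched with a scalar Kalman filter. This assumes a trivial random-walk state model with made-up noise levels, not the paper's full electromechanical state-space model:

```python
def kalman_observer(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    # Scalar Kalman filter: predict the state covariance, compute the
    # gain, then correct the estimate with the innovation (z - x).
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: covariance grows by q
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Feeding a constant signal: the estimate converges to it.
est = kalman_observer([1.0] * 50)
```

    In the paper the state vector is multidimensional and the "measurement" is the encoder signal plus the machine control signature, but the predict-gain-correct cycle is the same.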

  18. A Model Independent Approach to (p)Reheating

    CERN Document Server

    Özsoy, Ogan; Sinha, Kuver; Watson, Scott

    2015-01-01

    In this note we propose a model independent framework for inflationary (p)reheating. Our approach is analogous to the Effective Field Theory of Inflation; however, here the inflaton oscillations provide an additional source of (discrete) symmetry breaking. Using the Goldstone field that non-linearly realizes time diffeomorphism invariance, we construct a model independent action for both the inflaton and reheating sectors. Utilizing the hierarchy of scales present during the reheating process, we are able to recover known results in the literature in a simpler fashion, including the presence of oscillations in the primordial power spectrum. We also construct a class of models where the shift symmetry of the inflaton is preserved during reheating, which helps alleviate past criticisms of (p)reheating in models of Natural Inflation. Extensions of our framework suggest the possibility of analytically investigating non-linear effects (such as rescattering and back-reaction) during thermalization without resorting t...

  19. A model-based approach to human identification using ECG

    Science.gov (United States)

    Homer, Mark; Irvine, John M.; Wendelken, Suzanne

    2009-05-01

    Biometrics, such as fingerprint, iris scan, and face recognition, offer methods for identifying individuals based on a unique physiological measurement. Recent studies indicate that a person's electrocardiogram (ECG) may also provide a unique biometric signature. Current techniques for identification using ECG rely on empirical methods for extracting features from the ECG signal. This paper presents an alternative approach based on a time-domain model of the ECG trace. Because Auto-Regressive Integrated Moving Average (ARIMA) models form a rich class of descriptors for representing the structure of periodic time series data, they are well-suited to characterizing the ECG signal. We present a method for modeling the ECG, extracting features from the model representation, and identifying individuals using these features.
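
    As an illustration of model-derived features, a lag-1 autocorrelation (the Yule-Walker estimate of an AR(1) coefficient) can serve as a single stand-in feature for the richer ARIMA feature set used in the paper; the toy matcher and its tolerance are assumptions:

```python
import math

def ar1_coefficient(series):
    # Yule-Walker estimate of an AR(1) coefficient: the ratio of the
    # lag-1 autocovariance to the variance.
    n = len(series)
    mean = sum(series) / n
    c0 = sum((x - mean) ** 2 for x in series) / n
    c1 = sum((series[i] - mean) * (series[i + 1] - mean)
             for i in range(n - 1)) / n
    return c1 / c0

def identify(candidate, enrolled, tol=0.05):
    # Toy matcher: compare the candidate's model feature against
    # enrolled templates and return the names within tolerance.
    target = ar1_coefficient(candidate)
    return [name for name, sig in enrolled.items()
            if abs(ar1_coefficient(sig) - target) < tol]

smooth = [math.sin(0.1 * i) for i in range(200)]   # ECG-like stand-in
jagged = [(-1.0) ** i for i in range(200)]         # very different trace
match = identify(smooth, {"A": smooth, "B": jagged})
```

    A real ECG biometric would fit a full ARIMA model per trace and match on the vector of model parameters rather than a single coefficient.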

  20. Computer Modeling of Violent Intent: A Content Analysis Approach

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  1. A Model-based Prognostics Approach Applied to Pneumatic Valves

    Directory of Open Access Journals (Sweden)

    Matthew J. Daigle

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
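
    A single predict-weight-resample cycle of a bootstrap particle filter can be sketched for a hypothetical scalar degradation state; the upward-drift process model and the noise levels are invented stand-ins for the paper's physics-based valve model:

```python
import math
import random

random.seed(0)

def particle_filter_step(particles, z, drift=0.1,
                         process_noise=0.05, r=0.2):
    # Predict: propagate each particle through the process model
    # (a simple upward drift standing in for component degradation).
    particles = [x + drift + random.gauss(0.0, process_noise)
                 for x in particles]
    # Update: weight each particle by the Gaussian likelihood of
    # the measurement z given that particle's state.
    weights = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new population in proportion to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

particles = [0.0] * 200
for z in [0.1, 0.2, 0.3, 0.4, 0.5]:   # hypothetical wear measurements
    particles = particle_filter_step(particles, z)
estimate = sum(particles) / len(particles)
```

    In prognostics, the resampled population would then be propagated forward without measurements to predict the remaining life distribution.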

  2. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Science.gov (United States)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.

  3. Systems pharmacology modeling: an approach to improving drug safety.

    Science.gov (United States)

    Bai, Jane P F; Fontana, Robert J; Price, Nathan D; Sangar, Vineet

    2014-01-01

    Advances in systems biology in conjunction with the expansion in knowledge of drug effects and diseases present an unprecedented opportunity to extend traditional pharmacokinetic and pharmacodynamic modeling/analysis to conduct systems pharmacology modeling. Many drugs that cause liver injury and myopathies have been studied extensively. Mitochondrion-centric systems pharmacology modeling is important since drug toxicity across a large number of pharmacological classes converges to mitochondrial injury and death. Approaches to systems pharmacology modeling of drug effects need to consider drug exposure, organelle and cellular phenotypes across all key cell types of human organs, organ-specific clinical biomarkers/phenotypes, gene-drug interaction and immune responses. Systems modeling approaches, that leverage the knowledge base constructed from curating a selected list of drugs across a wide range of pharmacological classes, will provide a critically needed blueprint for making informed decisions to reduce the rate of attrition for drugs in development and increase the number of drugs with an acceptable benefit/risk ratio.

  4. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First, we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.

  5. THE MODEL OF EXTERNSHIP ORGANIZATION FOR FUTURE TEACHERS: QUALIMETRIC APPROACH

    Directory of Open Access Journals (Sweden)

    Taisiya A. Isaeva

    2015-01-01

    The aim of the paper is to present the author’s externship model for bachelor students – future teachers of vocational training. The model has been worked out from the standpoint of a qualimetric approach and provides pedagogical training. Methods. The work is based on an analysis of the literature on externship organization for students in higher education and includes SWOT-analysis techniques in pedagogical training. The method of group expert evaluation is the main method of pedagogical qualimetry. Structural components of the professional pedagogical competency of students – future teachers – are defined, which makes it possible to determine development levels and evaluation criteria for mastering the programme «Vocational training (branch-wise)». Results. The article interprets the concept of «pedagogical training» and states its basic organizational principles during students’ practice. The methods of expert group formation are presented: self-assessment and personal data. Scientific novelty. An externship organization model for future teachers is developed, based on pedagogical training, a qualimetric approach and SWOT-analysis techniques. The proposed criterion-assessment procedures make it possible to determine the development levels of professional and pedagogical competency. Practical significance. The model has been introduced into the pedagogical training of the educational process at Kalashnikov Izhevsk State Technical University and can be used in other similar educational establishments.

  6. Spatiotemporal infectious disease modeling: a BME-SIR approach.

    Science.gov (United States)

    Angulo, Jose; Yu, Hwa-Lung; Langousis, Andrea; Kolovos, Alexander; Wang, Jinfeng; Madrid, Ana Esther; Christakos, George

    2013-01-01

    This paper is concerned with the modeling of infectious disease spread in a composite space-time domain under conditions of uncertainty. We focus on stochastic modeling that accounts for basic mechanisms of disease distribution and multi-sourced in situ uncertainties. Starting from the general formulation of population migration dynamics and the specification of transmission and recovery rates, the model studies the functional formulation of the evolution of the fractions of susceptible-infected-recovered individuals. The suggested approach is capable of: a) modeling population dynamics within and across localities, b) integrating the disease representation (i.e. susceptible-infected-recovered individuals) with observation time series at different geographical locations and other sources of information (e.g. hard and soft data, empirical relationships, secondary information), and c) generating predictions of disease spread and associated parameters in real time, while considering model and observation uncertainties. Key aspects of the proposed approach are illustrated by means of simulations (i.e. synthetic studies), and a real-world application using hand-foot-mouth disease (HFMD) data from China.
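
    The deterministic susceptible-infected-recovered core of such a model can be sketched with forward-Euler steps on the population fractions. The parameter values are illustrative, and the BME machinery for integrating observation uncertainty is not shown:

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    # One forward-Euler step of the SIR fractions: new infections
    # move mass S -> I at rate beta*s*i, recoveries move I -> R
    # at rate gamma*i; the three fractions always sum to one.
    new_infections = beta * s * i * dt
    recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - recoveries,
            r + recoveries)

# Start with 1% infected; beta=0.5, gamma=0.1 gives R0 = 5, so the
# epidemic runs nearly to completion by t = 100 (1000 steps of 0.1).
s, i, r = 0.99, 0.01, 0.0
for _ in range(1000):
    s, i, r = sir_step(s, i, r, beta=0.5, gamma=0.1)
```

    The paper's model additionally lets the transmission and recovery rates vary across localities and updates the state from multi-sourced observations.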

  7. Approaching the other: Investigation of a descriptive belief revision model

    Directory of Open Access Journals (Sweden)

    Spyridon Stelios

    2016-12-01

    When an individual—a hearer—is confronted with an opinion expressed by another individual—a speaker—that differs from her own only in the degree of belief, how will she react? In trying to answer that question, this paper reintroduces and investigates a descriptive belief revision model designed to measure approaches. Parameters of the model are the hearer’s credibility account of the speaker, the initial difference between the hearer’s and speaker’s degrees of belief, and the hearer’s resistance to change. Within an interdisciplinary framework, two empirical studies were conducted. A comparison was carried out between empirically recorded revisions and revisions according to the model. Results showed that the theoretical model is highly confirmed. An interesting finding is the measurement of an “unexplainable behaviour” that is classified neither as repulsion nor as approach. At a second level of analysis, the model is compared to the Bayesian framework of inference. Structural differences were highlighted, along with evidence for the optimal descriptive adequacy of the former.

  8. Generalized linear models with coarsened covariates: a practical Bayesian approach.

    Science.gov (United States)

    Johnson, Timothy R; Wiest, Michelle M

    2014-06-01

    Coarsened covariates are a common and sometimes unavoidable phenomenon encountered in statistical modeling. Covariates are coarsened when their values or categories have been grouped. This may be done to protect privacy or to simplify data collection or analysis when researchers are not aware of their drawbacks. Analyses with coarsened covariates based on ad hoc methods can compromise the validity of inferences. One valid method for accounting for a coarsened covariate is to use a marginal likelihood derived by summing or integrating over the unknown realizations of the covariate. However, algorithms for estimation based on this approach can be tedious to program and can be computationally expensive. These are significant obstacles to their use in practice. To overcome these limitations, we show that when expressed as a Bayesian probability model, a generalized linear model with a coarsened covariate can be posed as a tractable missing data problem where the missing data are due to censoring. We also show that this model is amenable to widely available general-purpose software for simulation-based inference for Bayesian probability models, providing researchers a very practical approach for dealing with coarsened covariates.
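
The marginal-likelihood idea can be sketched for a toy logistic model with a bracketed covariate: when only the group (e.g. an age bracket) is observed, the likelihood is summed over the possible underlying values, weighted by their within-group probabilities. The bracket values, weights and coefficients below are illustrative assumptions, not the authors' model.

```python
import math

def logistic(eta):
    return 1.0 / (1.0 + math.exp(-eta))

def marginal_likelihood(y, bracket_values, bracket_probs, beta0, beta1):
    # P(y | bracket) = sum_x P(x | bracket) * P(y | x)
    total = 0.0
    for x, p in zip(bracket_values, bracket_probs):
        mu = logistic(beta0 + beta1 * x)
        total += p * (mu if y == 1 else 1.0 - mu)
    return total

values = list(range(40, 50))   # possible true ages in a 40-49 bracket
probs = [0.1] * 10             # assume a uniform distribution within it
lik = marginal_likelihood(1, values, probs, beta0=-3.0, beta1=0.05)
```

The marginal likelihood necessarily lies between the likelihoods at the bracket's endpoints, and summing over both outcomes recovers total probability one.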

  9. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    Full Text Available The article shows the relevance of object-oriented programming approaches to modeling pneumatic units (PU). Based on the analysis of the calculation schemes of pneumatic system aggregates, two basic objects were identified: a flow cavity and a material point. Basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy with mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions. The models and interactions of elements are implemented in object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base class. These classes implement models of the flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs a tact of integration, i.e. calculation of the method's coefficients. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional class list. An iterator loop initiates the integration tact of all the objects in the list. Every fourth iteration makes a transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the authors' method features easy enhancement, code reuse, and high reliability
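
The tact-based integration pattern described above can be sketched as follows: every element in a list computes one Runge-Kutta coefficient per tact, and every fourth tact the state advances one full step. The base class, the toy decay element and all names here are hypothetical stand-ins for the paper's cavity and material-point classes.

```python
class Element:
    """Base class: derived classes would model a cavity, piston, spring, etc."""
    def __init__(self, y0):
        self.y = y0          # state variable (e.g. pressure, position)
        self.k = [0.0] * 4   # RK4 coefficients k1..k4

    def rhs(self, y, t):
        raise NotImplementedError

    def tact(self, stage, t, dt):
        # One "tact": evaluate the stage derivative at the intermediate state.
        offsets = [0.0, 0.5 * dt, 0.5 * dt, dt]
        y_stage = self.y if stage == 0 else self.y + offsets[stage] * self.k[stage - 1]
        self.k[stage] = self.rhs(y_stage, t + offsets[stage])

    def advance(self, dt):
        # Classical fourth-order Runge-Kutta combination of the coefficients.
        k1, k2, k3, k4 = self.k
        self.y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

class Decay(Element):
    """Toy element: exponential decay y' = -a*y stands in for a cavity model."""
    def __init__(self, y0, a):
        super().__init__(y0)
        self.a = a
    def rhs(self, y, t):
        return -self.a * y

elements = [Decay(1.0, 0.5), Decay(2.0, 1.0)]   # the unidirectional object list
t, dt = 0.0, 0.01
for _ in range(1000):                 # integrate to t = 10
    for stage in range(4):            # four tacts per integration step
        for e in elements:
            e.tact(stage, t, dt)
    for e in elements:
        e.advance(dt)
    t += dt
```

Running all elements through the same tact keeps coupled elements synchronized at each Runge-Kutta stage, which is the point of the scheme.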

  10. Modeling drug- and chemical- induced hepatotoxicity with systems biology approaches

    Directory of Open Access Journals (Sweden)

    Sudin Bhattacharya

    2012-12-01

    Full Text Available We provide an overview of computational systems biology approaches as applied to the study of chemical- and drug-induced toxicity. The concept of ‘toxicity pathways’ is described in the context of the 2007 US National Academies of Science report, Toxicity Testing in the 21st Century: A Vision and A Strategy. Pathway mapping and modeling based on network biology concepts are a key component of the vision laid out in this report for a more biologically based analysis of dose-response behavior and the safety of chemicals and drugs. We focus on toxicity of the liver (hepatotoxicity), a complex phenotypic response with contributions from a number of different cell types and biological processes. We describe three case studies of complementary multi-scale computational modeling approaches to understand perturbation of toxicity pathways in the human liver as a result of exposure to environmental contaminants and specific drugs. One approach involves development of a spatial, multicellular virtual tissue model of the liver lobule that combines molecular circuits in individual hepatocytes with cell-cell interactions and blood-mediated transport of toxicants through hepatic sinusoids, to enable quantitative, mechanistic prediction of hepatic dose-response for activation of the AhR toxicity pathway. Simultaneously, methods are being developed to extract quantitative maps of intracellular signaling and transcriptional regulatory networks perturbed by environmental contaminants, using a combination of gene expression and genome-wide protein-DNA interaction data. A predictive physiological model (DILIsym™) to understand drug-induced liver injury (DILI), the most common adverse event leading to termination of clinical development programs and regulatory actions on drugs, is also described. The model initially focuses on reactive metabolite-induced DILI in response to administration of acetaminophen, and spans multiple biological scales.

  11. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    Energy Technology Data Exchange (ETDEWEB)

    Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-11

    This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and the implementation of adaptation strategies to extreme weather. In the literature there are mainly two approaches to modeling hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm's key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a purely stochastic approach.

  12. ABOUT COMPLEX APPROACH TO MODELLING OF TECHNOLOGICAL MACHINES FUNCTIONING

    Directory of Open Access Journals (Sweden)

    A. A. Honcharov

    2015-01-01

    Full Text Available Problems arise in the process of designing, producing and investigating a complicated technological machine. These problems concern not only the properties of certain types of equipment but also the regularities of the functioning of the control object as a whole. A technological machine is thought of as a technological complex in which it is possible to distinguish a control system (or controlling device) and a controlled object. The paper analyzes a number of existing approaches to the construction of models for controlling devices and their functioning. A complex model for technological machine operation has been proposed in the paper; in other words, it covers the functioning of both the controlling device and the controlled object of the technological machine. In this case, models of the controlling device and the controlled object of the technological machine can be represented as an aggregate combination of the elements of these models. The paper describes a conception for the realization of a complex model of a technological machine as a model of the interaction of units (elements) in the controlling device and the controlled object. When a control activation is given to the controlling device of the technological machine, its modelling is executed at an algorithmic or logic level, and the obtained output signals are interpreted as events, information about which is transferred to executive mechanisms. The proposed scheme of aggregate integration considers element models as object classes, and the integration scheme is presented as a combination of object property values (a combination of a great many input and output contacts) and a combination of object interactions (in the form of an integration operator). Spawning descendants of parent objects in the technological machine model and creating their copies in various parts of a project is one of the most important means of distributed technological machine modelling, which makes it possible to develop complicated models of

  13. Quantitative versus qualitative modeling: a complementary approach in ecosystem study.

    Science.gov (United States)

    Bondavalli, C; Favilla, S; Bodini, A

    2009-02-01

    Natural disturbance or human perturbation acts upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture the types of interactions and their magnitude. Quantitative models are more precise and specific about a system, but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interactions. Again, to avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a way that can be used in qualitative analysis is described in this paper, and takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative position and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to the qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields predictions that are by no means ambiguous, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary and we must be careful in its application.

  14. A multi-model approach to X-ray pulsars

    Directory of Open Access Journals (Sweden)

    Schönherr G.

    2014-01-01

    Full Text Available The emission characteristics of X-ray pulsars are governed by magnetospheric accretion within the Alfvén radius, leading to a direct coupling of accretion column properties and interactions at the magnetosphere. The complexity of the physical processes governing the formation of radiation within the accreted, strongly magnetized plasma has led to several sophisticated theoretical modelling efforts over the last decade, dedicated to either the formation of the broad-band continuum, the formation of cyclotron resonance scattering features (CRSFs) or the formation of pulse profiles. While these individual approaches are powerful in themselves, they quickly reach their limits when aiming at a quantitative comparison to observational data. Too many fundamental parameters describing the formation of the accretion columns and the systems’ overall geometry are unconstrained, and different models are often based on different fundamental assumptions, while everything is intertwined in the observed, highly phase-dependent spectra and energy-dependent pulse profiles. To name just one example: the (phase-variable) line width of the CRSFs is highly dependent on the plasma temperature, the existence of B-field gradients (geometry) and the observation angle, parameters which, in turn, drive the continuum radiation and are driven by the overall two-pole geometry for the light-bending model, respectively. This renders a parallel assessment of all available spectral and timing information by a compatible across-models approach indispensable. In a collaboration of theoreticians and observers, we have been working on a model unification project over the last years, bringing together theoretical calculations of the Comptonized continuum, Monte Carlo simulations and radiation transfer calculations of CRSFs, as well as a general relativity (GR) light-bending model for ray tracing of the incident emission pattern from both magnetic poles. The ultimate goal is to implement a

  15. A participatory modelling approach to developing a numerical sediment dynamics model

    Science.gov (United States)

    Jones, Nicholas; McEwen, Lindsey; Parker, Chris; Staddon, Chad

    2016-04-01

    Fluvial geomorphology is recognised as an important consideration in policy and legislation in the management of river catchments. Despite this recognition, limited knowledge exchange occurs between scientific researchers and river management practitioners. An example of this can be found within the limited uptake of numerical models of sediment dynamics by river management practitioners in the United Kingdom. The uptake of these models amongst the applied community is important as they have the potential to articulate how, at the catchment-scale, the impacts of management strategies of land-use change affect sediment dynamics and resulting channel quality. This paper describes and evaluates a new approach which involves river management stakeholders in an iterative and reflexive participatory modelling process. The aim of this approach was to create an environment for knowledge exchange between the stakeholders and the research team in the process of co-constructing a model. This process adopted a multiple case study approach, involving four groups of river catchment stakeholders in the United Kingdom. These stakeholder groups were involved in several stages of the participatory modelling process including: requirements analysis, model design, model development, and model evaluation. Stakeholders have provided input into a number of aspects of the modelling process, such as: data requirements, user interface, modelled processes, model assumptions, model applications, and model outputs. This paper will reflect on this process, in particular: the innovative methods used, data generated, and lessons learnt.

  16. An implicit approach to model plant infestation by insect pests.

    Science.gov (United States)

    Lopes, Christelle; Spataro, Thierry; Doursat, Christophe; Lapchin, Laurent; Arditi, Roger

    2007-09-07

    Various spatial approaches were developed to study the effect of spatial heterogeneities on population dynamics. We present in this paper a flux-based model to describe an aphid-parasitoid system in a closed and spatially structured environment, i.e. a greenhouse. Derived from previous work and adapted to host-parasitoid interactions, our model represents the level of plant infestation as a continuous variable corresponding to the number of plants bearing a given density of pests at a given time. The variation of this variable is described by a partial differential equation. It is coupled to an ordinary differential equation and a delay-differential equation that describe the parasitized host population and the parasitoid population, respectively. We have applied our approach to the pest Aphis gossypii and to one of its parasitoids, Lysiphlebus testaceipes, in a melon greenhouse. Numerical simulations showed that, regardless of the number and distribution of hosts in the greenhouse, the aphid population is slightly larger if parasitoids display a type III rather than a type II functional response. However, the population dynamics depend on the initial distribution of hosts and the initial density of parasitoids released, which is interesting for biological control strategies. Sensitivity analysis showed that the delay in the parasitoid equation and the growth rate of the pest population are crucial parameters for predicting the dynamics. We demonstrate here that such a flux-based approach generates relevant predictions with a more synthetic formalism than a common plant-by-plant model. We also explain how this approach can be better adapted to test different management strategies and to manage crops of several greenhouses.

  17. Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Lomov, I; Antoun, T; Vorobiev, O

    2009-12-16

    Accurate representation of discontinuities such as joints and faults is a key ingredient for high-fidelity modeling of shock propagation in geologic media. The following study was done to improve the treatment of discontinuities (joints) in the Eulerian hydrocode GEODYN (Lomov and Liu 2005). Lagrangian methods with conforming meshes and explicit inclusion of joints in the geologic model are well suited for such an analysis. Unfortunately, current meshing tools are unable to automatically generate adequate hexahedral meshes for large numbers of irregular polyhedra. Another concern is that joint stiffness in such explicit computations requires significantly reduced time steps, with negative implications for both the efficiency and quality of the numerical solution. An alternative approach is to use non-conforming meshes and embed joint information into regular computational elements. However, once slip displacement on the joints becomes comparable to the zone size, Lagrangian (even non-conforming) meshes can suffer from tangling and decreased time step problems. The use of non-conforming meshes in an Eulerian solver may alleviate these difficulties and provide a viable numerical approach for modeling the effects of faults on the dynamic response of geologic materials. We studied shock propagation in jointed/faulted media using a Lagrangian and two Eulerian approaches. To investigate the accuracy of this joint treatment, the GEODYN calculations have been compared with results from the Lagrangian code GEODYN-L, which uses an explicit treatment of joints via common-plane contact. We explore two approaches to joint treatment in the code, one for joints with finite thickness and the other for tight joints. In all cases the sliding interfaces are tracked explicitly without homogenization or blending the joint and block response into an average response. In general, rock joints will introduce an increase in normal compliance in addition to a reduction in shear strength.

  18. A Quantitative Model-Driven Comparison of Command Approaches in an Adversarial Process Model

    Science.gov (United States)

    2007-06-01

    12th ICCRTS, “Adapting C2 to the 21st Century”: A Quantitative Model-Driven Comparison of Command Approaches in an Adversarial Process Model. Lenahan identified metrics and techniques for adversarial C2 process modeling. We intend to further that work by developing a set of adversarial process ...

  19. Fugacity superposition: a new approach to dynamic multimedia fate modeling.

    Science.gov (United States)

    Hertwich, E G

    2001-08-01

    The fugacities, concentrations, or inventories of pollutants in environmental compartments as determined by multimedia environmental fate models of the Mackay type can be superimposed on each other. This is true for both steady-state (level III) and dynamic (level IV) models. Any problem in multimedia fate models with linear, time-invariant transfer and transformation coefficients can be solved through a superposition of a set of n independent solutions to a set of coupled, homogeneous first-order differential equations, where n is the number of compartments in the model. For initial condition problems in dynamic models, the initial inventories can be separated, e.g. by a compartment. The solution is obtained by adding the single-compartment solutions. For time-varying emissions, a convolution integral is used to superimpose solutions. The advantage of this approach is that the differential equations have to be solved only once. No numeric integration is required. Alternatively, the dynamic model can be simplified to algebraic equations using the Laplace transform. For time-varying emissions, the Laplace transform of the model equations is simply multiplied with the Laplace transform of the emission profile. It is also shown that the time-integrated inventories of the initial conditions problems are the same as the inventories in the steady-state problem. This implies that important properties of pollutants such as potential dose, persistence, and characteristic travel distance can be derived from the steady state.
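
The superposition property described above can be illustrated numerically for a linear, time-invariant two-compartment model: the response to combined initial inventories equals the sum of the single-compartment responses. The transfer matrix below is an illustrative assumption, not a parameterized Mackay model.

```python
# Sketch of fugacity superposition for a linear two-compartment model:
# dN/dt = A N, with A a constant (time-invariant) transfer matrix.

def step(N, A, dt):
    # one explicit Euler step of dN/dt = A N
    return [N[i] + dt * sum(A[i][j] * N[j] for j in range(len(N)))
            for i in range(len(N))]

def solve(N0, A, dt=0.001, steps=5000):
    N = list(N0)
    for _ in range(steps):
        N = step(N, A, dt)
    return N

# Illustrative transfer matrix: loss from each compartment plus exchange.
A = [[-0.30, 0.05],
     [ 0.10, -0.20]]

full = solve([1.0, 2.0], A)      # combined initial inventories
part1 = solve([1.0, 0.0], A)     # inventory only in compartment 1
part2 = solve([0.0, 2.0], A)     # inventory only in compartment 2
superposed = [p1 + p2 for p1, p2 in zip(part1, part2)]
```

Because the dynamics are linear, `full` and `superposed` agree to floating-point precision; this is why the differential equations only need to be solved once per initial-condition component.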

  20. A novel approach to modeling spacecraft spectral reflectance

    Science.gov (United States)

    Willison, Alexander; Bédard, Donald

    2016-10-01

    Simulated spectrometric observations of unresolved resident space objects are required for the interpretation of quantities measured by optical telescopes. This allows for their characterization as part of regular space surveillance activity. A peer-reviewed spacecraft reflectance model is necessary to help improve the understanding of characterization measurements. With this objective in mind, a novel approach to model spacecraft spectral reflectance as an overall spectral bidirectional reflectance distribution function (sBRDF) is presented. A spacecraft's overall sBRDF is determined using its triangular-faceted computer-aided design (CAD) model and the empirical sBRDF of its homogeneous materials. The CAD model is used to determine the proportional contribution of each homogeneous material to the overall reflectance. Each empirical sBRDF is contained in look-up tables developed from measurements made over a range of illumination and reflection geometries using simple interpolation and extrapolation techniques. A demonstration of the spacecraft reflectance model is provided through simulation of an optical ground truth characterization using the Canadian Advanced Nanospace eXperiment-1 Engineering Model nanosatellite as the subject. Validation of the reflectance model is achieved through a qualitative comparison of simulated and measured quantities.
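
The look-up idea can be sketched as follows: each homogeneous material's reflectance is tabulated over geometry, queried by linear interpolation (with flat extrapolation beyond the measured range), and the spacecraft's overall value is the facet-area-weighted sum over materials. The angle grid, table values and weights are illustrative assumptions, not the paper's data, and the table is one-dimensional here rather than a full sBRDF.

```python
def interp1(xs, ys, x):
    # piecewise-linear interpolation with flat extrapolation at the ends
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])

angles = [0.0, 30.0, 60.0, 90.0]        # illumination angle grid (deg)
mli = [0.80, 0.70, 0.40, 0.10]          # tabulated reflectance, material A
solar_cell = [0.30, 0.25, 0.15, 0.05]   # tabulated reflectance, material B

def overall_brdf(angle, weights=(0.6, 0.4)):
    # facet-area-weighted combination of the two homogeneous materials
    return (weights[0] * interp1(angles, mli, angle)
            + weights[1] * interp1(angles, solar_cell, angle))

value = overall_brdf(45.0)
```

Queries outside the measured geometry fall back to the nearest tabulated value, one simple choice for the extrapolation the record mentions.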

  1. Cancer systems biology and modeling: microscopic scale and multiscale approaches.

    Science.gov (United States)

    Masoudi-Nejad, Ali; Bidkhori, Gholamreza; Hosseini Ashtiani, Saman; Najafi, Ali; Bozorgmehr, Joseph H; Wang, Edwin

    2015-02-01

    Cancer has become known as a complex and systematic disease on macroscopic, mesoscopic and microscopic scales. Systems biology employs state-of-the-art computational theories and high-throughput experimental data to model and simulate complex biological procedures such as cancer, which involves genetic and epigenetic, in addition to intracellular and extracellular complex interaction networks. In this paper, different systems biology modeling techniques such as systems of differential equations, stochastic methods, Boolean networks, Petri nets, cellular automata methods and agent-based systems are concisely discussed. We have compared the mentioned formalisms and tried to address the span of applicability they can bear on emerging cancer modeling and simulation approaches. Different scales of cancer modeling, namely, microscopic, mesoscopic and macroscopic scales are explained followed by an illustration of angiogenesis in microscopic scale of the cancer modeling. Then, the modeling of cancer cell proliferation and survival are examined on a microscopic scale and the modeling of multiscale tumor growth is explained along with its advantages.

  2. Modeling the crop transpiration using an optimality-based approach

    Institute of Scientific and Technical Information of China (English)

    Stanislaus J. Schymanski; Murugesu Sivapalan

    2008-01-01

    Evapotranspiration constitutes more than 80% of the long-term water balance in Northern China. In this area, crop transpiration due to large areas of agriculture and irrigation is responsible for the majority of evapotranspiration. A model for crop transpiration is therefore essential for estimating agricultural water consumption and understanding its feedback to the environment. However, most existing hydrological models usually calculate transpiration by relying on parameter calibration against local observations, and do not take into account crop feedback to the ambient environment. This study presents an optimality-based ecohydrology model that couples an ecological hypothesis, the photosynthetic process, stomatal movement, water balance, root water uptake and crop senescence, with the aim of predicting crop characteristics, CO2 assimilation and water balance based only on given meteorological data. Field experiments were conducted in the Weishan Irrigation District of Northern China to evaluate the performance of the model. Agreement between simulation and measurement was achieved for CO2 assimilation, evapotranspiration and soil moisture content. The vegetation optimality was proven valid for crops and the model was applicable to both C3 and C4 plants. Due to the simple scheme of the optimality-based approach, as well as its capability for modeling dynamic interactions between crops and the water cycle without prior vegetation information, this methodology is potentially useful to couple with a distributed hydrological model for application at the watershed scale.

  3. A DYNAMICAL SYSTEM APPROACH IN MODELING TECHNOLOGY TRANSFER

    Directory of Open Access Journals (Sweden)

    Hennie Husniah

    2016-05-01

    Full Text Available In this paper we discuss a mathematical model of two-party technology transfer from a leader to a follower. The model is reconstructed via a dynamical system approach from the known standard Raz and Assa model, and we found some important conclusions which were not discussed in the original model. The model assumes that in the absence of technology transfer from the leader to the follower, both the leader and the follower have the capability to grow independently, with a known upper limit of development. We obtain a rich mathematical structure for the steady-state solution of the model. We discuss a special situation in which the upper limit of the technological development of the follower is higher than that of the leader, but the leader has started earlier than the follower in implementing the technology. In this case we show that a paradox, in which the follower is unable to reach its original upper limit of technological development, can appear whenever the transfer rate is sufficiently high. We propose a new model to increase realism, so that any technology transfer rate can only have a positive effect in accelerating the growth of the follower towards its original upper limit of development.
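
One plausible reading of such leader-follower dynamics can be sketched as coupled logistic growth with a transfer term proportional to the leader-follower gap. These equations and all parameter values are assumptions for illustration, not the reconstructed Raz-Assa model, but they reproduce the paradox: with a strong transfer rate the follower locks onto the leader's level and never reaches its own higher upper limit.

```python
# Hypothetical sketch: leader L and follower F grow logistically toward their
# own upper limits KL and KF; transfer adds g * (L - F) to the follower.

def simulate(KL=1.0, KF=1.5, rL=0.5, rF=0.5, g=0.0, L0=0.5, F0=0.01,
             dt=0.01, steps=4000):
    L, F = L0, F0
    for _ in range(steps):
        dL = rL * L * (1 - L / KL)
        dF = rF * F * (1 - F / KF) + g * (L - F)
        L += dL * dt
        F += dF * dt
    return L, F

L_no, F_no = simulate(g=0.0)   # no transfer: follower approaches KF = 1.5
L_hi, F_hi = simulate(g=2.0)   # strong transfer pins F near the leader's level
```

Without transfer the follower reaches its own limit; with strong transfer it stalls near the leader's lower limit, which is the paradox the abstract describes.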

  4. Inverse modeling approach to allogenic karst system characterization.

    Science.gov (United States)

    Dörfliger, N; Fleury, P; Ladouche, B

    2009-01-01

    Allogenic karst systems function in a particular way that is influenced by the type of water infiltrating through river water losses, by karstification processes, and by water quality. Management of this system requires a good knowledge of its structure and functioning, for which a new methodology based on an inverse modeling approach appears to be well suited. This approach requires both spring and river inflow discharge measurements and a continuous record of chemical parameters in the river and at the spring. The inverse model calculates unit hydrographs and the impulse responses of fluxes from rainfall hydraulic head at the spring or rainfall flux data, the purpose of which is hydrograph separation. Hydrograph reconstruction is done using rainfall and river inflow data as model input and enables definition at each time step of the ratio of each component. Using chemical data, representing event and pre-event water, as input, it is possible to determine the origin of spring water (either fast flow through the epikarstic zone or slow flow through the saturated zone). This study made it possible to improve a conceptual model of allogenic karst system functioning. The methodology is used to study the Bas-Agly and the Cent Font karst systems, two allogenic karst systems in Southern France.
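
The unit-hydrograph component of such an inverse model can be sketched as a discrete convolution of an input series (rainfall or river-loss inflow) with an impulse response; the series below are illustrative, not the Bas-Agly or Cent Font data.

```python
# Spring discharge as the discrete convolution of an input series with a
# unit hydrograph (impulse response). Values are illustrative.

def convolve(inflow, unit_hydrograph):
    n, m = len(inflow), len(unit_hydrograph)
    out = [0.0] * (n + m - 1)
    for i, x in enumerate(inflow):
        for j, h in enumerate(unit_hydrograph):
            out[i + j] += x * h   # each input pulse spreads over m time steps
    return out

rain = [0.0, 10.0, 0.0, 0.0, 5.0]   # input per time step (e.g. mm)
uh = [0.1, 0.5, 0.3, 0.1]           # impulse response, sums to 1
discharge = convolve(rain, uh)
```

Because the unit hydrograph sums to one, total discharge equals total input, a useful sanity check when separating hydrograph components.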

  5. A secured e-tendering modeling using misuse case approach

    Science.gov (United States)

    Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman

    2016-08-01

    Major risk factors relating to electronic transactions may lead to destructive impacts on trust and transparency in the process of tendering. Currently, electronic tendering (e-tendering) systems still remain uncertain in issues relating to legal and security compliance and most importantly it has an unclear security framework. Particularly, the available systems are lacking in addressing integrity, confidentiality, authentication, and non-repudiation in e-tendering requirements. Thus, one of the challenges in developing an e-tendering system is to ensure the system requirements include the function for secured and trusted environment. Therefore, this paper aims to model a secured e-tendering system using misuse case approach. The modeling process begins with identifying the e-tendering process, which is based on the Australian Standard Code of Tendering (AS 4120-1994). It is followed by identifying security threats and their countermeasure. Then, the e-tendering was modelled using misuse case approach. The model can contribute to e-tendering developers and also to other researchers or experts in the e-tendering domain.

  6. Multiple comparisons in genetic association studies: a hierarchical modeling approach.

    Science.gov (United States)

    Yi, Nengjun; Xu, Shizhong; Lou, Xiang-Yang; Mallick, Himel

    2014-02-01

    Multiple comparisons or multiple testing has been viewed as a thorny issue in genetic association studies aiming to detect disease-associated genetic variants from a large number of genotyped variants. We alleviate the problem of multiple comparisons by proposing a hierarchical modeling approach that is fundamentally different from the existing methods. The proposed hierarchical models simultaneously fit as many variables as possible and shrink unimportant effects towards zero. Thus, the hierarchical models yield more efficient estimates of parameters than the traditional methods that analyze genetic variants separately, and also coherently address the multiple comparisons problem due to largely reducing the effective number of genetic effects and the number of statistically "significant" effects. We develop a method for computing the effective number of genetic effects in hierarchical generalized linear models, and propose a new adjustment for multiple comparisons, the hierarchical Bonferroni correction, based on the effective number of genetic effects. Our approach not only increases the power to detect disease-associated variants but also controls the Type I error. We illustrate and evaluate our method with real and simulated data sets from genetic association studies. The method has been implemented in our freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
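
One common way to express the idea of a reduced effective number of effects under shrinkage is the trace of the hat matrix of a ridge-type fit, sum of d_j^2 / (d_j^2 + lambda) over the singular values d_j of the design matrix; a "hierarchical Bonferroni" threshold then divides alpha by that number rather than by the raw variant count. This particular recipe is an assumption for illustration and may differ from the BhGLM implementation.

```python
# Effective number of parameters under ridge-type shrinkage, and the
# resulting (less conservative) Bonferroni-style threshold.

def effective_number(singular_values, lam):
    # trace of the ridge hat matrix: sum_j d_j^2 / (d_j^2 + lam)
    return sum(d * d / (d * d + lam) for d in singular_values)

def hierarchical_bonferroni(alpha, singular_values, lam):
    m_eff = effective_number(singular_values, lam)
    return alpha / m_eff

d = [3.0, 2.0, 1.0, 0.5, 0.1]   # illustrative singular values of the design
threshold = hierarchical_bonferroni(0.05, d, lam=1.0)
naive = 0.05 / len(d)           # classical Bonferroni over all 5 effects
```

With shrinkage (lam > 0) the effective number is smaller than the raw count, so the adjusted threshold is less stringent than the classical Bonferroni cutoff, which is how power is gained while still controlling Type I error.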

  7. Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.

    Science.gov (United States)

    Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs) shows how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First, we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
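
The network-construction step can be sketched minimally: hypothetical agents and a deterministic conflict rule stand in for the nutritionally explicit ABM, and out-degree serves as a simple dominance-rank proxy:

```python
import random

# Hypothetical sketch: build a dominance network from pairwise conflicts in a
# simulated foraging group, then rank agents by out-degree (individuals dominated).
random.seed(1)
n = 6
strength = [random.random() for _ in range(n)]   # stand-in for nutritional state

dominates = {i: set() for i in range(n)}         # directed edges: winner -> loser
for i in range(n):
    for j in range(i + 1, n):
        winner, loser = (i, j) if strength[i] > strength[j] else (j, i)
        dominates[winner].add(loser)

ranks = sorted(range(n), key=lambda v: len(dominates[v]), reverse=True)
top = ranks[0]
print(top == max(range(n), key=strength.__getitem__))   # prints True
```

With this deterministic rule the strongest agent dominates everyone, so the network is a perfectly linear hierarchy; a stochastic conflict rule would produce the noisier structures compared against real animal data in the abstract.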

  8. Cavity approach for modeling and fitting polymer stretching

    CERN Document Server

    Massucci, Francesco Alessandro; Vicente, Conrad J Pérez

    2014-01-01

    The mechanical properties of molecules are today captured by single molecule manipulation experiments, so that polymer features are tested at a nanometric scale. Yet devising mathematical models to get further insight beyond the commonly studied force-elongation relation is typically hard. Here we draw from techniques developed in the context of disordered systems to solve models for single and double-stranded DNA stretching in the limit of a long polymeric chain. Since we directly derive the marginals for the molecule local orientation, our approach allows us to readily calculate the experimental elongation as well as other observables at will. As an example, we evaluate the correlation length as a function of the stretching force. Furthermore, we are able to fit successfully our solution to real experimental data. Although the model is admittedly phenomenological, our findings are very sound. For single-stranded DNA our solution yields the correct (monomer) scale and, yet more importantly, the right pers...

  9. Autonomous Cleaning of Corrupted Scanned Documents - A Generative Modeling Approach

    CERN Document Server

    Dai, Zhenwen

    2012-01-01

    We study the task of cleaning scanned text documents that are strongly corrupted by dirt such as manual line strokes, spilled ink etc. We aim at autonomously removing dirt from a single letter-size page based only on the information the page contains. Our approach, therefore, has to learn character representations without supervision and requires a mechanism to distinguish learned representations from irregular patterns. To learn character representations, we use a probabilistic generative model parameterizing pattern features, feature variances, the features' planar arrangements, and pattern frequencies. The latent variables of the model describe pattern class, pattern position, and the presence or absence of individual pattern features. The model parameters are optimized using a novel variational EM approximation. After learning, the parameters represent, independent of their absolute position, planar feature arrangements and their variances. A quality measure defined based on the learned representation the...

  10. Oscillation threshold of a clarinet model: a numerical continuation approach

    CERN Document Server

    Karkar, Sami; Cochelin, Bruno; 10.1121/1.3651231

    2012-01-01

    This paper focuses on the oscillation threshold of single reed instruments. Several characteristics such as blowing pressure at threshold, regime selection, and playing frequency are known to change radically when taking into account the reed dynamics and the flow induced by the reed motion. Previous works have shown interesting tendencies, using analytical expressions with simplified models. In the present study, a more elaborate physical model is considered. The influence of several parameters, depending on the reed properties, the design of the instrument, or the control operated by the player, is studied. Previous results on the influence of the reed resonance frequency are confirmed. New results concerning the simultaneous influence of two model parameters on oscillation threshold, regime selection and playing frequency are presented and discussed. The authors use a numerical continuation approach. Numerical continuation consists in following a given solution of a set of equations when a parameter varie...
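
Natural-parameter continuation, the simplest form of the technique described, can be sketched on a toy equation. This is not the clarinet model; f(x, p) = x² − p stands in for the instrument equations, and the branch x = √p is followed as p varies, reusing the previous solution as the Newton starting guess:

```python
import math

# Minimal natural-parameter continuation sketch: follow the solution branch of
# f(x, p) = x**2 - p = 0 as the parameter p increases from 1.0 to 2.0.

def newton(f, df, x0, tol=1e-12):
    x = x0
    for _ in range(50):
        x -= f(x) / df(x)          # standard Newton update
        if abs(f(x)) < tol:
            break
    return x

branch = []
x = 1.0                            # known solution at p = 1
for k in range(11):
    p = 1.0 + 0.1 * k              # step the parameter
    x = newton(lambda y: y * y - p, lambda y: 2 * y, x)   # warm start from last x
    branch.append((p, x))

print(abs(branch[-1][1] - math.sqrt(2.0)) < 1e-9)   # prints True
```

Real continuation codes (as in the paper) use pseudo-arclength parameterisation so that folds in the branch can be traversed; the warm-starting idea is the same.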

  11. Forecasting wind-driven wildfires using an inverse modelling approach

    Directory of Open Access Journals (Sweden)

    O. Rios

    2013-12-01

    Full Text Available A technology able to rapidly forecast wildfire dynamics would lead to a paradigm shift in the response to emergencies, providing the Fire Service with essential information about the on-going fire. The article at hand presents and explores a novel methodology to forecast wildfire dynamics in wind-driven conditions, using real-time data assimilation and inverse modelling. The forecasting algorithm combines Rothermel's rate of spread theory with a perimeter expansion model based on Huygens principle and solves the optimisation problem with a tangent linear approach and forward automatic differentiation. Its potential is investigated using synthetic data and evaluated in different wildfire scenarios. The results show the high capacity of the method to quickly predict the location of the fire front with a positive lead time (ahead of the event). This work opens the door to further advances in the framework and to more sophisticated models while keeping the computational time suitable for operational use.
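
The assimilate-then-forecast loop can be illustrated with a toy inversion, far simpler than the Rothermel-based algorithm described above: noisy observations of the head-fire position x(t) = x0 + R·t are assimilated to recover the rate of spread R, which is then used to forecast ahead of the event. All numbers are invented for illustration:

```python
import random

# Toy inverse-modelling sketch: recover a constant rate of spread R from noisy
# front-position observations, then forecast the front position with lead time.
random.seed(0)
R_true, x0 = 2.0, 5.0                    # "unknown" spread rate (m/min) and origin
times = [0, 5, 10, 15, 20]
obs = [x0 + R_true * t + random.gauss(0, 0.1) for t in times]

# Closed-form least squares for the slope (the assimilation/inversion step).
n = len(times)
tbar = sum(times) / n
xbar = sum(obs) / n
R_est = sum((t - tbar) * (x - xbar) for t, x in zip(times, obs)) \
        / sum((t - tbar) ** 2 for t in times)

forecast_30 = xbar + R_est * (30 - tbar)  # positive lead time: position at t = 30 min
print(abs(R_est - R_true) < 0.2)          # prints True: spread rate recovered
```

The paper's method solves a much richer optimisation (perimeter expansion, tangent linear model, automatic differentiation), but the structure — fit model parameters to the observed front, then run the model forward — is the same.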

  12. Lightning Modelling: From 3D to Circuit Approach

    Science.gov (United States)

    Moussa, H.; Abdi, M.; Issac, F.; Prost, D.

    2012-05-01

    The topic of this study is electromagnetic environment and electromagnetic interference (EMI) effects, specifically the modelling of lightning indirect effects [1] on aircraft electrical systems present on remote and highly exposed equipment, such as the nose landing gear (NLG) and the nacelle, through a circuit approach. The main goal of the presented work, funded by the French national project PREFACE, is to propose a simple equivalent electrical circuit to represent a geometrical structure, taking into account the mutual inductances, self-inductances, and resistances, which play a fundamental role in the lightning current distribution. This model is then intended to be coupled to a functional one, describing a power train chain composed of a converter, a shielded power harness, and a motor or a set of resistors used as a load for the converter. The novelty here is to provide a pre-sizing qualitative approach that allows integration choices to be explored in pre-design phases. This tool is intended to offer a user-friendly way of replying rapidly to calls for tender, taking into account the lightning constraints. Two cases are analysed: first, an NLG composed of tubular pieces that can be easily approximated by equivalent cylindrical straight conductors, so that the passive R, L, M elements of the structure can be extracted through analytical engineering formulas such as those implemented in the partial element equivalent circuit (PEEC) [2] technique; second, the same approach is intended to be applied to an electrical de-icing nacelle sub-system.
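
The kind of analytical engineering formula used in PEEC-style extraction can be shown concretely. The sketch below uses one common approximation for the partial self-inductance of a straight cylindrical conductor; the geometry values are illustrative, not from the PREFACE project:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability (H/m)

def partial_self_inductance(length_m, radius_m):
    """Common engineering approximation (valid for length >> radius) for the
    partial self-inductance of a straight cylindrical conductor, of the kind
    used in PEEC-style circuit extraction: L = mu0*l/(2*pi) * (ln(2*l/r) - 1)."""
    return MU0 * length_m / (2 * math.pi) * (math.log(2 * length_m / radius_m) - 1)

# Illustrative values: a 0.5 m strut segment of 20 mm radius, roughly
# landing-gear sized.
L = partial_self_inductance(0.5, 0.02)
print(round(L * 1e9))   # prints 291 (inductance in nH)
```

Each tubular piece of the NLG would contribute one such partial R, L element (plus mutual terms between segments), and the resulting network is what gets coupled to the functional power-train model.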

  13. A Random Matrix Approach for Quantifying Model-Form Uncertainties in Turbulence Modeling

    CERN Document Server

    Xiao, Heng; Ghanem, Roger G

    2016-01-01

    With the ever-increasing use of Reynolds-Averaged Navier-Stokes (RANS) simulations in mission-critical applications, the quantification of model-form uncertainty in RANS models has attracted attention in the turbulence modeling community. Recently, a physics-based, nonparametric approach for quantifying model-form uncertainty in RANS simulations has been proposed, where Reynolds stresses are projected to physically meaningful dimensions and perturbations are introduced only in the physically realizable limits. However, a challenge associated with this approach is to assess the amount of information introduced in the prior distribution and to avoid imposing unwarranted constraints. In this work we propose a random matrix approach for quantifying model-form uncertainties in RANS simulations with the realizability of the Reynolds stress guaranteed. Furthermore, the maximum entropy principle is used to identify the probability distribution that satisfies the constraints from available information but without int...

  14. Value Delivery Architecture Modeling – A New Approach for Business Modeling

    Directory of Open Access Journals (Sweden)

    Joachim Metzger

    2015-08-01

    Full Text Available Complexity and uncertainty have evolved as important challenges for entrepreneurship in many industries. Value Delivery Architecture Modeling (VDAM) is a proposal for a new approach to business modeling that addresses these challenges. In addition to the creation of transparency and clarity, our approach supports the operationalization of business model ideas. VDAM is based on the combination of a new business modeling language called VDML, ontology building, and the implementation of a level of cross-company abstraction. The application of our new approach in the area of electric mobility in Germany, an industry sector with high levels of uncertainty and a lack of common understanding, shows several promising results: VDAM enables the development of an unambiguous and unbiased view on value creation. Additionally, it allows for several applications leading to a more informed decision towards the implementation of new business models.

  15. Bayesian network approach for modeling local failure in lung cancer

    Science.gov (United States)

    Oh, Jung Hun; Craft, Jeffrey; Al-Lozi, Rawan; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O; Bradley, Jeffrey D; Naqa, Issam El

    2011-01-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported in their prospective application. Based on recent studies of biomarker proteins’ role in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors within a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient method to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to the individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients. PMID:21335651

  16. Effective Model Approach to the Dense State of QCD Matter

    CERN Document Server

    Fukushima, Kenji

    2010-01-01

    The first-principle approach to the dense state of QCD matter, i.e. the lattice-QCD simulation at finite baryon density, is not under theoretical control for the moment. The effective model study based on QCD symmetries is a practical alternative. However the model parameters that are fixed by hadronic properties in the vacuum may have unknown dependence on the baryon chemical potential. We propose a new prescription to constrain the effective model parameters by the matching condition with the thermal Statistical Model. In the transitional region where thermal quantities blow up in the Statistical Model, deconfined quarks and gluons should smoothly take over the relevant degrees of freedom from hadrons and resonances. We use the Polyakov-loop coupled Nambu-Jona-Lasinio (PNJL) model as an effective description in the quark side and show how the matching condition is satisfied by a simple ansatz on the Polyakov loop potential. Our results favor a phase diagram with the chiral phase transition located at sligh...

  17. Exploring a type-theoretic approach to accessibility constraint modelling

    CERN Document Server

    Pogodalla, Sylvain

    2008-01-01

    The type-theoretic modelling of DRT that [degroote06] proposed features continuations for the management of the context in which a clause has to be interpreted. This approach, while keeping the standard definitions of quantifier scope, translates the rules of the accessibility constraints of discourse referents inside the semantic recipes. In this paper, we deal with additional rules for these accessibility constraints. In particular in the case of discourse referents introduced by proper nouns, that negation does not block, and in the case of rhetorical relations that structure discourses. We show how this continuation-based approach applies to those accessibility constraints and how we can consider the parallel management of various principles.

  18. Multiscale approach to modeling intrinsic dissipation in solids

    Science.gov (United States)

    Kunal, K.; Aluru, N. R.

    2016-08-01

    In this paper, we develop a multiscale approach to model intrinsic dissipation under high-frequency vibrations in solids. For vibrations with a timescale comparable to the phonon relaxation time, the local phonon distribution deviates from the equilibrium distribution. We extend the quasiharmonic (QHM) method to describe the dynamics under such a condition. The local deviation from the equilibrium state is characterized using a nonequilibrium stress tensor. A constitutive relation for the time evolution of the stress component is obtained. We then parametrize the evolution equation using the QHM method and a stochastic sampling approach. The stress relaxation dynamics is obtained using mode Langevin dynamics. Methods to obtain the input variables for the Langevin dynamics are discussed. The proposed methodology is used to obtain the dissipation rate Edissip for different cases. Frequency and size effects on Edissip are studied. The results are compared with those obtained using nonequilibrium molecular dynamics (MD).

  19. Model predictive control approach for a CPAP-device

    Directory of Open Access Journals (Sweden)

    Scheel Mathias

    2017-09-01

    Full Text Available The obstructive sleep apnoea syndrome (OSAS) is characterized by a collapse of the upper respiratory tract, resulting in a reduction of the blood oxygen concentration and an increase of the carbon dioxide (CO2) concentration, which causes repeated sleep disruptions. The gold standard to treat the OSAS is the continuous positive airway pressure (CPAP) therapy. The continuous pressure keeps the upper airway open and prevents the collapse of the upper respiratory tract and the pharynx. Most of the available CPAP-devices cannot maintain the pressure reference [1]. In this work a model predictive control approach is provided. This control approach has the possibility to include the patient’s breathing effort into the calculation of the control variable. Therefore a patient-individualized control strategy can be developed.
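
The receding-horizon idea can be sketched on a hypothetical first-order pressure model p[k+1] = a·p[k] + b·u[k] + w[k], where w stands in for the patient's breathing effort as a measured disturbance. The plant coefficients, weights, and disturbance signal are all assumptions, and the horizon is shortened to one step so the quadratic cost has a closed-form minimiser:

```python
import math

# Hedged MPC-style sketch for a simplified CPAP pressure loop (not the device
# model from the paper; a, b, r, and the breathing signal w are assumed).
a, b = 0.8, 0.5          # plant dynamics
p_ref = 10.0             # target mask pressure (cmH2O)
r = 0.001                # control-effort weight

p = 4.0                  # initial pressure
for k in range(100):
    w = 0.3 * math.sin(0.4 * k)                  # patient breathing effort (measured)
    # One-step horizon: minimise (a*p + b*u + w - p_ref)**2 + r*u**2 over u.
    u = b * (p_ref - a * p - w) / (b * b + r)    # quadratic cost => closed form
    p = a * p + b * u + w                        # apply control, advance the plant

print(abs(p - p_ref) < 0.1)   # prints True: pressure settles near the reference
```

Including w in the optimisation is what lets the controller pre-compensate for the breathing effort rather than merely react to it, which is the patient-individualisation argument made in the abstract.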

  20. Anomalous superconductivity in the tJ model; moment approach

    DEFF Research Database (Denmark)

    Sørensen, Mads Peter; Rodriguez-Nunez, J.J.

    1997-01-01

    By extending the moment approach of Nolting (Z. Phys. 225 (1972) 25) in the superconducting phase, we have constructed the one-particle spectral functions (diagonal and off-diagonal) for the tJ model in any dimensions. We propose that both the diagonal and the off-diagonal spectral functions...... Hartree shift which in the end result enlarges the bandwidth of the free carriers allowing us to take relatively high values of J/t and allowing superconductivity to live in the T-c-rho phase diagram, in agreement with numerical calculations in a cluster. We have calculated the static spin susceptibility......, chi(T), and the specific heat, C-v(T), within the moment approach. We find that all the relevant physical quantities show the signature of superconductivity at T-c in the form of kinks (anomalous behavior) or jumps, for low density, in agreement with recent published literature, showing a generic

  1. Parameter identification and global sensitivity analysis of Xinanjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng SONG

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters’ sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
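
The Morris screening step described above can be sketched on a toy three-parameter model (the Xinanjiang model itself has many more parameters; the model function, step size, and trajectory count here are illustrative):

```python
import random

# Morris elementary-effects screening: perturb one factor at a time from random
# base points and rank parameters by the mean absolute elementary effect (mu*).
random.seed(0)

def model(x1, x2, x3):
    return 4 * x1 + 0.5 * x2 + 0.01 * x3     # x1 dominates, x3 is negligible

delta = 0.1
effects = {0: [], 1: [], 2: []}
for _ in range(20):                          # 20 random base points
    x = [random.uniform(0.0, 0.9) for _ in range(3)]
    base = model(*x)
    for i in range(3):                       # perturb one factor at a time
        xp = list(x)
        xp[i] += delta
        effects[i].append((model(*xp) - base) / delta)

mu_star = {i: sum(abs(e) for e in es) / len(es) for i, es in effects.items()}
ranking = sorted(mu_star, key=mu_star.get, reverse=True)
print(ranking)   # prints [0, 1, 2]: x1 most influential, x3 least
```

In the two-step framework, only the parameters surviving this cheap qualitative ranking would be passed on to the more expensive variance-based (RSMSobol) quantification.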

  2. Spintronic device modeling and evaluation using modular approach to spintronics

    Science.gov (United States)

    Ganguly, Samiran

    Spintronics technology finds itself in an exciting stage today. Riding on the backs of rapid growth and impressive advances in materials and phenomena, it has started to make headway in the memory industry as solid state magnetic memories (STT-MRAM) and is considered a possible candidate to replace the CMOS when its scaling reaches physical limits. It is necessary to bring all these advances together in a coherent fashion to explore and evaluate the potential of spintronic devices. This work creates a framework for this exploration and evaluation based on Modular Approach to Spintronics, which encapsulate the physics of transport of charge and spin through materials and the phenomenology of magnetic dynamics and interaction in benchmarked elemental modules. These modules can then be combined together to form spin-circuit models of complex spintronic devices and structures which can be simulated using SPICE like circuit simulators. In this work we demonstrate how Modular Approach to Spintronics can be used to build spin-circuit models of functional spintronic devices of all types: memory, logic, and oscillators. We then show how Modular Approach to Spintronics can help identify critical factors behind static and dynamic dissipation in spintronic devices and provide remedies by exploring the use of various alternative materials and phenomena. Lastly, we show the use of Modular Approach to Spintronics in exploring new paradigms of computing enabled by the inherent physics of spintronic devices. We hope that this work will encourage more research and experiments that will establish spintronics as a viable technology for continued advancement of electronics.

  3. The two capacitor problem revisited: simple harmonic oscillator model approach

    CERN Document Server

    Lee, Keeyung

    2012-01-01

    The well-known two-capacitor problem, in which exactly half the stored energy disappears when a charged capacitor is connected to an identical capacitor, is discussed based on the mechanical harmonic oscillator model approach. In the mechanical harmonic oscillator model, it is shown first that exactly half the work done by a constant applied force is dissipated, irrespective of the form of the dissipation mechanism, when the system comes to a new equilibrium after a constant force is abruptly applied. This model is then applied to the energy loss mechanism in the capacitor charging problem or the two-capacitor problem. This approach allows a simple explanation of the energy dissipation mechanism in these problems and shows that the dissipated energy should always be exactly half the supplied energy, whether that is caused by Joule heating or by radiation. This paper, which provides a simple treatment of the energy dissipation mechanism in the two-capacitor problem, is suitable for all undergraduate...
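
The half-energy claim follows from charge conservation alone and can be checked numerically (the component values are arbitrary examples):

```python
# Worked check: connecting a charged capacitor C at voltage V to an identical
# uncharged capacitor dissipates exactly half the stored energy, independent
# of the dissipation mechanism (Joule heating or radiation).
C, V = 1e-6, 10.0                        # example values: 1 uF charged to 10 V

E_initial = 0.5 * C * V ** 2             # energy before connection
V_final = V / 2                          # charge conservation: Q shared equally
E_final = 2 * (0.5 * C * V_final ** 2)   # two capacitors, each at V/2

print(E_final / E_initial)   # prints 0.5
```

Algebraically, E_final = 2 · ½C(V/2)² = CV²/4, exactly half of E_initial = CV²/2, for any C and V.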

  4. A model-based approach to selection of tag SNPs

    Directory of Open Access Journals (Sweden)

    Sun Fengzhu

    2006-06-01

    Full Text Available Abstract Background Single Nucleotide Polymorphisms (SNPs) are the most common type of polymorphisms found in the human genome. Effective genetic association studies require the identification of sets of tag SNPs that capture as much haplotype information as possible. Tag SNP selection is analogous to the problem of data compression in information theory. According to Shannon's framework, the optimal tag set maximizes the entropy of the tag SNPs subject to constraints on the number of SNPs. This approach requires an appropriate probabilistic model. Compared to simple measures of Linkage Disequilibrium (LD), a good model of haplotype sequences can more accurately account for LD structure. It also provides machinery for the prediction of tagged SNPs and thereby for assessing the performance of tag sets through their ability to predict larger SNP sets. Results Here, we compute the description code-lengths of SNP data for an array of models and we develop tag SNP selection methods based on these models and the strategy of entropy maximization. Using data sets from the HapMap and ENCODE projects, we show that the hidden Markov model introduced by Li and Stephens outperforms the other models in several aspects: description code-length of SNP data, information content of tag sets, and prediction of tagged SNPs. This is the first use of this model in the context of tag SNP selection. Conclusion Our study provides strong evidence that the tag sets selected by our best method, based on the Li and Stephens model, outperform those chosen by several existing methods. The results also suggest that information content evaluated with a good model is more sensitive for assessing the quality of a tagging set than the correct prediction rate of tagged SNPs. In addition, we show that haplotype phase uncertainty has an almost negligible impact on the ability of good tag sets to predict tagged SNPs. This justifies the selection of tag SNPs on the basis of haplotype
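
The entropy-maximisation strategy can be sketched with a greedy stand-in (the paper's model-based approach is richer; the haplotype data and the budget of two tag SNPs here are toy assumptions):

```python
import math
from collections import Counter

# Greedy entropy-based tag SNP selection on toy data: at each step pick the
# SNP whose addition maximises the entropy of the restricted haplotypes.
haplotypes = ["0011", "0010", "1100", "1101", "0011", "1100"]   # 6 haplotypes, 4 SNPs

def entropy(haps, tags):
    """Shannon entropy (bits) of the haplotypes restricted to the tag SNPs."""
    counts = Counter(tuple(h[i] for i in tags) for h in haps)
    n = len(haps)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

tags = []
for _ in range(2):                       # greedily pick 2 tag SNPs
    best = max((s for s in range(4) if s not in tags),
               key=lambda s: entropy(haplotypes, tags + [s]))
    tags.append(best)

print(sorted(tags))   # prints [0, 3]: SNP 3 adds the most information beyond SNP 0
```

SNPs 0, 1, and 2 are nearly redundant in this toy data (each alone carries one bit), so the greedy step correctly reaches past them for SNP 3, which is the intuition behind entropy-maximising tag sets.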

  5. A modular approach to addressing model design, scale, and parameter estimation issues in distributed hydrological modelling

    Science.gov (United States)

    Leavesley, G.H.; Markstrom, S.L.; Restrepo, Pedro J.; Viger, R.J.

    2002-01-01

    A modular approach to model design and construction provides a flexible framework in which to focus the multidisciplinary research and operational efforts needed to facilitate the development, selection, and application of the most robust distributed modelling methods. A variety of modular approaches have been developed, but with little consideration for compatibility among systems and concepts. Several systems are proprietary, limiting any user interaction. The US Geological Survey modular modelling system (MMS) is a modular modelling framework that uses an open source software approach to enable all members of the scientific community to address collaboratively the many complex issues associated with the design, development, and application of distributed hydrological and environmental models. Implementation of a common modular concept is not a trivial task. However, it brings the resources of a larger community to bear on the problems of distributed modelling, provides a framework in which to compare alternative modelling approaches objectively, and provides a means of sharing the latest modelling advances. The concepts and components of the MMS are described and an example application of the MMS, in a decision-support system context, is presented to demonstrate current system capabilities. Copyright © 2002 John Wiley and Sons, Ltd.

  6. CM5: A pre-Swarm magnetic field model based upon the comprehensive modeling approach

    DEFF Research Database (Denmark)

    Sabaka, T.; Olsen, Nils; Tyler, Robert

    2014-01-01

    We have developed a model based upon the very successful Comprehensive Modeling (CM) approach using recent CHAMP, Ørsted, SAC-C and observatory hourly-means data from September 2000 to the end of 2013. This CM, called CM5, was derived from the algorithm that will provide a consistent line of Level...

  7. Interdependence: a new model for the global approach to disability

    Directory of Open Access Journals (Sweden)

    Nathan Grills

    2015-01-01

    Full Text Available Disability affects over 1 billion people, and the WHO estimates that over 80% of individuals with disability live in low and middle income countries, where access to health and social services to respond to disability is limited [1]. Compounding this is the fact that medical and technological approaches to disability, however needed, are usually very expensive. Yet, much can be done at low cost to increase the wellbeing of people with disability, and the church and Christians need to take a lead. The WHO’s definition of disability highlights the challenge to us in global health. It has been defined by the WHO as “the interaction between a person’s impairments and the attitudinal and environmental barriers that hinder their full and effective participation in society on an equal basis with others” [2]. This understanding of disability requires us to go beyond mere healing and towards inclusion in our response to chronic diseases and disability. This is known as the social model and requires societal attitudinal change and modification of disabling environments in order to facilitate those with disability to be included in our community and churches. These are good responses, but the church needs to consider alternative models to those that are currently promoted, which strive for independence as the ultimate endpoint. In this paper I introduce some disability-related articles in this issue and outline an approach that goes beyond the Social Model towards an Interdependence Model, which I think is a more Biblical model of disability and one which we Christians and churches in global health should consider. This model would go beyond changing society to accommodate people with disabilities towards acknowledging that they play an important part in our community and indeed in our church. We need those people with disability to contribute, love and bless those with and without disabilities. And of course those with disability need the love, care and

  8. The CONRAD approach to biokinetic modeling of DTPA decorporation therapy.

    Science.gov (United States)

    Breustedt, Bastian; Blanchardon, Eric; Bérard, Philippe; Fritsch, Paul; Giussani, Augusto; Lopez, Maria Antonia; Luciani, Andrea; Nosske, Dietmar; Piechowski, Jean; Schimmelpfeng, Jutta; Sérandour, Anne-Laure

    2010-10-01

    Diethylene Triamine Pentaacetic Acid (DTPA) is used for decorporation of plutonium because it is known to be able to enhance its urinary excretion for several days after treatment by forming stable Pu-DTPA complexes. The decorporation prevents accumulation in organs and results in a dosimetric benefit, which is difficult to quantify from bioassay data using existing models. The development of a biokinetic model describing the mechanisms of actinide decorporation by administration of DTPA was initiated as a task in the European COordinated Network on RAdiation Dosimetry (CONRAD). The systemic biokinetic model from Leggett et al. and the biokinetic model for DTPA compounds of International Commission on Radiological Protection Publication 53 were the starting points. A new model for biokinetics of administered DTPA based on physiological interpretation of 14C-labeled DTPA studies from literature was proposed by the group. Plutonium and DTPA biokinetics were modeled separately. The systems were connected by means of a second order kinetics process describing the chelation process of plutonium atoms and DTPA molecules to Pu-DTPA complexes. It was assumed that chelation only occurs in the blood and in systemic compartment ST0 (representing rapid turnover soft tissues), and that Pu-DTPA complexes and administered forms of DTPA share the same biokinetic behavior. First applications of the CONRAD approach showed that the enhancement of plutonium urinary excretion after administration of DTPA was strongly influenced by the chelation rate constant. Setting it to a high value resulted in a good fit to the observed data. However, the model was not yet satisfactory since the effects of repeated DTPA administration in a short time period cannot be predicted in a realistic way. In order to introduce more physiological knowledge into the model several questions still have to be answered. Further detailed studies of human contamination cases and experimental data will be needed in
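
The second-order chelation step described above can be sketched as a toy rate equation. The rate constant, amounts, and time step are hypothetical, not values from the CONRAD model:

```python
# Second-order chelation Pu + DTPA -> Pu-DTPA, integrated with forward Euler.
k = 0.5                                  # chelation rate constant (assumed units)
pu, dtpa, chelate = 1.0, 100.0, 0.0      # arbitrary units; DTPA in large excess
dt = 1e-4                                # time step (days)

for _ in range(int(1.0 / dt)):           # integrate over one day
    rate = k * pu * dtpa                 # second-order kinetics
    pu -= rate * dt
    dtpa -= rate * dt
    chelate += rate * dt

# With DTPA in excess, nearly all plutonium ends up chelated (and hence
# available for enhanced urinary excretion); total Pu is conserved.
print(pu < 0.01, abs(pu + chelate - 1.0) < 1e-6)   # prints True True
```

The strong dependence of the outcome on the product k·[DTPA] mirrors the abstract's observation that the predicted excretion enhancement is dominated by the chelation rate constant.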

  9. New Approaches in Reusable Booster System Life Cycle Cost Modeling

    Science.gov (United States)

    Zapata, Edgar

    2013-01-01

    This paper presents the results of a 2012 life cycle cost (LCC) study of hybrid Reusable Booster Systems (RBS) conducted by NASA Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC analysis, building on past work where applicable, but emphasizing the integration of new approaches in life cycle cost estimation. Specifically, the inclusion of industry processes/practices and indirect costs were a new and significant part of the analysis. The focus of LCC estimation has traditionally been from the perspective of technology, design characteristics, and related factors such as reliability. Technology has informed the cost related support to decision makers interested in risk and budget insight. This traditional emphasis on technology occurs even though it is well established that complex aerospace systems costs are mostly about indirect costs, with likely only partial influence in these indirect costs being due to the more visible technology products. Organizational considerations, processes/practices, and indirect costs are traditionally derived ("wrapped") only by relationship to tangible product characteristics. This traditional approach works well as long as it is understood that no significant changes, and by relation no significant improvements, are being pursued in the area of either the government acquisition or industry's indirect costs. In this sense then, most launch systems cost models ignore most costs. The alternative was implemented in this LCC study, whereby the approach considered technology and process/practices in balance, with as much detail for one as the other. This RBS LCC study has avoided point-designs, for now, instead emphasizing exploring the trade-space of potential technology advances joined with potential process/practice advances. Given the range of decisions, and all their combinations, it was necessary to create a model of the original model

  11. Approaches to Computer Modeling of Phosphate Hide-Out.

    Science.gov (United States)

    1984-06-28

    phosphate acts as a buffer to keep pH at a value above which acid corrosion occurs and below which caustic corrosion becomes significant. Difficulties are...ionization of dihydrogen phosphate: H2PO4- = H+ + HPO4(2-), K (B-7); H+ + OH- = H2O, 1/Kw (B-8); H2PO4- + OH- = HPO4(2-) + H2O, K/Kw (B-9). Such zero heat... (NRL Memorandum Report 5361, Approaches to Computer Modeling of Phosphate Hide-Out, K. A. S. Hardy and J. C

  12. A motivic approach to phase transitions in Potts models

    Science.gov (United States)

    Aluffi, Paolo; Marcolli, Matilde

    2013-01-01

    We describe an approach to the study of phase transitions in Potts models based on an estimate of the complexity of the locus of real zeros of the partition function, computed in terms of the classes in the Grothendieck ring of the affine algebraic varieties defined by the vanishing of the multivariate Tutte polynomial. We give completely explicit calculations for the examples of the chains of linked polygons and of the graphs obtained by replacing the polygons with their dual graphs. These are based on a deletion-contraction formula for the Grothendieck classes and on generating functions for splitting and doubling edges.
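The deletion-contraction formula the authors use for Grothendieck classes parallels the classical recursion for the Tutte polynomial itself: T(G) = T(G-e) + T(G/e) for an ordinary edge, with an x factor for bridges and a y factor for loops. A small sketch of that classical recursion, applied to the triangle graph (one of the simplest "linked polygon" building blocks):

```python
# Classical deletion-contraction for the Tutte polynomial of a multigraph,
# given as an edge list. Polynomials are dicts {(i, j): coeff} for x^i y^j.
# This illustrates the recursion pattern only; it is not the Grothendieck-
# class computation from the paper.

def connected(edges, a, b):
    """True if vertices a and b lie in the same component of `edges`."""
    seen, stack = {a}, [a]
    while stack:
        x = stack.pop()
        for u, v in edges:
            for y, z in ((u, v), (v, u)):
                if y == x and z not in seen:
                    seen.add(z)
                    stack.append(z)
    return b in seen

def contract(edges, u, v):
    """Identify vertex v with u (contraction of edge uv)."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def add_poly(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) + c
    return out

def shift(p, dx, dy):
    return {(i + dx, j + dy): c for (i, j), c in p.items()}

def tutte(edges):
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                       # loop: T = y * T(G - e)
        return shift(tutte(rest), 0, 1)
    if not connected(rest, u, v):    # bridge: T = x * T(G / e)
        return shift(tutte(contract(rest, u, v)), 1, 0)
    # ordinary edge: T = T(G - e) + T(G / e)
    return add_poly(tutte(rest), tutte(contract(rest, u, v)))

triangle = [(0, 1), (1, 2), (0, 2)]
t = tutte(triangle)   # x^2 + x + y
```

Evaluating the result at x = y = 1 counts spanning trees, so the triangle gives 3, a quick consistency check on the recursion.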

  13. Quiver Approach to Massive Gauge Bosons Beyond the Standard Model

    CERN Document Server

    Frampton, Paul Howard

    2013-01-01

    We address the question of the possible existence of massive gauge bosons beyond the $W^{\pm}$ and $Z^{0}$ of the standard model. Our intuitive and aesthetic approach is based on quiver theory. Examples thereof arise, for example, from compactification of the type IIB superstring on $AdS_5 \times S_5/Z_n$ orbifolds. We explore the quiver theory framework more generally than string theory. The practical question is what gauge bosons to look for at the upgraded LHC, in terms of color and electric charge, and of their couplings to quarks and leptons. Axigluons and bileptons are favored.

  14. Data mining approach to model the diagnostic service management.

    Science.gov (United States)

    Lee, Sun-Mi; Lee, Ae-Kyung; Park, Il-Su

    2006-01-01

    Korea has a National Health Insurance Program operated by the government-owned National Health Insurance Corporation, and diagnostic services are provided every two years for the insured and their family members. Developing a customer relationship management (CRM) system using data mining technology would be useful for improving the performance of diagnostic service programs. Under these circumstances, this study developed a model for diagnostic service management taking into account the characteristics of subjects, using a data mining approach. This study could be further used to develop an automated CRM system, contributing to an increase in the rate of receiving diagnostic services.

  15. Conceptual modelling approach of mechanical products based on functional surface

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A modelling framework based on functional surfaces is presented to support conceptual design of mechanical products. The framework organizes product information in an abstract and multilevel manner. It consists of two mapping processes: the function decomposition process and the form reconstitution process. The steady mapping relationship from function to form (function-functional surface-form) is realized by taking the functional surface as the middle layer. This greatly reduces the possibility of combinatorial explosion during function decomposition and form reconstitution. Finally, CAD tools are developed and an auto-bender machine is used to demonstrate the proposed approach.

  16. Design of Multithreaded Software The Entity-Life Modeling Approach

    CERN Document Server

    Sandén, Bo I

    2011-01-01

    This book assumes familiarity with threads (in a language such as Ada, C#, or Java) and introduces the entity-life modeling (ELM) design approach for certain kinds of multithreaded software. ELM focuses on "reactive systems," which continuously interact with the problem environment. These "reactive systems" include embedded systems, as well as such interactive systems as cruise controllers and automated teller machines.Part I covers two fundamentals: program-language thread support and state diagramming. These are necessary for understanding ELM and are provided primarily for reference. P

  17. Algebraic approach to small-world network models

    Science.gov (United States)

    Rudolph-Lilith, Michelle; Muller, Lyle E.

    2014-01-01

    We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
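The analytic results above can be checked numerically for small graphs. The sketch below builds the adjacency matrix of a directed Watts-Strogatz-style ring lattice and measures a simple asymmetry index, defined here as the fraction of edges without a reciprocal partner. That index definition is an assumption for illustration, not necessarily the one derived by Rudolph-Lilith and Muller.

```python
# Illustrative numerics (not the paper's closed-form result): a directed
# ring lattice with probabilistic rewiring, and a simple asymmetry index
# (fraction of non-reciprocated edges; the definition is an assumption).

import random

def ring_lattice(n, k, p_rewire, seed=0):
    """Each node sends k forward edges; each edge is rewired with prob p."""
    rng = random.Random(seed)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            if rng.random() < p_rewire:
                j = rng.randrange(n)
                while j == i or A[i][j]:   # no self-loops or duplicates
                    j = rng.randrange(n)
            A[i][j] = 1
    return A

def asymmetry_index(A):
    n = len(A)
    edges = sum(A[i][j] for i in range(n) for j in range(n))
    nonrecip = sum(A[i][j] * (1 - A[j][i])
                   for i in range(n) for j in range(n))
    return nonrecip / edges

A0 = ring_lattice(40, 2, 0.0)   # pure forward ring: nothing is reciprocated
a0 = asymmetry_index(A0)        # 1.0 for the unrewired forward ring
```

On the unrewired forward ring every edge points "clockwise", so no edge has a reciprocal and the index is exactly 1; rewiring can create reciprocated pairs and lower it.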

  18. Modelling Based Approach for Reconstructing Evidence of VOIP Malicious Attacks

    Directory of Open Access Journals (Sweden)

    Mohammed Ibrahim

    2015-05-01

    Full Text Available Voice over Internet Protocol (VoIP) is a new communication technology that uses the internet protocol to provide phone services. VoIP offers various benefits such as low monthly fees and cheaper rates for long distance and international calls. However, VoIP is accompanied by novel security threats. Criminals often take advantage of such security threats and commit illicit activities. These activities require digital forensic experts to acquire, analyse, reconstruct and provide digital evidence. Meanwhile, various methodologies and models have been proposed for detecting, analysing and providing digital evidence in VoIP forensics. However, at the time of writing this paper, no model has been formalized for the reconstruction of VoIP malicious attacks. Reconstruction of an attack scenario is an important technique in exposing unknown criminal acts. Hence, this paper strives to address that gap. We propose a model for reconstructing VoIP malicious attacks. To achieve that, a formal logic approach called Secure Temporal Logic of Action (S-TLA+) was adopted in rebuilding the attack scenario. The expected result of this model is to generate additional related evidence, whose consistency with the existing evidence can be determined by means of the S-TLA+ model checker.

  19. MODEL-BASED PERFORMANCE EVALUATION APPROACH FOR MOBILE AGENT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Li Xin; Mi Zhengkun; Meng Xudong

    2004-01-01

    Claimed as the next generation programming paradigm, mobile agent technology has attracted extensive interest in recent years. However, up to now, limited research effort has been devoted to the performance study of mobile agent systems, and most of this research focuses on agent behavior analysis, with the result that the models are hard to apply to mobile agent systems. To bridge the gap, a new performance evaluation model derived from the operation mechanisms of mobile agent platforms is proposed. Details are discussed for the design of companion simulation software, which can provide system performance measures such as the response time of the platform to a mobile agent. Further investigation follows on the determination of model parameters. Finally, a comparison is made between the model-based simulation results and the measurement-based real performance of mobile agent systems. The results show that the proposed model and the designed software are effective in evaluating the performance characteristics of mobile agent systems. The proposed approach can also be considered as the basis of performance analysis for large systems composed of multiple mobile agent platforms.

  20. Modelling hybrid stars in quark-hadron approaches

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, S. [FIAS, Frankfurt am Main (Germany); Dexheimer, V. [Kent State University, Department of Physics, Kent, OH (United States); Negreiros, R. [Federal Fluminense University, Gragoata, Niteroi (Brazil)

    2016-01-15

    The density in the core of neutron stars can reach values of about 5 to 10 times nuclear matter saturation density. It is, therefore, a natural assumption that hadrons may have dissolved into quarks under such conditions, forming a hybrid star. This star will have an outer region of hadronic matter and a core of quark matter or even a mixed state of hadrons and quarks. In order to investigate such phases, we discuss different model approaches that can be used in the study of compact stars as well as being applicable to a wider range of temperatures and densities. One major model ingredient, the role of quark interactions in the stability of massive hybrid stars, is discussed. In this context, possible conflicts with lattice QCD simulations are investigated. (orig.)

  1. Biogas Production Modelling: A Control System Engineering Approach

    Science.gov (United States)

    Stollenwerk, D.; Rieke, C.; Dahmen, M.; Pieper, M.

    2016-03-01

    Due to the Renewable Energy Act, Germany plans to increase the share of renewable energy carriers up to 60%. One of the main problems is the fluctuating supply of wind and solar energy. Here, biogas plants provide a solution, because demand-driven supply is possible. Before running such a plant, it is necessary to simulate and optimize the feeding strategy. Current simulation models are either very detailed, like the ADM1, which leads to very long optimization runtimes, or not accurate enough to capture the biogas production kinetics. Therefore, this paper provides a new model of a biogas plant that is easy to parametrize but also has the accuracy needed for output prediction. It is based on the control-system approach of system identification and is validated with laboratory results from a real biogas production testing facility.
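System identification in its simplest form can be sketched as fitting a low-order response model to measured data. The example below fits a first-order step response V(t) = K(1 - exp(-t/tau)) to synthetic gas-production data by a coarse least-squares grid search; the model structure, parameter values, and data are all hypothetical, and a real identification study would use measured plant data and a proper optimizer.

```python
# Minimal system-identification sketch: fit a first-order step-response
# model to (synthetic) cumulative gas production. K and tau are invented.

import math

def first_order(t, K, tau):
    return K * (1.0 - math.exp(-t / tau))

# synthetic "measured" data, generated from K=250, tau=4 (noise-free)
ts = [i * 0.5 for i in range(41)]
ys = [first_order(t, 250.0, 4.0) for t in ts]

def fit(ts, ys):
    """Coarse grid search for the least-squares (K, tau)."""
    best = None
    for K in range(100, 401, 5):
        for tau10 in range(10, 101, 2):        # tau from 1.0 to 10.0
            tau = tau10 / 10.0
            err = sum((first_order(t, K, tau) - y) ** 2
                      for t, y in zip(ts, ys))
            if best is None or err < best[0]:
                best = (err, K, tau)
    return best[1], best[2]

K_hat, tau_hat = fit(ts, ys)   # recovers K=250, tau=4.0 on noise-free data
```

On noise-free data the grid search recovers the generating parameters exactly; with measured data the same structure gives the best least-squares approximation within the grid.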

  2. The shell model approach: Key to hadron structure

    Energy Technology Data Exchange (ETDEWEB)

    Lipkin, H.J. (Weizmann Inst. of Science, Rehovoth (Israel). Dept. of Nuclear Physics)

    1989-08-14

    A shell model approach leads to a simple constituent quark model for hadron structure in which mesons and baryons consist only of constituent quarks. Hadron masses are the sums of the constituent quark effective masses and a hyperfine interaction inversely proportional to the product of these same masses. Hadron masses and magnetic moments are related by the assumption that the same effective mass parameter appears in the additive mass term, the hyperfine interaction, and the quark magnetic moment, both in mesons and baryons. The analysis pinpoints the physical assumptions needed for each relation and gives two new mass relations. Application to weak decays and recent polarized EMC data confirms conclusions previously obtained that the current quark contribution to the spin structure of the proton vanishes, but without need for the questionable assumption of SU(3) symmetry relating hyperon decays and proton structure. SU(3) symmetry breaking is clarified. 24 refs.

  3. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Both ideas produced a very efficient method that has the right theoretical property concerning robustness, the Bounded Relative Error one. Some examples illustrate the results.
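The rare-event difficulty mentioned in the talk is easy to demonstrate: crude Monte Carlo with a feasible sample size practically never observes the event, while an importance-sampling change of measure concentrated on the rare region estimates it sharply. The mean-shift proposal below is a generic variance-reduction illustration, not the zero-variance approximation itself.

```python
# Rare-event estimation sketch: P(X > 4) for X ~ N(0,1). Crude sampling
# with ~1e5 draws almost always returns 0; shifting the proposal to N(4,1)
# and reweighting by the likelihood ratio gives a sharp estimate.

import math, random

TRUE_P = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # exact Gaussian tail

def importance_sampling(n, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(4.0, 1.0)                  # proposal N(4, 1)
        if x > 4.0:
            # likelihood ratio phi(x) / phi(x - 4) = exp(8 - 4x)
            total += math.exp(8.0 - 4.0 * x)
    return total / n

est = importance_sampling(200_000)
rel_err = abs(est - TRUE_P) / TRUE_P
```

The per-sample weights stay bounded on the event region, which is why the relative error of the estimate is small even though the target probability is of order 1e-5.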

  4. Tunneling approach and thermality in dispersive models of analogue gravity

    CERN Document Server

    Belgiorno, F; Piazza, F Dalla

    2014-01-01

    We set up a tunneling approach to the analogue Hawking effect in the case of models of analogue gravity which are affected by dispersive effects. An effective Schroedinger-like equation for the basic scattering phenomenon IN->P+N*, where IN is the incident mode, P is the positive-norm reflected mode, and N* is the negative-norm one signalling particle creation, is derived, aimed at an approximate description of the phenomenon. Horizons and barrier penetration manifestly play a key role in giving rise to pair creation. The non-dispersive limit is also correctly recovered. Drawbacks of the model are pointed out and a possible ad hoc solution is suggested.

  5. Ordered LOGIT Model approach for the determination of financial distress.

    Science.gov (United States)

    Kinay, B

    2010-01-01

    Nowadays, as a result of global competition, numerous companies face financial distress. It is quite important to predict such problems and to take proactive measures against them. Thus, the prediction of crisis and financial distress is essential for revealing the financial condition of companies. In this study, financial ratios of 156 industrial firms quoted on the Istanbul Stock Exchange are used, and probabilities of financial distress are predicted by means of an ordered logit regression model. By means of Altman's Z score, the dependent variable is composed by scaling the level of risk. Thus, a model that can serve as an early warning system and predict financial distress is proposed.
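The construction of the ordered dependent variable can be sketched directly: compute Altman's original (1968) Z score and scale it into ordered risk classes. The three-zone cut-offs (1.81 and 2.99) are Altman's published ones; the example ratios below are invented, and the study's exact scaling may differ.

```python
# Sketch of the dependent-variable construction: Altman's original Z score
# bucketed into ordered risk classes, suitable as the outcome of an
# ordered logit model. Example ratio values are hypothetical.

def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman (1968) Z score for public manufacturing firms."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def risk_class(z):
    """Ordered outcome: 0 = safe, 1 = grey zone, 2 = distress zone."""
    if z < 1.81:
        return 2
    if z < 2.99:
        return 1
    return 0

z = altman_z(0.1, 0.05, 0.03, 0.4, 0.9)   # a weak firm: z ~ 1.43
```

An ordered logit would then model the probability of each class from the underlying financial ratios, respecting the ordering of the risk levels.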

  6. Performance optimization of Jatropha biodiesel engine model using Taguchi approach

    Energy Technology Data Exchange (ETDEWEB)

    Ganapathy, T.; Murugesan, K.; Gakkhar, R.P. [Mechanical and Industrial Engineering Department, Indian Institute of Technology Roorkee, Roorkee 247 667 (India)

    2009-11-15

    This paper proposes a methodology for thermodynamic model analysis of a Jatropha biodiesel engine in combination with Taguchi's optimization approach to determine the optimum engine design and operating parameters. A thermodynamic model based on a two-zone Wiebe heat release function has been employed to simulate the Jatropha biodiesel engine performance. Among the important engine design and operating parameters, 10 critical parameters were selected, assuming interactions between pairs of parameters. Using linear graph theory and the Taguchi method, an L16 orthogonal array has been utilized to determine the layout of the engine test trials. In order to maximize the performance of the Jatropha biodiesel engine, the signal to noise ratio (SNR) related to the higher-the-better (HTB) quality characteristic has been used. The present methodology correctly predicted the compression ratio, Wiebe heat release constants and combustion zone duration as the critical parameters that affect the performance of the engine compared to other parameters. (author)
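The higher-the-better signal-to-noise ratio used for this kind of Taguchi analysis has the standard form SNR = -10 log10((1/n) * sum(1/y_i^2)), so larger and more consistent responses score higher. The trial results below are invented for illustration; the study's actual response values are not reproduced here.

```python
# Standard Taguchi higher-the-better SNR, applied to invented repeat
# measurements from two hypothetical orthogonal-array trials.

import math

def snr_higher_the_better(ys):
    n = len(ys)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / n)

trial_a = [30.1, 30.4, 29.8]   # e.g. brake thermal efficiency repeats, %
trial_b = [25.2, 26.0, 24.9]
better = snr_higher_the_better(trial_a) > snr_higher_the_better(trial_b)
```

For constant responses the formula reduces to 20 log10(y), so three identical readings of 10 give exactly 20 dB, a handy check of an implementation.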

  7. Wind Turbine Noise Propagation Modelling: An Unsteady Approach

    Science.gov (United States)

    Barlas, E.; Zhu, W. J.; Shen, W. Z.; Andersen, S. J.

    2016-09-01

    Wind turbine sound generation and propagation phenomena are inherently time dependent, hence tools that incorporate the dynamic nature of these two issues are needed for accurate modelling. In this paper, we investigate the sound propagation from a wind turbine by considering the effects of the unsteady flow around it and time dependent source characteristics. For the acoustics modelling we employ the Parabolic Equation (PE) method, while Large Eddy Simulation (LES) as well as synthetically generated turbulence fields are used to generate the medium flow upon which sound propagates. Unsteady acoustic simulations are carried out for three incoming wind shear profiles and various turbulence intensities, using a moving source approach to mimic the rotating turbine blades. The focus of the present paper is to study the near and far field amplitude modulation characteristics and the time evolution of Sound Pressure Level (SPL).

  8. New modeling approach for bounding flight in birds.

    Science.gov (United States)

    Sachs, Gottfried; Lenz, Jakob

    2011-12-01

    A new modeling approach is presented which accounts for the unsteady motion features and dynamics characteristics of bounding flight. For this purpose, a realistic mathematical model is developed to describe the flight dynamics of a bird with regard to a motion which comprises flapping and bound phases involving acceleration and deceleration as well as, simultaneously, pull-up and push-down maneuvers. Furthermore, a mathematical optimization method is used for determining that bounding flight mode which yields the minimum energy expenditure per range. Thus, it can be shown to what extent bounding flight is aerodynamically superior to continuous flapping flight, yielding a reduction in the energy expenditure in the speed range practically above the maximum range speed. Moreover, the role of the body lift for the efficiency of bounding flight is identified and quantified. Introducing an appropriate non-dimensionalization of the relations describing the bird's flight dynamics, results of generally valid nature are derived for the addressed items.

  9. Kinetics approach to modeling of polymer additive degradation in lubricants

    Institute of Scientific and Technical Information of China (English)

    Kudish, Ilya I.; Airapetyan, Ruben G.; Covitch, Michael J.

    2001-01-01

    A kinetics problem for a degrading polymer additive dissolved in a base stock is studied. The polymer degradation may be caused by the combination of such lubricant flow parameters as pressure, elongational strain rate, and temperature, as well as lubricant viscosity and the polymer characteristics (dissociation energy, bead radius, bond length, etc.). A fundamental approach to the problem of modeling mechanically induced polymer degradation is proposed. The polymer degradation is modeled on the basis of a kinetic equation for the density of the statistical distribution of polymer molecules as a function of their molecular weight. The integrodifferential kinetic equation for polymer degradation is solved numerically. The effects of pressure, elongational strain rate, temperature, and lubricant viscosity on the process of lubricant degradation are considered. The increase of pressure promotes fast degradation while the increase of temperature delays degradation. A comparison of a numerically calculated molecular weight distribution with an experimental one obtained in bench tests showed that they are in excellent agreement with each other.
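The kinetic equation solved in the paper is integrodifferential; a heavily simplified discrete analogue can still show the bookkeeping such models involve. The sketch below is not the authors' model: it assumes binary chain scission on a discrete molecular-weight grid, with a scission rate proportional to chain length, and checks that total polymer mass is conserved.

```python
# Simplified scission kinetics (an illustrative assumption, not the
# authors' integrodifferential model): chains of weight m break at a rate
# proportional to m, producing two near-equal halves.

def step(w, k, dt):
    """w[m] = number density of chains of weight index m (weight m + 1)."""
    out = list(w)
    for m in range(1, len(w)):                 # weight >= 2 can break
        weight = m + 1
        broken = k * weight * w[m] * dt        # scission rate ~ chain length
        out[m] -= broken
        half = weight // 2
        out[half - 1] += broken                # one fragment
        out[weight - half - 1] += broken       # the complementary fragment
    return out

w = [0.0] * 16
w[15] = 1.0                                    # start: all chains of weight 16
mass0 = sum((m + 1) * x for m, x in enumerate(w))
for _ in range(200):
    w = step(w, k=0.02, dt=0.1)
mass1 = sum((m + 1) * x for m, x in enumerate(w))
```

Each scission replaces one chain of weight m with fragments of weights floor(m/2) and ceil(m/2), so total mass is conserved exactly while the number of chains grows, shifting the distribution toward lower molecular weights as in the degradation experiments.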

  10. A Mixed Approach for Modeling Blood Flow in Brain Microcirculation

    Science.gov (United States)

    Peyrounette, M.; Lorthois, S.; Davit, Y.; Quintard, M.

    2014-12-01

    We have previously demonstrated [1] that the vascular system of the healthy human brain cortex is a superposition of two structural components, each corresponding to a different spatial scale. At small scale, the vascular network has a capillary structure, which is homogeneous and space-filling over a cut-off length. At larger scale, veins and arteries conform to a quasi-fractal branched structure. This structural duality is consistent with the functional duality of the vasculature, i.e. distribution and exchange. From a modeling perspective, this can be viewed as the superposition of: (a) a continuum model describing slow transport in the small-scale capillary network, characterized by a representative elementary volume and effective properties; and (b) a discrete network approach [2] describing fast transport in the arterial and venous network, which cannot be homogenized because of its fractal nature. This problem is analogous to modeling problems encountered in geological media, e.g., in petroleum engineering, where fast conducting channels (wells or fractures) are embedded in a porous medium (reservoir rock). An efficient method to reduce the computational cost of fracture/continuum simulations is to use relatively large grid blocks for the continuum model. However, this also makes it difficult to accurately couple both structural components. In this work, we solve this issue by adapting the "well model" concept used in petroleum engineering [3] to brain-specific 3-D situations. We obtain a unique linear system of equations describing the discrete network, the continuum and the well model coupling. Results are presented for realistic geometries and compared with a non-homogenized small-scale network model of an idealized periodic capillary network of known permeability. [1] Lorthois & Cassot, J. Theor. Biol. 262, 614-633, 2010. [2] Lorthois et al., Neuroimage 54: 1031-1042, 2011. [3] Peaceman, SPE J. 18, 183-194, 1978.
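The "well model" coupling cited as [3] can be illustrated with Peaceman's classical formula: the exchange between a well (here, an arteriole or venule) and the coarse grid block containing it is q = WI * (p_block - p_well), where the well index WI uses the Peaceman equivalent radius, approximately 0.2 * dx for an isotropic square block. The sketch below folds fluid viscosity into the permeability factor for brevity, and all numerical values are hypothetical.

```python
# Peaceman-style well index sketch (viscosity folded into k for brevity;
# values are hypothetical, units schematic).

import math

def well_index(k, h, dx, r_well):
    """Well index for an isotropic square grid block of side dx."""
    r_eq = 0.2 * dx                      # Peaceman equivalent radius
    return 2.0 * math.pi * k * h / math.log(r_eq / r_well)

def well_rate(k, h, dx, r_well, p_block, p_well):
    return well_index(k, h, dx, r_well) * (p_block - p_well)

q = well_rate(k=1e-13, h=10.0, dx=100.0, r_well=0.1,
              p_block=2.0e7, p_well=1.5e7)   # q > 0: block feeds the well
```

Note that a coarser grid (larger dx) raises the equivalent radius and lowers the well index, which is exactly the coarse-block coupling difficulty the abstract describes.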

  11. A Workflow-Oriented Approach To Propagation Models In Heliophysics

    Directory of Open Access Journals (Sweden)

    Gabriele Pierantoni

    2014-01-01

    Full Text Available The Sun is responsible for the eruption of billions of tons of plasma and the generation of near light-speed particles that propagate throughout the solar system and beyond. If directed towards Earth, these events can be damaging to our technological infrastructure. Hence there is an effort to understand the cause of the eruptive events and how they propagate from Sun to Earth. However, the physics governing their propagation is not well understood, so there is a need to develop a theoretical description of their propagation, known as a Propagation Model, in order to predict when they may impact Earth. It is often difficult to define a single propagation model that correctly describes the physics of solar eruptive events, and even more difficult to implement models capable of catering for all these complexities and to validate them using real observational data. In this paper, we envisage that workflows offer both a theoretical and practical framework for a novel approach to propagation models. We define a mathematical framework that aims at encompassing the different modalities with which workflows can be used, and provide a set of generic building blocks written in the TAVERNA workflow language that users can use to build their own propagation models. Finally we test both the theoretical model and the composite building blocks of the workflow with a real Science Use Case that was discussed during the 4th CDAW (Coordinated Data Analysis Workshop) event held by the HELIO project. We show that generic workflow building blocks can be used to construct a propagation model that successfully describes the transit of solar eruptive events toward Earth and predicts a correct Earth-impact time.

  12. Drifting model approach to modeling based on weighted support vector machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 宋春林; 邵惠鹤

    2004-01-01

    This paper proposes a novel drifting modeling (DM) method. Briefly, we first employ an improved SVM algorithm named weighted support vector machines (W_SVMs), which is suitable for local learning, and then propose the DM method based on this algorithm. By applying the proposed modeling method to a Fluidized Catalytic Cracking Unit (FCCU), the simulation results show that the performance of the proposed approach is superior to that of a global modeling method based on standard SVMs.
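The drifting-model idea of weighting samples so that the fitted model tracks the current operating regime can be shown with a much simpler learner. The sketch below is a stand-in, not the authors' W_SVMs algorithm: weighted least squares in which recent samples receive exponentially larger weights, applied to a process whose behavior shifts partway through the data. The data and decay factor are invented.

```python
# Stand-in for the drifting-model idea (not W_SVMs): recency-weighted
# least squares tracking a process that drifts to a new regime.

def weighted_linear_fit(xs, ys, decay):
    """Fit y ~ a*x + b, weighting sample i by decay**(n-1-i) (newest = 1)."""
    n = len(xs)
    ws = [decay ** (n - 1 - i) for i in range(n)]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    a = cov / var
    return a, my - a * mx

# the process drifts: old regime y = x, new regime y = x + 5
xs = list(range(10))
ys = [x if x < 5 else x + 5 for x in xs]
a_recent, _ = weighted_linear_fit(xs, ys, decay=0.2)  # tracks new regime
a_global, _ = weighted_linear_fit(xs, ys, decay=1.0)  # plain least squares
```

With strong forgetting (decay = 0.2) the fitted slope stays close to the new regime's slope of 1, while the unweighted global fit is distorted by the level shift between regimes.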

  13. A quality risk management model approach for cell therapy manufacturing.

    Science.gov (United States)

    Lopez, Fabio; Di Bartolo, Chiara; Piazza, Tommaso; Passannanti, Antonino; Gerlach, Jörg C; Gridelli, Bruno; Triolo, Fabio

    2010-12-01

    International regulatory authorities view risk management as an essential production need for the development of innovative, somatic cell-based therapies in regenerative medicine. The available risk management guidelines, however, provide little guidance on specific risk analysis approaches and procedures applicable in clinical cell therapy manufacturing. This raises a number of problems. Cell manufacturing is a poorly automated process, prone to operator-introduced variations, and affected by heterogeneity of the processed organs/tissues and lot-dependent variability of reagent (e.g., collagenase) efficiency. In this study, the principal challenges faced in a cell-based product manufacturing context (i.e., high dependence on human intervention and absence of reference standards for acceptable risk levels) are identified and addressed, and a risk management model approach applicable to manufacturing of cells for clinical use is described for the first time. The use of the heuristic and pseudo-quantitative failure mode and effect analysis/failure mode and critical effect analysis risk analysis technique associated with direct estimation of severity, occurrence, and detection is, in this specific context, as effective as, but more efficient than, the analytic hierarchy process. Moreover, a severity/occurrence matrix and Pareto analysis can be successfully adopted to identify priority failure modes on which to act to mitigate risks. The application of this approach to clinical cell therapy manufacturing in regenerative medicine is also discussed.

  14. THE FAIRSHARES MODEL: AN ETHICAL APPROACH TO SOCIAL ENTERPRISE DEVELOPMENT?

    Directory of Open Access Journals (Sweden)

    Rory James Ridley-Duff

    2015-07-01

    Full Text Available This paper is based on the keynote address to the 14th International Association of Public and Non-Profit Marketing (IAPNM) conference. It explores the question "What impact do ethical values in the FairShares Model have on social entrepreneurial behaviour?" In the first part, three broad approaches to social enterprise are set out: co-operative and mutual enterprises (CMEs), social and responsible businesses (SRBs) and charitable trading activities (CTAs). The ethics that guide each approach are examined to provide a conceptual framework for examining FairShares as a case study. In the second part, findings are scrutinised in terms of the ethical values and principles that are activated when FairShares is applied in practice. The paper contributes to knowledge by giving an example of the way open-source technology (Loomio) has been used to translate 'espoused theories' into 'theories in use' to advance social enterprise development. The review of FairShares using the conceptual framework suggests there is a fourth approach based on multi-stakeholder co-operation to create 'associative democracy' in the workplace.

  15. An approach to model based testing of multiagent systems.

    Science.gov (United States)

    Ur Rehman, Shafiq; Nadeem, Aamer

    2015-01-01

    Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to design multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.
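The final step described (protocol graph to test paths under a coverage criterion) can be sketched with a greedy all-edges criterion. The protocol graph below is invented for illustration; its node names and transitions are not from the paper.

```python
# Greedy all-edges coverage on a small, invented protocol graph: generate
# test paths from the start node until every transition has been exercised.

def reachable_uncovered(graph, node, uncovered):
    """True if some uncovered edge can still be reached from `node`."""
    seen, stack = {node}, [node]
    while stack:
        x = stack.pop()
        for v in graph.get(x, []):
            if (x, v) in uncovered:
                return True
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def next_step(graph, node, uncovered):
    """Prefer a fresh edge; otherwise any step leading toward one."""
    for v in graph.get(node, []):
        if (node, v) in uncovered:
            return v
    for v in graph.get(node, []):
        if reachable_uncovered(graph, v, uncovered):
            return v
    return None

def edge_coverage_paths(graph, start):
    uncovered = {(u, v) for u, vs in graph.items() for v in vs}
    paths = []
    while uncovered and reachable_uncovered(graph, start, uncovered):
        node, path = start, [start]
        while True:
            v = next_step(graph, node, uncovered)
            if v is None:
                break
            uncovered.discard((node, v))
            path.append(v)
            node = v
        paths.append(path)
    return paths

# invented protocol: an agent requests, the actor replies or times out
protocol = {
    "start":   ["request"],
    "request": ["reply", "timeout"],
    "reply":   ["done"],
    "timeout": ["request"],
}
paths = edge_coverage_paths(protocol, "start")
```

Each returned path is a candidate test case; together they exercise every transition of the protocol graph, which is the all-edges criterion in its simplest form.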

  16. An Approach to Model Based Testing of Multiagent Systems

    Directory of Open Access Journals (Sweden)

    Shafiq Ur Rehman

    2015-01-01

    Full Text Available Autonomous agents act on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to designing multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agents and actors are considered to test the multiagent system. These interactions include percepts and actions, along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion.

  17. Right approach to 3D modeling using CAD tools

    Science.gov (United States)

    Baddam, Mounica Reddy

    The thesis provides a step-by-step methodology to enable an instructor using CAD tools to guide students through an understandable 3D modeling approach that will not only enhance their knowledge of the tool's usage but also enable them to achieve the desired result in less time. In practice, very little information is available on applying CAD skills in formal beginners' training sessions. Additionally, the advent of new software in the 3D domain makes staying up to date an increasingly difficult task. Keeping up with the industry's advanced requirements emphasizes the importance of more skilled hands in the field of CAD development, rather than just prioritizing manufacturing in terms of complex software features. The thesis analyses different 3D modeling approaches specific to the variety of CAD tools currently available in the market. Utilizing performance-time databases, learning curves have been generated to measure performance time, feature count, etc. Based on the results, improvement parameters have also been provided (Asperl, 2005).

  18. A New Approach in Regression Analysis for Modeling Adsorption Isotherms

    Directory of Open Access Journals (Sweden)

    Dana D. Marković

    2014-01-01

    Full Text Available Numerous regression approaches to isotherm parameter estimation appear in the literature. Real insight into the proper modeling pattern can be achieved only by testing methods on a very large number of cases. Experimentally, this cannot be done in a reasonable time, so the Monte Carlo simulation method was applied. The objective of this paper is to introduce and compare numerical approaches that involve different levels of knowledge about the noise structure of the analytical method used for initial and equilibrium concentration determination. Six levels of homoscedastic noise and five types of heteroscedastic noise precision models were considered. Performance of the methods was statistically evaluated based on median percentage error and mean absolute relative error in parameter estimates. The present study showed a clear distinction between two cases. When equilibrium experiments are performed only once, the winning error function for the homoscedastic case is ordinary least squares, while for the heteroscedastic case the use of orthogonal distance regression or Marquardt's percent standard deviation is suggested. It was found that when experiments are repeated three times, the simple method of weighted least squares performed as well as the more complicated orthogonal distance regression method.
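
    The distinction the study draws between ordinary and weighted least squares can be illustrated with a minimal sketch on synthetic linear data with heteroscedastic noise; the isotherm models and precision models in the paper are more elaborate, and all numbers below are made up:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical linearised isotherm: y = a*x + b, with noise whose
# standard deviation grows with the signal (heteroscedastic).
a_true, b_true = 2.0, 1.0
x = np.linspace(1, 10, 50)
sigma = 0.1 * (a_true * x + b_true)            # assumed known precision model
y = a_true * x + b_true + rng.normal(0, sigma)

X = np.column_stack([x, np.ones_like(x)])

# Ordinary least squares: every point weighted equally.
ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Weighted least squares: each residual scaled by 1/sigma_i.
w = 1.0 / sigma
wls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)

print("OLS estimate:", ols)
print("WLS estimate:", wls)
```

    With a correct precision model, the weighted fit generally recovers the parameters with lower variance than the unweighted one.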

  19. A Variational Approach to the Modeling of MIMO Systems

    Directory of Open Access Journals (Sweden)

    A. Jraifi

    2007-05-01

    Full Text Available Motivated by the study of quality-of-service optimization for multiple-input multiple-output (MIMO) systems in 3G (third generation) networks, we develop a method for modeling the MIMO channel ℋ. This method, which uses a statistical approach, is based on a variational form of the usual channel equation. The proposed equation is given by δ² = 〈δR|ℋ|δE〉 + 〈δR|δℋ|E〉, with scalar variable δ = ‖δR‖. The minimum distance δmin between received vectors |R〉 is used as the random variable to model the MIMO channel. This variable is of crucial importance for the performance of the transmission system, as it captures the degree of interference between neighboring vectors. We then use this approach to compute numerically the total probability of error with respect to the signal-to-noise ratio (SNR) and to predict the number of antennas. By fixing the SNR to a specific value, we extract information on the optimal number of MIMO antennas.

  20. A Simplified Approach to Multivariable Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Michael Short

    2015-01-01

    Full Text Available The benefits of applying the range of technologies generally known as Model Predictive Control (MPC) to the control of industrial processes have been well documented in recent years. One of the principal drawbacks to MPC schemes is the relatively high online computational burden when used with adaptive, constrained and/or multivariable processes, which has led some researchers and practitioners to seek simplified approaches to its implementation. To date, several schemes have been proposed based around a simplified 1-norm formulation of multivariable MPC, which is solved online using the simplex algorithm in both the unconstrained and constrained cases. In this paper a 2-norm approach to simplified multivariable MPC is formulated, which is solved online using a vector-matrix product or a simple iterative coordinate descent algorithm for the unconstrained and constrained cases, respectively. A CARIMA model is employed to ensure offset-free control, and a simple scheme to produce the optimal predictions is described. A small simulation study and further discussion help to illustrate that this quadratic formulation performs well and can be considered a useful adjunct to its linear counterpart, while still retaining beneficial features such as ease of computer-based implementation.
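
    The constrained case mentioned above can be sketched with a generic cyclic coordinate descent on a small box-constrained quadratic program; the CARIMA-based prediction setup is omitted and all matrices and bounds below are illustrative, not the paper's formulation:

```python
import numpy as np

def coordinate_descent_qp(H, g, lo, hi, iters=200):
    """Minimise 0.5*u'Hu + g'u subject to lo <= u <= hi by cyclic
    coordinate descent with clipping (H assumed positive definite)."""
    u = np.zeros(len(g))
    for _ in range(iters):
        for i in range(len(g)):
            # optimal u[i] with the other coordinates held fixed
            r = g[i] + H[i] @ u - H[i, i] * u[i]
            u[i] = np.clip(-r / H[i, i], lo[i], hi[i])
    return u

# Hypothetical two-move predictive control problem with input limits
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([-8.0, -6.0])
lo, hi = np.array([-1.0, -1.0]), np.array([1.5, 1.5])

u = coordinate_descent_qp(H, g, lo, hi)
print(u)
```

    In the unconstrained case the same problem reduces to a single linear solve (a vector-matrix product once the inverse is precomputed).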

  1. Thin inclusion approach for modelling of heterogeneous conducting materials

    Energy Technology Data Exchange (ETDEWEB)

    Lavrov, Nikolay [Davenport University, 4801 Oakman Boulevard, Dearborn, MI 48126 (United States); Smirnova, Alevtina; Gorgun, Haluk; Sammes, Nigel [University of Connecticut, Department of Materials Science and Engineering, Connecticut Global Fuel Center, 44 Weaver Road, Unit 5233, Storrs, CT 06269 (United States)

    2006-04-21

    Experimental data show that the heterogeneous nanostructure of solid oxide and polymer electrolyte fuel cells can be approximated as an infinite set of fiber-like or penny-shaped inclusions in a continuous medium. Inclusions can be arranged in a cluster mode and in regular or random order. In the newly proposed theoretical model of nanostructured material, most attention is paid to the small aspect ratio of structural elements as well as to some model problems of electrostatics. The proposed integral equation for the electric potential caused by the charge distributed over a single circular or elliptic cylindrical conductor of finite length, as a single unit of a nanostructured material, has been asymptotically simplified for small aspect ratio and solved numerically. The result demonstrates that the surface density changes slightly in the middle part of the thin domain and has boundary layers localized near the edges. It is anticipated that the contribution of the boundary-layer solution to the surface density is significant and cannot be governed by the classic equation for a smooth linear charge. The role of the cross-section shape is also investigated. The proposed approach is simple and robust, and allows extension to either regular or irregular systems of various inclusions. It can be used for the development of systems of conducting inclusions, which are commonly present in the nanostructured materials used for solid oxide and polymer electrolyte fuel cell (PEMFC) applications. (author)

  2. A Modeling Approach for Plastic-Metal Laser Direct Joining

    Science.gov (United States)

    Lutey, Adrian H. A.; Fortunato, Alessandro; Ascari, Alessandro; Romoli, Luca

    2017-09-01

    Laser processing has been identified as a feasible approach to the direct joining of metal and plastic components without the need for adhesives or mechanical fasteners. The present work develops a modeling approach for conduction and transmission laser direct joining of these materials based on multi-layer optical propagation theory and numerical heat flow simulation. The aim of this methodology is to predict process outcomes from the calculated joint interface and upper surface temperatures. Three representative cases are considered for model verification, including conduction joining of PBT and aluminum alloy, transmission joining of optically transparent PET and stainless steel, and transmission joining of semi-transparent PA 66 and stainless steel. Conduction direct laser joining experiments are performed on black PBT and 6082 anticorodal aluminum alloy, achieving shear loads of over 2000 N with specimens of 2 mm thickness and 25 mm width. Comparison with simulation results shows that consistently high strength is achieved where the peak interface temperature is above the plastic degradation temperature. Comparison of transmission joining simulations and published experimental results confirms these findings and highlights the influence of plastic layer optical absorption on process feasibility.

  3. Modelling the Heat Consumption in District Heating Systems using a Grey-box approach

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Madsen, Henrik

    2006-01-01

    The approach involves identification of an overall model structure followed by data-based modelling, whereby the details of the model are identified. This approach is sometimes called grey-box modelling, but the specific approach used here does not require states to be specified. Overall, the paper demonstrates the power of the grey-box approach. (c) 2005 Elsevier B.V. All rights reserved.

  5. Lithium battery aging model based on Dakin's degradation approach

    Science.gov (United States)

    Baghdadi, Issam; Briat, Olivier; Delétage, Jean-Yves; Gyan, Philippe; Vinassa, Jean-Michel

    2016-09-01

    This paper proposes and validates a calendar and power-cycling aging model for two different lithium battery technologies. The model development is based on data from the earlier SIMCAL and SIMSTOCK projects, in which the effect of battery state of charge, temperature and current magnitude on aging was studied on a large panel of different battery chemistries. In this work, the data are analyzed using Dakin's degradation approach: the logarithms of the battery capacity fade and of the resistance increase evolve linearly over aging. The slopes identified from these straight lines correspond to battery aging rates. Thus, an expression for the battery aging rate as a function of the aging factors was deduced and found to be governed by Eyring's law. The proposed model simulates the capacity fade and resistance increase as functions of the influencing aging factors. Its expansion using Taylor series is consistent with semi-empirical models based on the square root of time, which are widely studied in the literature. Finally, the influence of current magnitude and temperature on aging was simulated. Interestingly, the aging rate increases strongly with decreasing temperature in the range -5 °C to 25 °C and with increasing temperature in the range 25 °C to 60 °C.
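
    Dakin's idea that the logarithm of the degradation measure evolves linearly in time, with an Eyring/Arrhenius-type rate constant, can be sketched on synthetic data; the pre-factor and activation energy below are assumptions for illustration, not values from the paper:

```python
import numpy as np

# Assumed Eyring/Arrhenius-type rate constant k(T) = A * exp(-Ea / (R*T))
A, Ea, R = 5e4, 4.0e4, 8.314        # illustrative pre-factor and activation energy

def aging_rate(T_kelvin):
    return A * np.exp(-Ea / (R * T_kelvin))

t = np.linspace(0, 1000, 50)                 # storage time (synthetic units)
T = 298.15                                   # 25 degC
capacity = np.exp(-aging_rate(T) * t)        # normalised capacity, ln(C) = -k*t

# Recover the aging rate from the straight line ln(C) = -k*t
slope, _ = np.polyfit(t, np.log(capacity), 1)
print("true k:", aging_rate(T), "fitted k:", -slope)
```

    Repeating the fit at several temperatures would recover the Eyring-law dependence of the aging rate, as described in the abstract.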

  6. An inverse problem approach to modelling coastal effluent plumes

    Science.gov (United States)

    Lam, D. C. L.; Murthy, C. R.; Miners, K. C.

    Formulated as an inverse problem, the diffusion parameters associated with length-scale-dependent eddy diffusivities can be viewed as the unknowns in the mass conservation equation for coastal zone transport problems. The values of the diffusion parameters can be optimized according to an error function incorporating observed concentration data. Examples are given for the Fickian, shear diffusion and inertial subrange diffusion models. Based on a new set of dye-plume data collected in the coastal zone off Bronte, Lake Ontario, it is shown that the predictions of turbulence closure models can be evaluated for different flow conditions. The choice of computational schemes for this diagnostic approach is based on tests with analytic solutions and observed data. It is found that the optimized shear diffusion model produced better agreement with observations for both high and low advective flows than, e.g., the unoptimized semi-empirical model Ky = 0.075 σy^1.2 described by Murthy and Kenney.
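
    The inverse-problem idea of optimizing diffusion parameters against observations can be sketched for a semi-empirical form Ky = a·σy^b, here fitted to synthetic "observed" diffusivities by linear least squares in log space; all data below are made up and the paper's error function over concentration fields is more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma_y = np.logspace(1, 3, 20)              # plume widths (m), synthetic
a_true, b_true = 0.075, 1.2
K_obs = a_true * sigma_y**b_true * np.exp(rng.normal(0, 0.05, sigma_y.size))

# log Ky = log a + b * log sigma_y  ->  ordinary linear least squares
G = np.column_stack([np.ones_like(sigma_y), np.log(sigma_y)])
(log_a, b), *_ = np.linalg.lstsq(G, np.log(K_obs), rcond=None)
print("fitted a, b:", np.exp(log_a), b)
```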

  7. A fuzzy approach to the Weighted Overlap Dominance model

    DEFF Research Database (Denmark)

    Franco de los Rios, Camilo Andres; Hougaard, Jens Leth; Nielsen, Kurt

    2013-01-01

    Decision support models are required to handle the various aspects of multi-criteria decision problems in order to help the individual understand their possible solutions. In this sense, such models have to be capable of aggregating and exploiting different types of measurements and evaluations in an interactive way, where input data can take the form of uniquely-graded or interval-valued information. Here we explore the Weighted Overlap Dominance (WOD) model from a fuzzy perspective and its outranking approach to decision support and multidimensional interval analysis. Firstly, imprecision measures are introduced for characterizing the type of uncertainty being expressed by intervals, examining at the same time how the WOD model handles both non-interval as well as interval data, and secondly, relevance degrees are proposed for obtaining a ranking over the alternatives. Hence, a complete methodology is presented for ordering and identifying the best alternatives under an interactive procedure that takes into account the natural imprecision and relevance of information.

  8. Replacement model of city bus: A dynamic programming approach

    Science.gov (United States)

    Arifin, Dadang; Yusuf, Edhi

    2017-06-01

    This paper develops a replacement model for city bus vehicles operated in Bandung City. The study is driven by real cases encountered by the Damri Company in its efforts to improve services to the public. The replacement model considers two policy alternatives: first, to keep the vehicles, and second, to replace them with new ones, taking into account operating costs, revenue, salvage value, and the acquisition cost of a new vehicle. A deterministic dynamic programming approach is used to solve the model. The optimization process was heuristically executed using empirical data from Perum Damri. The output of the model is the replacement schedule and the best policy once a vehicle has passed its economic life. Based on the results, the technical life of a bus is approximately 20 years, while the economic life averages nine years. This means that after a bus has been operated for nine years, managers should consider replacing it.
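
    The keep-or-replace decision can be sketched as a small deterministic dynamic program over vehicle age; the costs, revenues, salvage values and horizon below are illustrative assumptions, not Perum Damri's data:

```python
HORIZON = 6          # planning years (toy value)
MAX_AGE = 5          # technical age limit in this toy example
ACQUISITION = 100.0  # cost of a new bus (arbitrary units)

def operating_profit(age):      # revenue minus operating cost, falls with age
    return 60.0 - 12.0 * age

def salvage(age):               # resale value, falls with age
    return max(80.0 - 15.0 * age, 0.0)

def solve():
    # value[t][age] = best total profit from year t onward with a bus of that age
    value = [[0.0] * (MAX_AGE + 1) for _ in range(HORIZON + 1)]
    policy = [[None] * (MAX_AGE + 1) for _ in range(HORIZON)]
    for t in range(HORIZON - 1, -1, -1):
        for age in range(MAX_AGE + 1):
            keep = float("-inf")
            if age < MAX_AGE:   # cannot keep a bus past its technical life
                keep = operating_profit(age) + value[t + 1][age + 1]
            replace = (salvage(age) - ACQUISITION
                       + operating_profit(0) + value[t + 1][1])
            policy[t][age] = "keep" if keep >= replace else "replace"
            value[t][age] = max(keep, replace)
    return value, policy

value, policy = solve()
print("first-year decision for a 4-year-old bus:", policy[0][4])
```

    The backward recursion makes the economic life emerge from the data: the age at which "replace" first beats "keep" in the optimal policy.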

  9. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Full Text Available Abstract Background In recent years, large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increasing frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading patterns found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization and by defining the same importation of infected cases from international travel. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing of the order of a few days. The relative difference in epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to differences in the structure of the intra-population contact patterns of the approaches.
The age

  10. A computational toy model for shallow landslides: Molecular dynamics approach

    Science.gov (United States)

    Martelloni, Gianluca; Bagnoli, Franco; Massaro, Emanuele

    2013-09-01

    The aim of this paper is to propose a 2D computational algorithm for modeling the triggering and propagation of shallow landslides caused by rainfall. We use a molecular dynamics (MD) approach, similar to the discrete element method (DEM), which is suitable for modeling granular material and for observing the trajectory of a single particle, so as to possibly identify its dynamical properties. We consider that the triggering of shallow landslides is caused by the decrease of the static friction along the sliding surface due to water infiltration from rainfall. The triggering is thus governed by two conditions: (a) a threshold speed of the particles, and (b) a condition on the static friction between the particles and the slope surface, based on the Mohr-Coulomb failure criterion. The latter static condition is used in the geotechnical model to estimate the possibility of landslide triggering. The interaction force between particles is modeled, in the absence of experimental data, by means of a potential similar to the Lennard-Jones one. Viscosity is also introduced in the model, and for a large range of values of the model's parameters we observe a characteristic velocity pattern, with acceleration increments, typical of real landslides. The results of the simulations are quite promising: the energy and triggering-time distributions of local avalanches show power-law behaviour, analogous to the observed Gutenberg-Richter and Omori power laws for earthquakes. Finally, it is possible to apply the method of the inverse surface displacement velocity [4] for predicting the failure time.
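
    The Mohr-Coulomb triggering condition (b) can be sketched as a simple stability check in which rainfall-driven pore pressure reduces the effective normal stress; all stresses and the friction angle below are illustrative, not values from the paper:

```python
import math

def mohr_coulomb_stable(normal_stress, shear_stress, cohesion, friction_angle_deg):
    """Return True while the slope element holds, i.e. the driving shear
    stress stays below the Mohr-Coulomb shear strength tau_f = c + sigma'*tan(phi)."""
    strength = cohesion + normal_stress * math.tan(math.radians(friction_angle_deg))
    return shear_stress <= strength

# Hypothetical slope element: infiltration raises pore pressure, which lowers
# the effective normal stress and hence the available shear strength.
total_normal, shear, cohesion, phi = 50.0, 32.0, 5.0, 30.0   # kPa, degrees
for pore_pressure in (0.0, 10.0, 20.0):
    effective_normal = total_normal - pore_pressure
    print(pore_pressure, mohr_coulomb_stable(effective_normal, shear, cohesion, phi))
```

    In the full MD model this check is evaluated per particle along the sliding surface, together with the velocity-threshold condition (a).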

  11. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high-resolution glacier surface data allows predicting the topography below current glaciers by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps, we apply different ice thickness models using high-resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed with a focus on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers
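
    The core computation, subtracting modelled ice thickness from the glacier surface and filling the resulting overdeepenings, can be sketched in one dimension; the elevations and thicknesses below are made up, and the real analysis operates on 2D grids:

```python
import numpy as np

# Toy 1D glacier profile: bed = surface - ice thickness
surface = np.array([3000, 2950, 2900, 2870, 2860, 2880, 2840, 2800.0])   # m a.s.l.
thickness = np.array([0, 40, 120, 180, 200, 260, 110, 0.0])              # m
bed = surface - thickness

# Water ponding in a depression rises to the lower of the two confining rims:
# the running maximum of the bed from the left and from the right.
left_rim = np.maximum.accumulate(bed)
right_rim = np.maximum.accumulate(bed[::-1])[::-1]
lake_level = np.minimum(left_rim, right_rim)
lake_depth = np.clip(lake_level - bed, 0, None)   # >0 marks an overdeepening

print("bed:       ", bed)
print("lake depth:", lake_depth)
```

    Cells with positive depth mark a potential future lake; in 2D the same idea is usually implemented with a depression-filling (priority-flood) algorithm.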

  12. Ice Shelf Modeling: A Cross-Polar Bayesian Statistical Approach

    Science.gov (United States)

    Kirchner, N.; Furrer, R.; Jakobsson, M.; Zwally, H. J.

    2010-12-01

    Ice streams interlink glacial terrestrial and marine environments: embedded in a grounded inland ice mass such as the Antarctic Ice Sheet, or in the paleo ice sheets that covered extensive parts of the Eurasian and Amerasian Arctic, ice streams are major drainage agents facilitating the discharge of substantial portions of continental ice into the ocean. At their seaward side, ice streams can either extend onto the ocean as floating ice tongues (such as the Drygalski Ice Tongue, East Antarctica), or feed large ice shelves (as is the case for, e.g., the Siple Coast and the Ross Ice Shelf, West Antarctica). The flow behavior of ice streams has been recognized to be intimately linked with configurational changes in their attached ice shelves; in particular, ice shelf disintegration is associated with rapid ice stream retreat and increased mass discharge from the continental ice mass, contributing eventually to sea level rise. Investigations of ice stream retreat mechanisms are however incomplete if based on terrestrial records only: rather, the dynamics of ice shelves (and, eventually, the impact of the ocean on the latter) must be accounted for. However, since floating ice shelves leave hardly any traces behind when melting, uncertainty regarding the spatio-temporal distribution and evolution of ice shelves in times prior to instrumented and recorded observation is high, thus calling for a statistical modeling approach. Complementing ongoing large-scale numerical modeling efforts (Pollard & DeConto, 2009), we model the configuration of ice shelves using a Bayesian Hierarchical Modeling (BHM) approach. We adopt a cross-polar perspective, accounting for the fact that currently ice shelves exist mainly along the coastline of Antarctica (and are virtually non-existent in the Arctic), while Arctic Ocean ice shelves repeatedly impacted the Arctic ocean basin during former glacial periods. Modeled Arctic Ocean ice shelf configurations are compared with geological spatial

  13. An Approach to Enforcing Clark-Wilson Model in Role-based Access Control Model

    Institute of Scientific and Technical Information of China (English)

    LIANGBin; SHIWenchang; SUNYufang; SUNBo

    2004-01-01

    Using one security model to enforce another is a prospective solution to multi-policy support. In this paper, an approach to enforcing the Clark-Wilson data integrity model within the Role-based access control (RBAC) model is proposed. A highly feasible enforcement construction is presented. In this construction, a direct way to enforce the Clark-Wilson model is provided; the corresponding relations among users, transformation procedures, and constrained data items are strengthened; and the concepts of task and subtask are introduced to enhance support for least privilege. The proposed approach widens the applicability of RBAC. It offers a theoretical foundation for adopting the Clark-Wilson model in an RBAC system at small cost, meeting the requirements of multi-policy support and policy flexibility.

  14. Modeling quasi-static poroelastic propagation using an asymptotic approach

    Energy Technology Data Exchange (ETDEWEB)

    Vasco, D.W.

    2007-11-01

    Since the formulation of poroelasticity (Biot 1941) and its reformulation (Rice & Cleary 1976), there have been many efforts to solve the coupled system of equations. Perhaps because of the complexity of the governing equations, most of the work has been directed towards finding numerical solutions. For example, Lewis and co-workers published early papers (Lewis & Schrefler 1978; Lewis, Schrefler, & Simoni 1991) concerned with finite-element methods for computing consolidation and subsidence, and examining the importance of coupling. Other early work dealt with flow in a deformable fractured medium (Narasimhan & Witherspoon 1976; Noorishad, Tsang, & Witherspoon 1984). This effort eventually evolved into a general numerical approach for modeling fluid flow and deformation (Rutqvist, Wu, Tsang, & Bodvarsson 2002). As a result of this and other work, numerous coupled, computer-based algorithms have emerged, typically falling into one of three categories: one-way coupling, loose coupling, and full coupling (Minkoff, Stone, Bryant, Peszynska, & Wheeler 2003). In one-way coupling the fluid flow is modeled using a conventional numerical simulator and the resulting change in fluid pressures simply drives the deformation. In loosely coupled modeling distinct geomechanical and fluid flow simulators are run for a sequence of time steps and at the conclusion of each step information is passed between the simulators. In full coupling, the fluid flow and geomechanics equations are solved simultaneously at each time step (Lewis & Sukirman 1993; Lewis & Ghafouri 1997; Gutierrez & Lewis 2002). One disadvantage of a purely numerical approach to solving the governing equations of poroelasticity is that it is not clear how the various parameters interact and influence the solution.
Analytic solutions have an advantage in that respect; the relationship between the medium and fluid properties is clear from the form of the

  15. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Directory of Open Access Journals (Sweden)

    Freire Sergio M

    2011-10-01

    Full Text Available Abstract Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models, as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems, and current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach aligns it with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing

  16. Hubbard Model Approach to X-ray Spectroscopy

    Science.gov (United States)

    Ahmed, Towfiq

    We have implemented a Hubbard-model-based first-principles approach for real-space calculations of x-ray spectroscopy, which allows one to study the excited-state electronic structure of correlated systems. Theoretical understanding of many electronic features in d- and f-electron systems remains beyond the scope of conventional density functional theory (DFT). In this work our main effort is to go beyond the local density approximation (LDA) by incorporating the Hubbard model within the real-space multiple-scattering Green's function (RSGF) formalism. Historically, the first theoretical description of correlated systems was published by Sir Nevill Mott and others in 1937. They realized that the insulating gap and antiferromagnetism in the transition metal oxides are mainly caused by the strong on-site Coulomb interaction of the localized unfilled 3d orbitals. Even with the recent progress of first-principles methods (e.g., DFT) and model Hamiltonian approaches (e.g., the Hubbard-Anderson model), the electronic description of many of these systems remains a non-trivial combination of both. X-ray absorption near-edge spectra (XANES) and x-ray emission spectra (XES) are very powerful spectroscopic probes of many electronic features near the Fermi energy (EF) that are caused by the on-site Coulomb interaction of localized electrons. In this work we focus on three different cases of many-body effects due to the interaction of localized d electrons. Here, for the first time, we have applied the Hubbard model in the real-space multiple-scattering (RSGF) formalism for the calculation of x-ray spectra of Mott insulators (e.g., NiO and MnO). Secondly, we have implemented in our RSGF approach a doping-dependent self-energy constructed from a single-band Hubbard model for the overdoped high-Tc cuprate La2-xSrxCuO4. Finally, our RSGF calculation of XANES is performed with the spectral function from Lee and Hedin's charge-transfer satellite model. For all these cases our

  17. A Dynamic Approach to Modeling Dependence Between Human Failure Events

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory

    2015-09-01

    In practice, most HRA methods use direct dependence from THERP—the notion that error begets error, and one human failure event (HFE) may increase the likelihood of subsequent HFEs. In this paper, we approach dependence from a simulation perspective in which the effects of human errors are dynamically modeled. There are three key concepts that play into this modeling: (1) Errors are driven by performance shaping factors (PSFs). In this context, error propagation is not a result of the presence of an HFE yielding overall increases in subsequent HFEs. Rather, it is shared PSFs that cause dependence. (2) PSFs have qualities of lag and latency. These two qualities are not currently considered in HRA methods that use PSFs. Yet, to model the effects of PSFs, it is not simply a matter of identifying the discrete effects of a particular PSF on performance. The effects of PSFs must be considered temporally, as the PSFs will have a range of effects across the event sequence. (3) Finally, there is the concept of error spilling. When PSFs are activated, they not only have temporal effects but also lateral effects on other PSFs, leading to emergent errors. This paper presents the framework for tying together these dynamic dependence concepts.
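The central claim, that shared PSFs rather than HFEs themselves drive dependence, can be illustrated with a minimal Monte Carlo sketch. The error probabilities below are hypothetical and not drawn from any HRA method: two tasks are conditionally independent given a shared "stress" PSF, yet their errors come out positively correlated marginally.

```python
import random

def simulate(trials=20000, seed=7):
    """Two HFEs that are conditionally independent given a shared PSF
    ('stress'), yet marginally correlated because the PSF drives both
    error probabilities.  HEP values are illustrative."""
    rng = random.Random(seed)
    errs1, errs2 = [], []
    for _ in range(trials):
        stress_high = rng.random() < 0.5        # shared performance shaping factor
        p_err = 0.30 if stress_high else 0.05   # hypothetical HEPs
        errs1.append(rng.random() < p_err)
        errs2.append(rng.random() < p_err)
    return errs1, errs2

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

e1, e2 = simulate()
print(correlation(e1, e2))   # positive despite conditional independence
```

Extending the PSF with lag and decay over the event sequence, as the paper proposes, would spread this induced dependence across time as well.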

  18. Modeling of movement-related potentials using a fractal approach.

    Science.gov (United States)

    Uşakli, Ali Bülent

    2010-06-01

    In bio-signal applications, classification performance depends greatly on feature extraction, which is also the case for electroencephalogram (EEG) based applications. Feature extraction, and consequently classification, of EEG signals is not an easy task due to their inherent low signal-to-noise ratios and artifacts. EEG signals can be treated as the output of a non-linear dynamical (chaotic) system in the human brain and therefore they can be modeled by their dimension values. In this study, the variance fractal dimension technique is suggested for the modeling of movement-related potentials (MRPs). Experimental data sets consist of EEG signals recorded during the movements of right foot up, lip pursing and a simultaneous execution of these two tasks. The experimental results and performance tests show that the proposed modeling method can efficiently be applied to MRPs, especially in binary brain-computer interface applications that aim to assist severely disabled people, such as amyotrophic lateral sclerosis patients, in communication and/or controlling devices.
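A minimal implementation of the variance fractal dimension is sketched below (illustrative only; the lag set and window choices are assumptions, not those of the study). It uses the scaling Var[x(t+k) - x(t)] ~ k^(2H) and, for a 1-D signal, D = 2 - H.

```python
import math, random

def variance_fractal_dimension(signal, lags=(1, 2, 4, 8, 16)):
    """Estimate the variance fractal dimension of a 1-D signal from the
    scaling of increment variances: Var[x(t+k) - x(t)] ~ k^(2H), D = 2 - H."""
    log_k, log_var = [], []
    for k in lags:
        diffs = [signal[i + k] - signal[i] for i in range(len(signal) - k)]
        m = sum(diffs) / len(diffs)
        var = sum((d - m) ** 2 for d in diffs) / len(diffs)
        log_k.append(math.log(k))
        log_var.append(math.log(var))
    # least-squares slope of log-variance vs log-lag
    n = len(lags)
    mk, mv = sum(log_k) / n, sum(log_var) / n
    slope = (sum((a - mk) * (b - mv) for a, b in zip(log_k, log_var))
             / sum((a - mk) ** 2 for a in log_k))
    return 2.0 - slope / 2.0

# Sanity check on a random walk (H ~ 0.5, so D ~ 1.5)
rng = random.Random(0)
walk, x = [], 0.0
for _ in range(20000):
    x += rng.gauss(0.0, 1.0)
    walk.append(x)
print(variance_fractal_dimension(walk))
```

In a BCI setting, D computed over sliding windows of the MRP channels would serve as the feature vector fed to the classifier.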

  19. Hybrid empirical-theoretical approach to modeling uranium adsorption

    Energy Technology Data Exchange (ETDEWEB)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W

    2004-05-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r^2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
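The Freundlich isotherm q = K_f c^n is linear in log-log space, so both parameters can be recovered by ordinary least squares. The sketch below uses hypothetical K_f and n values for a synthetic isotherm, not the SDA sediment data.

```python
import math

def fit_freundlich(c, q):
    """Fit q = Kf * c**n by least squares on log q = log Kf + n log c."""
    lx = [math.log(ci) for ci in c]
    ly = [math.log(qi) for qi in q]
    m = len(lx)
    mx, my = sum(lx) / m, sum(ly) / m
    n = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    kf = math.exp(my - n * mx)
    return kf, n

# Synthetic isotherm with Kf = 2.5, n = 0.7 (hypothetical values)
c = [0.1, 0.5, 1.0, 5.0, 10.0, 50.0]
q = [2.5 * ci ** 0.7 for ci in c]
kf, n = fit_freundlich(c, q)
print(kf, n)   # recovers 2.5 and 0.7
```

The study's finding that n is common across samples while K_f scales with surface area means only K_f, via a surface-area measurement, need vary spatially in the transport model.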

  20. Sulfur Deactivation of NOx Storage Catalysts: A Multiscale Modeling Approach

    Directory of Open Access Journals (Sweden)

    Rankovic N.

    2013-09-01

    Full Text Available Lean NOx Trap (LNT) catalysts, a promising solution for reducing the noxious nitrogen oxide emissions from lean-burn and Diesel engines, are technologically limited by the presence of sulfur in the exhaust gas stream. Sulfur stemming from both fuels and lubricating oils is oxidized during the combustion event and mainly exists as SOx (SO2 and SO3) in the exhaust. Sulfur oxides interact strongly with the NOx trapping material of a LNT to form thermodynamically favored sulfate species, consequently leading to the blockage of NOx sorption sites and altering the catalyst operation. Molecular and kinetic modeling represent a valuable tool for predicting system behavior and evaluating catalytic performance. The present paper demonstrates how fundamental ab initio calculations can be used as a valuable source for designing kinetic models developed in the IFP Exhaust library, intended for vehicle simulations. The concrete example we chose to illustrate our approach was SO3 adsorption on the model NOx storage material, BaO. SO3 adsorption was described for various sites (terraces, surface steps, kinks and bulk) for a closer description of a real storage material. Additional rate and sensitivity analyses provided a deeper understanding of the poisoning phenomena.

  1. Mobile phone use while driving: a hybrid modeling approach.

    Science.gov (United States)

    Márquez, Luis; Cantillo, Víctor; Arellana, Julián

    2015-05-01

    The analysis of the effects that mobile phone use produces while driving is a topic of great interest for the scientific community. There is consensus that using a mobile phone while driving increases the risk of exposure to traffic accidents. The purpose of this research is to evaluate the drivers' behavior when they decide whether or not to use a mobile phone while driving. For that, a hybrid modeling approach that integrates a choice model with the latent variable "risk perception" was used. It was found that workers and individuals with the highest education level are more prone to use a mobile phone while driving than others. Also, "risk perception" is higher among individuals who have been previously fined and people who have been in an accident or almost been in an accident. It was also found that the tendency to use mobile phones while driving increases when the traffic speed reduces, but it decreases when the fine increases. Even though the urgency of the phone call is the most important explanatory variable in the choice model, the cost of the fine is an important attribute in order to control mobile phone use while driving. Copyright © 2015 Elsevier Ltd. All rights reserved.
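The reported signs can be illustrated with a bare-bones binary logit layered with a latent risk-perception term. All coefficient values below are hypothetical, chosen only to reproduce the directions reported in the abstract (urgency raises the probability of use; fines and perceived risk lower it); they are not the paper's estimates.

```python
import math

def p_use_phone(urgency, fine, risk_perception,
                b0=0.5, b_urg=1.2, b_fine=-0.01, b_risk=-0.8):
    """Binary logit sketch: probability of using the phone while driving.
    The latent variable 'risk_perception' enters the utility alongside
    observed attributes.  All coefficients are hypothetical."""
    v = b0 + b_urg * urgency + b_fine * fine + b_risk * risk_perception
    return 1.0 / (1.0 + math.exp(-v))

low_fine  = p_use_phone(urgency=1.0, fine=20.0,  risk_perception=0.5)
high_fine = p_use_phone(urgency=1.0, fine=200.0, risk_perception=0.5)
print(low_fine, high_fine)   # the probability drops as the fine rises
```

In the full hybrid model the latent variable is itself regressed on individual characteristics (prior fines, accident experience) and estimated jointly with the choice model.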

  2. A New Approach to Model Verification, Falsification and Selection

    Directory of Open Access Journals (Sweden)

    Andrew J. Buck

    2015-06-01

    Full Text Available This paper shows that a qualitative analysis, i.e., an assessment of the consistency of a hypothesized sign pattern for structural arrays with the sign pattern of the estimated reduced form, can always provide decisive insight into a model’s validity both in general and compared to other models. Qualitative analysis can show that it is impossible for some models to have generated the data used to estimate the reduced form, even though standard specification tests might show the model to be adequate. A partially specified structural hypothesis can be falsified by estimating as few as one reduced form equation. Zero restrictions in the structure can themselves be falsified. It is further shown how the information content of the hypothesized structural sign patterns can be measured using a commonly applied concept of statistical entropy. The lower the hypothesized structural sign pattern’s entropy, the more a priori information it proposes about the sign pattern of the estimated reduced form. As a hypothesized structural sign pattern has a lower entropy, it is more subject to type 1 error and less subject to type 2 error. Three cases illustrate the approach taken here.

  3. Modeling tropical river runoff: A time-dependent approach

    Institute of Scientific and Technical Information of China (English)

    Rashmi Nigam; Sudhir Nigam; Sushil K.Mittal

    2014-01-01

    Forecasting of rainfall and subsequent river runoff is important for many operational problems and applications related to hydrology. Modeling river runoff often requires rigorous mathematical analysis of vast historical data to arrive at reasonable conclusions. In this paper we have applied the stochastic method to characterize and predict river runoff of the perennial Kulfo River in southern Ethiopia. The autoregressive integrated moving average (ARIMA) approach from time series analysis is applied to mean monthly runoff data with 10- and 20-year spans. The varying length of the input runoff data is shown to influence the forecasting efficiency of the stochastic process. Preprocessing of the runoff time series data indicated that the data do not follow a seasonal pattern. Our forecasts were made using parsimonious non-seasonal ARIMA models and the results were compared to actual 10-year and 20-year mean monthly runoff data of the Kulfo River. Our results indicate that river runoff forecasts based upon the 10-year data are more accurate and efficient than the model based on the 20-year time series.
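The stochastic machinery can be illustrated with its simplest member, an AR(1) model fitted by least squares. This is a toy sketch on synthetic data, not the Kulfo River series or a full ARIMA identification; point forecasts from a fitted AR(1) decay geometrically toward the series mean.

```python
import random

def fit_ar1(x):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + e[t]
    (a minimal AR(1) illustration, not full ARIMA estimation)."""
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def forecast(x_last, phi, steps):
    """Point forecasts shrink geometrically toward the mean (zero here)."""
    out = []
    for _ in range(steps):
        x_last = phi * x_last
        out.append(x_last)
    return out

# Synthetic AR(1) series with phi = 0.8
rng = random.Random(42)
phi_true, x, xt = 0.8, [], 0.0
for _ in range(5000):
    xt = phi_true * xt + rng.gauss(0.0, 1.0)
    x.append(xt)
phi_hat = fit_ar1(x)
print(phi_hat)                    # close to 0.8
print(forecast(x[-1], phi_hat, 3))
```

The paper's comparison of 10-year versus 20-year calibration windows amounts to asking how much history improves, or degrades, such fitted coefficients when the process is not strictly stationary.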

  4. A message passing approach for general epidemic models

    CERN Document Server

    Karrer, Brian

    2010-01-01

    In most models of the spread of disease over contact networks it is assumed that the probabilities of disease transmission and recovery from disease are constant in time. In real life, however, this is far from true. In many diseases, for instance, recovery occurs at about the same time after infection for all individuals, rather than at a constant rate. In this paper, we study a generalized version of the SIR (susceptible-infected-recovered) model of epidemic disease that allows for arbitrary nonuniform distributions of transmission and recovery times. Standard differential equation approaches cannot be used for this generalized model, but we show that the problem can be reformulated as a time-dependent message passing calculation on the appropriate contact network. The calculation is exact on trees (i.e., loopless networks) or locally tree-like networks (such as random graphs) in the large system size limit. On non-tree-like networks we show that the calculation gives a rigorous bound on the size of disease...
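The motivating point, that recovery need not occur at a constant rate, is easy to demonstrate in simulation. The sketch below is an illustration only, not the authors' message-passing calculation: it runs a discrete-time SIR process on an Erdos-Renyi-style graph in which every node recovers exactly a fixed number of steps after infection, a degenerate recovery-time distribution that a constant-rate ODE model cannot represent.

```python
import random

def sir_fixed_recovery(adj, source, p_transmit=0.4, recovery_time=3, seed=1):
    """Discrete-time SIR in which every node recovers exactly
    `recovery_time` steps after infection."""
    rng = random.Random(seed)
    infected_at = {source: 0}   # node -> infection time
    recovered = set()
    t = 0
    while len(infected_at) > 0:
        t += 1
        newly = {}
        for node, t0 in list(infected_at.items()):
            for nb in adj[node]:
                if nb not in infected_at and nb not in recovered and nb not in newly:
                    if rng.random() < p_transmit:
                        newly[nb] = t
            if t - t0 >= recovery_time:
                recovered.add(node)
                del infected_at[node]
        infected_at.update(newly)
    return recovered

# Small random graph, purely illustrative
rng = random.Random(0)
n = 200
adj = {i: [] for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.03:
            adj[i].append(j)
            adj[j].append(i)
print(len(sir_fixed_recovery(adj, source=0)))   # final epidemic size
```

The message-passing formalism of the paper computes the distribution of such outcomes exactly on locally tree-like networks, without Monte Carlo sampling.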

  5. Masked areas in shear peak statistics. A forward modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-09

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels, etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass, the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  6. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    evaluate two common model reduction approaches in an empirical case. The first relies on a principal component analysis (PCA) used to construct new orthogonal variables, which are applied in the hedonic model. The second relies on a stepwise model reduction based on the variance inflation index and Akaike......’s information criteria. Our empirical application focuses on estimating the implicit price of forest proximity in a Danish case area, with a dataset containing 86 relevant variables. We demonstrate that the estimated implicit price for forest proximity, while positive in all models, is clearly sensitive...

  7. Modelling pathways to Rubisco degradation: a structural equation network modelling approach.

    Directory of Open Access Journals (Sweden)

    Catherine Tétard-Jones

    Full Text Available 'Omics analysis (transcriptomics, proteomics) quantifies changes in gene/protein expression, providing a snapshot of changes in biochemical pathways over time. Although tools such as modelling that are needed to investigate the relationships between genes/proteins already exist, they are rarely utilised. We consider the potential for using Structural Equation Modelling to investigate protein-protein interactions in a proposed Rubisco protein degradation pathway using previously published data from 2D electrophoresis and mass spectrometry proteome analysis. These informed the development of a prior model that hypothesised a pathway of Rubisco Large Subunit and Small Subunit degradation, producing both primary and secondary degradation products. While some of the putative pathways were confirmed by the modelling approach, the model also demonstrated features that had not been originally hypothesised. We used Bayesian analysis based on Markov Chain Monte Carlo simulation to generate output statistics suggesting that the model had replicated the variation in the observed data due to protein-protein interactions. This study represents an early step in the development of approaches that seek to enable the full utilisation of information regarding the dynamics of biochemical pathways contained within proteomics data. As these approaches gain attention, they will guide the design and conduct of experiments that enable 'Omics modelling to become a commonplace practice within molecular biology.

  8. A Multiscale Approach for Modeling Oxygen Production by Adsorption

    Directory of Open Access Journals (Sweden)

    Pavone D.

    2013-10-01

    Full Text Available Oxygen production processes using adsorbents for application to CCS technologies (Carbon Capture and Storage) offer potential cost benefits over classical cryogenics. In order to model adsorption processes, an approach using three size scales has been developed. This work is being conducted in the framework of the DECARBit European research project. The first scale is at the size of the oxygen adsorption bed, modelled as a vertical cylinder filled with pellets. Its length is 0.2 m (scale 10^-1 m). The bed is homogeneous in the transversal direction so that the problem is 1D (independent variables t, x). The physics in the process include gas species convection and dispersion (concentrations Cbk(t, x)), thermal convection and conduction (T(t, x)), and hydrodynamics (v(t, x)). The gas constituents involved are N2, O2, CO2 and H2O. The second scale is at the size of the pellets that fill the adsorber, which are assumed to be of spherical shape with a typical radius of 5 mm (scale 10^-3 m). The independent variable for the pellets is the radius “rp”. At a certain height x down the adsorber all the pellets are the same and are surrounded by the same gas composition, but inside the pellets the concentrations may vary. The state variables for the inner part of the pellets are the gas concentrations Cpk(t, x, rp). The pellets are so small that they are assumed to have a uniform temperature. This leads to a 2D transient model for the pellets linked to the 1D transient model for the bulk. The third scale looks into the detailed structure of the pellets, which are made of perovskite crystallites. The latter are assumed to be spherical. Oxygen adsorption occurs in the crystallites, which have a radius of about 0.5 µm (scale 10^-7 m). All the crystallites at the same radius in a pellet are supposed to behave the same and, because they are spherical, the only independent variable for a crystallite located at (x, rp) is its radius “rc”. The state variables for the crystallites

  9. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  10. METHODOLOGICAL APPROACH AND MODEL ANALYSIS FOR IDENTIFICATION OF TOURIST TRENDS

    Directory of Open Access Journals (Sweden)

    Neven Šerić

    2015-06-01

    Full Text Available The draw and diversity of the destination’s offer are antecedents of growth in tourism visits. The destination supply is differentiated through new, specialised tourism products. The usual approach consists of forming specialised tourism products in accordance with the existing tourism destination image. Another approach, prevalent in the practice of developed tourism destinations, is based on innovating the destination supply in accordance with global tourism trends. For this purpose, it is advisable to choose a method for monitoring and analysing tourism trends. The goal is to determine the actual trends governing target markets, differentiating whims from trends during the tourism preseason. When considering the return on investment, modifying the destination’s tourism offer on the basis of a tourism whim is a risky endeavour indeed. Adapting the destination’s supply to tourism whims can result in a shifted image, one that is unable to ensure long-term interest and growth in tourist visits. With regard to tourism trend research, and based on the research conducted, a model for evaluating tourism phenomena is proposed, one that determines whether a tourism phenomenon is a trend or a whim.

  11. A dynamic appearance descriptor approach to facial actions temporal modeling.

    Science.gov (United States)

    Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja

    2014-02-01

    Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments in Facial Action Coding System (FACS) Action Units (AUs): onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier for detecting the temporal segments on a frame-by-frame basis with Markov Models that enforce temporal consistency over the whole episode. The system is evaluated in detail on the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, and the GEMEP-FERA dataset in database-dependent experiments, and in cross-database experiments using the Cohn-Kanade and SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.

  12. A Gaussian graphical model approach to climate networks

    Energy Technology Data Exchange (ETDEWEB)

    Zerenner, Tanja, E-mail: tanjaz@uni-bonn.de [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Friederichs, Petra; Hense, Andreas [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany)

    2014-06-15

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
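The distinction between Pearson and partial correlations can be illustrated in the three-variable case, where the partial correlation has the closed form rho_xy|z = (r_xy - r_xz r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)); this is the quantity a GGM places on its edges (equivalently obtainable from the precision matrix). The data below are synthetic, not climate fields.

```python
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

def partial_corr(x, y, z):
    """Partial correlation of x and y given z: the direct dependence
    an edge in a Gaussian graphical model represents."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / ((1 - rxz ** 2) * (1 - ryz ** 2)) ** 0.5

# Two "grid points" x and y linked only through a common driver z
rng = random.Random(3)
z = [rng.gauss(0, 1) for _ in range(5000)]
x = [zi + rng.gauss(0, 0.5) for zi in z]
y = [zi + rng.gauss(0, 0.5) for zi in z]
print(pearson(x, y))          # large: an indirect dependence
print(partial_corr(x, y, z))  # near zero: no direct GGM edge
```

For many nodes the same information comes from the inverse covariance matrix, whose off-diagonal zeros encode the absent direct edges.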

  13. A Unified Approach to Model-Based Planning and Execution

    Science.gov (United States)

    Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)

    2000-01-01

    Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implications of the structure of the architecture, mainly in the area of reactivity and interaction between reactive and deliberative decision making. We conclude with related work and current status.

  14. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    In recent years process intensification (PI) has attracted much interest as a potential means of process improvement to meet the demands, such as, for sustainable production. A variety of intensified equipment are being developed that potentially creates options to meet these demands...... for focused validation of only the promising candidates in the second-stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model....... Here, established procedures for computer aided molecular design is adopted since combination of phenomena to form unit operations with desired objectives is, in principle, similar to combining atoms to form molecules with desired properties. The concept of the phenomena-based synthesis/design method...

  15. Agents, Bayes, and Climatic Risks - a modular modelling approach

    Directory of Open Access Journals (Sweden)

    A. Haas

    2005-01-01

    Full Text Available When insurance firms, energy companies, governments, NGOs, and other agents strive to manage climatic risks, it is by no means clear what the aggregate outcome should and will be. As a framework for investigating this subject, we present the LAGOM model family. It is based on modules depicting learning social agents. For managing climate risks, our agents use second order probabilities and update them by means of a Bayesian mechanism while differing in priors and risk aversion. The interactions between these modules and the aggregate outcomes of their actions are implemented using further modules. The software system is implemented as a series of parallel processes using the CIAMn approach. It is possible to couple modules irrespective of the language they are written in, the operating system under which they are run, and the physical location of the machine.
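The Bayesian mechanism for agents with differing priors can be sketched with a conjugate Beta update over an unknown loss probability. The numbers are illustrative only; LAGOM's actual second-order machinery and agent heterogeneity are richer than this.

```python
def beta_update(alpha, beta, successes, failures):
    """Conjugate Bayesian update of a Beta 'second-order' distribution
    over an unknown loss probability."""
    return alpha + successes, beta + failures

def mean(alpha, beta):
    """Posterior mean of the loss probability."""
    return alpha / (alpha + beta)

# Two agents with different priors observe the same loss record
optimist = (1.0, 9.0)        # prior mean 0.10
pessimist = (6.0, 4.0)       # prior mean 0.60
losses, no_losses = 30, 70   # hypothetical shared evidence

optimist = beta_update(*optimist, losses, no_losses)
pessimist = beta_update(*pessimist, losses, no_losses)
print(mean(*optimist), mean(*pessimist))  # both pulled toward 0.30
```

As evidence accumulates the two posteriors converge, yet residual differences in priors and risk aversion still produce different risk-management decisions, which is exactly the aggregate behavior the model family is built to study.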

  16. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  17. New Cutting Force Modeling Approach for Flat End Mill

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new mechanistic cutting force model for flat end milling using the instantaneous cutting force coefficients is proposed. An in-depth analysis shows that the total cutting forces can be separated into two terms: a nominal component independent of the runout and a perturbation component induced by the runout. The instantaneous value of the nominal component is used to calibrate the cutting force coefficients. With the help of the perturbation component and the cutting force coefficients obtained above, the cutter runout is identified. Based on simulation and experimental results, the validity of the identification approach is demonstrated. The advantage of the proposed method lies in the fact that a calibration performed with data from one cutting test under a specific regime can be applied over a wide range of cutting conditions.

  18. Correlations in a generalized elastic model: fractional Langevin equation approach.

    Science.gov (United States)

    Taloni, Alessandro; Chechkin, Aleksei; Klafter, Joseph

    2010-12-01

    The generalized elastic model (GEM) provides the evolution equation which governs the stochastic motion of several many-body systems in nature, such as polymers, membranes, and growing interfaces. On the other hand, a probe (tracer) particle in these systems performs a fractional Brownian motion due to the spatial interactions with the other system's components. The tracer's anomalous dynamics can be described by a fractional Langevin equation (FLE) with a space-time correlated noise. We demonstrate that the description given in terms of the GEM coincides with that furnished by the corresponding FLE, by showing that the correlation functions of the stochastic field obtained within the FLE framework agree with the corresponding quantities calculated from the GEM. Furthermore we show that the Fox H-function formalism appears to be very convenient to describe the correlation properties within the FLE approach.

  19. A Dynamic Linear Modeling Approach to Public Policy Change

    DEFF Research Database (Denmark)

    Loftis, Matthew; Mortensen, Peter Bjerre

    2017-01-01

Theories of public policy change, despite their differences, converge on one point of strong agreement: the relationship between policy and its causes can and does change over time. This consensus yields numerous empirical implications, but our standard analytical tools are inadequate for testing them. As a result, the dynamic and transformative relationships predicted by policy theories have been left largely unexplored in time-series analysis of public policy. This paper introduces dynamic linear modeling (DLM) as a useful statistical tool for exploring time-varying relationships in public policy. The paper offers a detailed exposition of the DLM approach and illustrates its usefulness with a time-series analysis of U.S. defense policy from 1957 to 2010. The results point the way for a new attention to dynamics in the policy process, and the paper concludes with a discussion of how...
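The core of a DLM of the kind described above is a regression whose coefficient is allowed to drift as a random walk, estimated with Kalman filter recursions. A minimal sketch (simulated data; the state-space specification and variances are illustrative, not the paper's):

```python
import numpy as np

# Dynamic linear model:  y_t = x_t * beta_t + eps_t,
#                        beta_t = beta_{t-1} + w_t   (random-walk coefficient)
# The Kalman filter tracks the time-varying coefficient beta_t.

def dlm_filter(y, x, var_eps=1.0, var_w=0.1, b0=0.0, P0=10.0):
    b, P = b0, P0
    betas = []
    for yt, xt in zip(y, x):
        P = P + var_w                    # predict: coefficient may have drifted
        S = xt * P * xt + var_eps        # innovation variance
        K = P * xt / S                   # Kalman gain
        b = b + K * (yt - xt * b)        # update with the new observation
        P = (1.0 - K * xt) * P
        betas.append(b)
    return np.array(betas)

rng = np.random.default_rng(0)
T = 200
x = rng.normal(size=T)
true_beta = np.where(np.arange(T) < 100, 1.0, 3.0)   # relationship shifts mid-sample
y = true_beta * x + rng.normal(scale=0.5, size=T)
betas = dlm_filter(y, x, var_eps=0.25, var_w=0.05)
print(betas[90], betas[-1])   # the filtered coefficient should track the shift
```

A static regression would average the two regimes away; the filtered path makes the structural change visible, which is the point of the DLM approach.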

  20. Systems approaches to computational modeling of the oral microbiome

    Directory of Open Access Journals (Sweden)

    Dimiter V. Dimitrov

    2013-07-01

Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into whole-system-level understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand diet-oral microbiome-host mucosal transcriptome interactions. In particular we focus on systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. These approaches have applications both for basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, and for human disease, where we can validate new mechanisms and biomarkers for prevention and treatment of chronic disorders.

  1. Modeling Educational Content: The Cognitive Approach of the PALO Language

    Directory of Open Access Journals (Sweden)

    M. Felisa Verdejo Maíllo

    2004-01-01

This paper presents a reference framework to describe educational material. It introduces the PALO Language as a cognitive-based approach to Educational Modeling Languages (EML). In accordance with recent trends for reusability and interoperability in learning technologies, EML constitutes an evolution of the current content-centered specifications of learning material, involving the description of learning processes and methods from a pedagogical and instructional perspective. The PALO Language thus provides a layer of abstraction for the description of learning material, including the description of learning activities, structure, and scheduling. The framework makes use of domain and pedagogical ontologies as a reusable and maintainable way to represent and store instructional content, and to provide a pedagogical level of abstraction in the authoring process.

  2. IONONEST—A Bayesian approach to modeling the lower ionosphere

    Science.gov (United States)

    Martin, Poppy L.; Scaife, Anna M. M.; McKay, Derek; McCrea, Ian

    2016-08-01

    Obtaining high-resolution electron density height profiles for the D region of the ionosphere as a well-sampled function of time is difficult for most methods of ionospheric measurement. Here we present a new method of using multifrequency riometry data for producing D region height profiles via inverse methods. To obtain these profiles, we use the nested sampling technique, implemented through our code, IONONEST. We demonstrate this approach using new data from the Kilpisjärvi Atmospheric Imaging Receiver Array (KAIRA) instrument and consider two electron density models. We compare the recovered height profiles from the KAIRA data with those from incoherent scatter radar using data from the European Incoherent Scatter Facility (EISCAT) instrument and find that there is good agreement between the two techniques, allowing for instrumental differences.
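Nested sampling, the inference technique IONONEST implements, can be illustrated on a toy problem. The sketch below (illustrative of the method only, not the IONONEST code) estimates the Bayesian evidence Z for a Gaussian likelihood under a uniform prior by repeatedly peeling off the worst live point and shrinking the prior volume:

```python
import math, random

# Toy nested sampling: estimate Z = (1/10) * ∫_{-5}^{5} exp(-x²/2) dx
# (≈ sqrt(2π)/10 ≈ 0.2507) for likelihood L(x) = exp(-x²/2)
# under a uniform prior on [-5, 5].

random.seed(1)

def loglike(x):
    return -0.5 * x * x

N = 300                                       # number of live points
live = [random.uniform(-5, 5) for _ in range(N)]
logL = [loglike(x) for x in live]

Z, X = 0.0, 1.0                               # evidence, remaining prior volume
for _ in range(2000):
    i = min(range(N), key=logL.__getitem__)   # worst live point
    X_new = X * math.exp(-1.0 / N)            # expected shrinkage per step
    Z += (X - X_new) * math.exp(logL[i])      # weight of the peeled shell
    # replace the worst point with a prior draw at higher likelihood
    Lmin = logL[i]
    while True:
        x = random.uniform(-5, 5)
        l = loglike(x)
        if l > Lmin:
            live[i], logL[i] = x, l
            break
    X = X_new

Z += X * sum(map(math.exp, logL)) / N         # contribution of remaining live points
print(round(Z, 3))                            # close to 0.251
```

Real implementations replace the rejection step with constrained sampling and work in log space; the shrinkage bookkeeping is the same.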

  3. Using graph approach for managing connectivity in integrative landscape modelling

    Science.gov (United States)

    Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger

    2013-04-01

In cultivated landscapes, many landscape elements such as field boundaries, ditches, or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. Within the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes) we developed a digital landscape representation that takes into account the spatial variabilities and connectivities of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connections and up/downstream connections - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and the graph can evolve during simulation (through modification of connections or elements). This graph approach allows greater genericity in landscape representation, supports management of complex connections, and facilitates development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers and modellers, and several graph tools are available, such as graph traversal algorithms and graph displays. Graph representation can be managed (i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format, or (ii) by using methods of the OpenFLUID-landr library, an OpenFLUID library relying on common open-source spatial libraries (ogr vector, geos topologic vector, and gdal raster libraries).
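The two connection types described above can be sketched in a few lines. This is a hypothetical illustration of the idea (names and structure are ours, not the OpenFLUID API): spatial units are nodes, and parent-child and up/downstream relations are kept as separate typed edge sets.

```python
from collections import defaultdict

class LandscapeGraph:
    """Spatial units as graph nodes with two typed connection sets."""

    def __init__(self):
        self.units = {}                   # unit id -> attributes
        self.down = defaultdict(list)     # up/downstream connections (flux)
        self.children = defaultdict(list) # parent-child connections (hierarchy)

    def add_unit(self, uid, **attrs):
        self.units[uid] = attrs

    def connect_downstream(self, src, dst):
        self.down[src].append(dst)

    def add_child(self, parent, child):
        self.children[parent].append(child)

    def downstream_path(self, uid):
        """Follow up/downstream connections from a unit (first branch only)."""
        path = [uid]
        while self.down[path[-1]]:
            path.append(self.down[path[-1]][0])
        return path

g = LandscapeGraph()
g.add_unit("field1", kind="field")
g.add_unit("ditch1", kind="ditch")
g.add_unit("reach1", kind="reach")
g.add_unit("catchment", kind="zone")
g.connect_downstream("field1", "ditch1")   # runoff from field enters the ditch
g.connect_downstream("ditch1", "reach1")   # ditch drains into the stream reach
g.add_child("catchment", "field1")         # hierarchical connection
print(g.downstream_path("field1"))         # ['field1', 'ditch1', 'reach1']
```

Keeping the hierarchy separate from the flux topology is what lets a traversal answer "where does this field drain to?" independently of "which zone contains it?".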

  4. A Neural Model of Face Recognition: a Comprehensive Approach

    Science.gov (United States)

    Stara, Vera; Montesanto, Anna; Puliti, Paolo; Tascini, Guido; Sechi, Cristina

Visual recognition of faces is an essential behavior of humans: we achieve optimal performance in everyday life, and it is this performance that allows us to establish the continuity of actors in our social life and to quickly identify and categorize people. This remarkable ability justifies the general interest in face recognition among researchers from different fields, and especially among designers of biometric identification systems able to recognize the features of a person's face against a background. Given the interdisciplinary nature of this topic, in this contribution we deal with face recognition through a comprehensive approach whose purpose is to reproduce some features of human performance, as evidenced by studies in psychophysics and neuroscience, relevant to face recognition. This approach views face recognition as an emergent phenomenon resulting from the nonlinear interaction of a number of different features. For this reason our model of face recognition has been based on a computational system implemented through an artificial neural network. This synergy between neuroscience and engineering allowed us to implement a model that has biological plausibility, performs the same tasks as human subjects, and gives a possible account of human face perception and recognition. In this regard the paper reports on an experimental study of the performance of a SOM-based neural network in a face recognition task, with reference both to its ability to learn to discriminate different faces and to its ability to recognize a face already encountered in the training phase when presented in a pose or with an expression differing from the one present in the training context.

  5. Multi-Model approach to reconstruct the Mediterranean Freshwater Evolution

    Science.gov (United States)

    Simon, Dirk; Marzocchi, Alice; Flecker, Rachel; Lunt, Dan; Hilgen, Frits; Meijer, Paul

    2016-04-01

Today the Mediterranean Sea is isolated from the global ocean by the Strait of Gibraltar. This restricted nature causes the Mediterranean basin to react more sensitively than the global ocean to climatic and tectonic phenomena. Not just eustatic sea level and regional river run-off, but also gateway tectonics and connectivity between sub-basins leave an enhanced fingerprint in its geological record. To understand its evolution, it is crucial to understand how these different effects are coupled. The Miocene-Pliocene sedimentary record of the Mediterranean shows alternations in composition and colour and has been astronomically tuned. Around the Miocene-Pliocene boundary the most extreme changes occur in the Mediterranean Sea. About 6% of the salt in the global ocean was deposited in the Mediterranean region, forming an approximately 2 km thick salt layer which is still present today. This extreme event is named the Messinian Salinity Crisis (MSC, 5.97-5.33 Ma). The gateway and climate evolution is not well constrained for this time, which makes it difficult to distinguish which of the above-mentioned drivers might have triggered the MSC. We therefore tackle this problem via a multi-model approach: (1) We calculate the Mediterranean freshwater evolution via 30 atmosphere-ocean-vegetation simulations (using HadCM3L), to which we fit a function using a regression model. This allows us to directly relate the orbital curves to evaporation, precipitation, and runoff. The resulting freshwater evolution can be directly correlated to other sedimentary and proxy records in the late Miocene. (2) By feeding the new freshwater evolution curve into a box/budget model we can predict the salinity and strontium evolution of the Mediterranean for a given Atlantic-Mediterranean gateway. (3) By comparing these results to the known salinity thresholds of gypsum and halite saturation of sea water, but also to the late Miocene Mediterranean strontium
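The box/budget logic in step (2) can be illustrated with a single-box salt balance (all parameter values below are rough illustrative numbers, not those of the study): with gateway inflow Q_in and outflow Q_out = Q_in - D, where D is the net evaporative freshwater deficit, salt conservation drives the basin salinity toward S* = Q_in * S_atl / (Q_in - D).

```python
# Single-box Mediterranean salinity budget (illustrative sketch):
#   V dS/dt = Q_in * S_atl - Q_out * S,   Q_out = Q_in - D

V = 3.7e15       # basin volume, m^3 (approximate)
S_atl = 36.2     # Atlantic salinity, g/kg
D = 2.0e4        # net freshwater deficit (E - P - R), m^3/s (illustrative)
Q_in = 1.0e5     # gateway inflow, m^3/s (restriction lowers this value)

S = S_atl
dt = 86400.0 * 30               # one-month time step
for _ in range(12 * 20000):     # integrate 20,000 years to steady state
    Q_out = Q_in - D
    S += (Q_in * S_atl - Q_out * S) / V * dt
print(round(S, 2))              # approaches S* = Q_in*S_atl/(Q_in - D) = 45.25
```

The key sensitivity is visible directly: shrinking the gateway (smaller Q_in) at a fixed deficit D raises S* sharply, which is why gateway restriction is a candidate trigger for evaporite deposition.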

  6. A model comparison approach shows stronger support for economic models of fertility decline.

    Science.gov (United States)

    Shenk, Mary K; Towner, Mary C; Kress, Howard C; Alam, Nurul

    2013-05-14

    The demographic transition is an ongoing global phenomenon in which high fertility and mortality rates are replaced by low fertility and mortality. Despite intense interest in the causes of the transition, especially with respect to decreasing fertility rates, the underlying mechanisms motivating it are still subject to much debate. The literature is crowded with competing theories, including causal models that emphasize (i) mortality and extrinsic risk, (ii) the economic costs and benefits of investing in self and children, and (iii) the cultural transmission of low-fertility social norms. Distinguishing between models, however, requires more comprehensive, better-controlled studies than have been published to date. We use detailed demographic data from recent fieldwork to determine which models produce the most robust explanation of the rapid, recent demographic transition in rural Bangladesh. To rigorously compare models, we use an evidence-based statistical approach using model selection techniques derived from likelihood theory. This approach allows us to quantify the relative evidence the data give to alternative models, even when model predictions are not mutually exclusive. Results indicate that fertility, measured as either total fertility or surviving children, is best explained by models emphasizing economic factors and related motivations for parental investment. Our results also suggest important synergies between models, implicating multiple causal pathways in the rapidity and degree of recent demographic transitions.
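The model-selection logic described above can be sketched with Akaike weights, which quantify the relative evidence for each candidate model even when their predictions overlap (the log-likelihoods and parameter counts below are made-up illustrative numbers, not the study's results):

```python
import math

# Hypothetical maximized log-likelihoods and parameter counts for three
# candidate models of fertility (illustrative values only).
models = {
    "mortality/extrinsic risk": (-1540.2, 4),
    "economic costs/benefits":  (-1521.7, 6),
    "cultural transmission":    (-1535.9, 5),
}

# AIC = 2k - 2*logL; Akaike weight_i ∝ exp(-(AIC_i - AIC_best)/2)
aic = {m: 2 * k - 2 * ll for m, (ll, k) in models.items()}
best = min(aic.values())
raw = {m: math.exp(-(a - best) / 2) for m, a in aic.items()}
total = sum(raw.values())
weights = {m: w / total for m, w in raw.items()}

for m in sorted(weights, key=weights.get, reverse=True):
    print(f"{m}: dAIC={aic[m] - best:5.1f}  weight={weights[m]:.3f}")
```

With these numbers the economic model carries essentially all of the weight despite its extra parameters, which is the kind of quantitative statement the authors use to compare non-nested theories.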

  7. Economic modelling approaches to cost estimates for the control of carbon dioxide emissions

    NARCIS (Netherlands)

    Zhang, Z.X.; Folmer, H.

    1998-01-01

    This article gives an assessment of the relative strengths and weaknesses of a variety of economic modelling approaches commonly used for cost estimates for limiting carbon emissions, including the ad hoc approach, dynamic optimization approach, input-output approach, macroeconomic approach, computa

  8. MATLAB/Simulink Based Study of Different Approaches Using Mathematical Model of Differential Equations

    National Research Council Canada - National Science Library

    Vijay Nehra

    2014-01-01

The present paper addresses different approaches used to derive mathematical models of first- and second-order systems, developing MATLAB script implementations and building corresponding Simulink models...

  9. A mechanism-based approach to modeling ductile fracture.

    Energy Technology Data Exchange (ETDEWEB)

    Bammann, Douglas J.; Hammi, Youssef; Antoun, Bonnie R.; Klein, Patrick A.; Foulk, James W., III; McFadden, Sam X.

    2004-01-01

Ductile fracture in metals has been observed to result from the nucleation, growth, and coalescence of voids. The evolution of this damage is inherently history dependent, affected by how time-varying stresses drive the formation of defect structures in the material. At some critically damaged state, the softening response of the material leads to strain localization across a surface that, under continued loading, becomes the faces of a crack in the material. Modeling localization of strain requires introduction of a length scale to make the energy dissipated in the localized zone well defined. In this work, a cohesive zone approach is used to describe the post-bifurcation evolution of material within the localized zone. The relations are developed within a thermodynamically consistent framework that incorporates temperature- and rate-dependent evolution relationships motivated by dislocation mechanics. As such, we do not prescribe a priori the evolution of tractions with opening displacements across the localized zone. The evolution of tractions is itself an outcome of the solution of particular initial boundary value problems. The stress and internal state of the material at the point of bifurcation provide the initial conditions for the subsequent evolution of the cohesive zone. The models we develop are motivated by in-situ scanning electron microscopy of three-point bending experiments using 6061-T6 aluminum and 304L stainless steel. The in-situ observations of the initiation and evolution of fracture zones reveal the scale over which the failure mechanisms act. In addition, these observations are essential for motivating the micromechanically based models of the decohesion process that incorporate the effects of loading mode mixity, temperature, and loading rate. The response of these new cohesive zone relations is demonstrated by modeling the three-point bending configuration used for the experiments. In addition, we survey other methods with the potential

  10. Teaching EFL Writing: An Approach Based on the Learner's Context Model

    Science.gov (United States)

    Lin, Zheng

    2017-01-01

    This study aims to examine qualitatively a new approach to teaching English as a foreign language (EFL) writing based on the learner's context model. It investigates the context model-based approach in class and identifies key characteristics of the approach delivered through a four-phase teaching and learning cycle. The model collects research…

  11. Monte Carlo path sampling approach to modeling aeolian sediment transport

    Science.gov (United States)

    Hardin, E. J.; Mitasova, H.; Mitas, L.

    2011-12-01

Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers against waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed, where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains influences itself, and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders-of-magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear, especially over complex landscapes. Current computational approaches have limitations as well; single-grain models are mathematically simple but computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient
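The saltation feedback described above (more airborne grains slow the wind, which limits further ejection) can be sketched with a toy stochastic model. All parameters below are illustrative round numbers, not the paper's model:

```python
import random

# Toy saltation feedback: grains are ejected while the shear velocity
# exceeds a threshold, but each airborne grain extracts momentum from
# the wind, so the airborne population self-limits.

random.seed(2)
u_star0 = 0.6           # undisturbed shear velocity, m/s (toy value)
u_thresh = 0.3          # ejection threshold, m/s (toy value)
drag_per_grain = 1e-4   # wind-speed reduction per airborne grain (toy value)

n_grains = 0
history = []
for step in range(500):
    u_star = max(u_star0 - drag_per_grain * n_grains, 0.0)
    if u_star > u_thresh:
        # stochastic ejection: more excess shear -> more new grains
        n_grains += sum(1 for _ in range(100)
                        if random.random() < (u_star - u_thresh))
    # airborne grains settle back to the bed with a small probability
    n_grains -= sum(1 for _ in range(n_grains) if random.random() < 0.01)
    history.append(n_grains)

# The population fluctuates around the steady state where ejection balances
# settling (about 1500 grains with these toy numbers).
print(history[-1])
```

Even this crude sketch shows the nonlinearity the abstract emphasizes: the transport rate is not set by the undisturbed wind alone but by the wind as modified by the transport itself.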

  12. Do recommender systems benefit users? a modeling approach

    Science.gov (United States)

    Yeung, Chi Ho

    2016-04-01

    Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
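The central feedback effect can be illustrated with a toy simulation (ours, not the authors' model): a recommender that estimates each user's taste from that user's purchase history. When users sometimes buy by personal preference, the history reveals the true taste; when they always follow recommendations, the history only echoes past recommendations and accuracy stays at chance.

```python
import random
from collections import Counter

random.seed(3)
n_users, n_items, rounds = 300, 20, 40
prefs = [random.randrange(n_items) for _ in range(n_users)]  # true tastes

def run(follow_prob):
    # each user starts with one random seed purchase
    history = [[random.randrange(n_items)] for _ in range(n_users)]
    hits = trials = 0
    for t in range(rounds):
        for u in range(n_users):
            rec = Counter(history[u]).most_common(1)[0][0]  # user's modal purchase
            if t >= rounds // 2:              # score accuracy after a burn-in
                hits += (rec == prefs[u])
                trials += 1
            bought = rec if random.random() < follow_prob else prefs[u]
            history[u].append(bought)
    return hits / trials

acc_mixed = run(0.5)    # user buys by own preference half the time
acc_follow = run(1.0)   # user always follows the recommendation
print(acc_mixed, acc_follow)   # mixed is far above chance; follow stays near 1/20
```

With pure following, recommendations are self-fulfilling and equivalent to an arbitrary initial draw, mirroring the paper's conclusion that purchases driven by personal preference are what make the system informative.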

  13. A Systematic Approach to Modelling Change Processes in Construction Projects

    Directory of Open Access Journals (Sweden)

    Ibrahim Motawa

    2012-11-01

Modelling change processes within construction projects is essential to implement changes efficiently. Incomplete information on the project variables at the early stages of projects leads to inadequate knowledge of future states and imprecision arising from ambiguity in project parameters. This lack of knowledge is considered among the main sources of changes in construction. Change identification and evaluation, in addition to predicting its impacts on project parameters, can help in minimising the disruptive effects of changes. This paper presents a systematic approach to modelling the change process within construction projects that helps improve change identification and evaluation. The approach represents the key decisions required to implement changes. The requirements of an effective change process are presented first. The variables defined for efficient change assessment and diagnosis are then presented. Assessment of construction changes requires an analysis of the project characteristics that lead to change and also an analysis of the relationship between change causes and effects. The paper concludes that, at the early stages of a project, projects with a high likelihood of change occurrence should have a control mechanism over the project characteristics that have a high influence on the project. It also concludes that, regarding the relationship between change causes and effects, the multiple causes of change should be modelled in a way that enables evaluating the change effects more accurately. The proposed approach is a framework for tackling such conclusions and can be used for evaluating change cases depending on the available information at the early stages of construction projects.

  14. Toward the design of sustainable biofuel landscapes: A modeling approach

    Science.gov (United States)

    Izaurralde, R. C.; Zhang, X.; Manowitz, D. H.; Sahajpal, R.

    2011-12-01

    Biofuel crops have emerged as promising feedstocks for advanced bioenergy production in the form of cellulosic ethanol and biodiesel. However, large-scale deployment of biofuel crops for energy production has the potential to conflict with food production and generate a myriad of environmental outcomes related to land and water resources (e.g., decreases in soil carbon storage, increased erosion, altered runoff, deterioration in water quality). In order to anticipate the possible impacts of biofuel crop production on food production systems and the environment and contribute to the design of sustainable biofuel landscapes, we developed a spatially-explicit integrated modeling framework (SEIMF) aimed at understanding, among other objectives, the complex interactions among land, water, and energy. The framework is a research effort of the DOE Great Lakes Bioenergy Research Center. The SEIMF has three components: (1) a GIS-based data analysis system, (2) the biogeochemical model EPIC (Environmental Policy Integrated Climate), and (3) an evolutionary multi-objective optimization algorithm for examining trade-offs between biofuel energy production and ecosystem responses. The SEIMF was applied at biorefinery scale to simulate biofuel production scenarios and the yield and environmental results were used to develop trade-offs, economic and life-cycle analyses. The SEIMF approach was also applied to test the hypothesis that growing perennial herbaceous species on marginal lands can satisfy a significant fraction of targeted demands while avoiding competition with food systems and maintaining ecosystem services.

  15. Modelling and simulating retail management practices: a first approach

    CERN Document Server

    Siebers, Peer-Olaf; Celia, Helen; Clegg, Chris

    2010-01-01

    Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop-floor and retail performance. Despite the fact we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizational capabilities in the future. Our multi-disciplinary research team has worked closely with one of the UK's top ten retailers to collect data and build an understanding of shop-floor operations and the key actors in a department (customers, staff, and managers). Based on this case study we have built and tested our first version of a retail branch agent-based simulation model where we have focused on how we can simulate the effects of people management practices on customer satisfaction and sales. In our experiments we hav...

  16. Forecasting wind-driven wildfires using an inverse modelling approach

    Directory of Open Access Journals (Sweden)

    O. Rios

    2014-06-01

A technology able to rapidly forecast wildfire dynamics would lead to a paradigm shift in the response to emergencies, providing the Fire Service with essential information about the ongoing fire. This paper presents and explores a novel methodology to forecast wildfire dynamics in wind-driven conditions, using real-time data assimilation and inverse modelling. The forecasting algorithm combines Rothermel's rate-of-spread theory with a perimeter expansion model based on Huygens' principle and solves the optimisation problem with a tangent linear approach and forward automatic differentiation. Its potential is investigated using synthetic data and evaluated in different wildfire scenarios. The results show the capacity of the method to quickly predict the location of the fire front with a positive lead time (ahead of the event) on the order of 10 min for a spatial scale of 100 m. The greatest strengths of our method are its lightness, speed, and flexibility. We specifically tailor the forecast to be efficient and computationally cheap so it can be used in mobile systems for field deployment and operational use. Thus, we put emphasis on producing a positive lead time and on the means to maximise it.
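A Huygens-type perimeter expansion of the kind mentioned above can be sketched as follows (a schematic illustration, not the paper's implementation): each vertex of a polygonal fire front moves along its outward normal at a rate of spread that is amplified downwind, so the head of the fire outruns the back.

```python
import math

def expand(front, ros, wind_factor, dt):
    """Move each vertex of a CCW-ordered front along its outward normal.
    Spread is amplified when the normal points downwind (+x direction)."""
    new, n = [], len(front)
    for i, (x, y) in enumerate(front):
        (x0, y0), (x1, y1) = front[i - 1], front[(i + 1) % n]
        tx, ty = x1 - x0, y1 - y0                 # tangent from neighbours
        norm = math.hypot(tx, ty) or 1.0
        nx, ny = ty / norm, -tx / norm            # outward normal (CCW ring)
        a = ros * (1.0 + wind_factor * max(nx, 0.0))
        new.append((x + a * nx * dt, y + a * ny * dt))
    return new

# circular ignition of radius 1 centred at the origin, 32 vertices
front = [(math.cos(2 * math.pi * t / 32), math.sin(2 * math.pi * t / 32))
         for t in range(32)]
for _ in range(10):
    front = expand(front, ros=0.5, wind_factor=2.0, dt=0.2)

xs = [p[0] for p in front]
print(round(max(xs), 2), round(min(xs), 2))  # head (downwind) runs farther than the back
```

In the inverse-modelling setting, parameters such as the rate of spread and the wind factor are exactly the quantities one would adjust so that the simulated perimeter matches observed front positions.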

  17. A mechanism-based approach for absorption modeling: the Gastro-Intestinal Transit Time (GITT) model.

    Science.gov (United States)

    Hénin, Emilie; Bergstrand, Martin; Standing, Joseph F; Karlsson, Mats O

    2012-06-01

    Absorption models used in the estimation of pharmacokinetic drug characteristics from plasma concentration data are generally empirical and simple, utilizing no prior information on gastro-intestinal (GI) transit patterns. Our aim was to develop and evaluate an estimation strategy based on a mechanism-based model for drug absorption, which takes into account the tablet movement through the GI transit. This work is an extension of a previous model utilizing tablet movement characteristics derived from magnetic marker monitoring (MMM) and pharmacokinetic data. The new approach, which replaces MMM data with a GI transit model, was evaluated in data sets where MMM data were available (felodipine) or not available (diclofenac). Pharmacokinetic profiles in both datasets were well described by the model according to goodness-of-fit plots. Visual predictive checks showed the model to give superior simulation properties compared with a standard empirical approach (first-order absorption rate + lag-time). This model represents a step towards an integrated mechanism-based NLME model, where the use of physiological knowledge and in vitro–in vivo correlation helps fully characterize PK and generate hypotheses for new formulations or specific populations.
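A transit-compartment absorption chain of the general kind such mechanism-based models build on can be sketched as below. The structure and every parameter value are illustrative placeholders, not the published GITT model: the dose passes through a chain of transit compartments (mimicking GI transit) before first-order absorption into a central compartment with elimination.

```python
def simulate(n_transit=5, ktr=2.0, ka=1.5, ke=0.3, dose=100.0,
             dt=0.01, t_end=24.0):
    """Forward-Euler simulation of transit -> gut -> central -> eliminated."""
    transit = [dose] + [0.0] * (n_transit - 1)
    gut, central = 0.0, 0.0
    times, conc = [], []
    t = 0.0
    while t < t_end:
        flows = [ktr * a for a in transit]          # first-order transit flows
        for i in range(n_transit):
            transit[i] += (-flows[i] + (flows[i - 1] if i else 0.0)) * dt
        gut += (flows[-1] - ka * gut) * dt           # absorption site
        central += (ka * gut - ke * central) * dt    # plasma compartment
        times.append(t)
        conc.append(central)
        t += dt
    return times, conc

times, conc = simulate()
tmax = times[conc.index(max(conc))]
print(round(tmax, 2))   # peak is delayed relative to simple first-order absorption
```

The chain delays and shapes the absorption phase (mean transit delay n_transit/ktr), which is what a plain first-order model with a lag time approximates only crudely, in line with the visual predictive checks reported above.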

  18. Evaluation of approaches focused on modelling of organic carbon stocks using the RothC model

    Science.gov (United States)

    Koco, Štefan; Skalský, Rastislav; Makovníková, Jarmila; Tarasovičová, Zuzana; Barančíková, Gabriela

    2014-05-01

The aim of current efforts in the European area is the protection of soil organic matter, which is included in all relevant documents related to the protection of soil. The use of modelling of organic carbon stocks for anticipated climate change, or for land management, can significantly help in short- and long-term forecasting of the state of soil organic matter. The RothC model can be applied over time periods of several years to centuries and has been tested in long-term experiments within a large range of soil types and climatic conditions in Europe. For the initialization of the RothC model, knowledge about the carbon pool sizes is essential. Pool size characterization can be obtained from equilibrium model runs, but this approach is time consuming and tedious, especially for larger-scale simulations. Due to this complexity we searched for new ways to simplify and accelerate this process. The paper presents a comparison of two approaches to SOC stock modelling in the same area. The modelling was carried out on the basis of unique inputs of land use, management, and soil data for each simulation unit separately. We modelled 1617 simulation units on a 1x1 km grid over the territory of the agroclimatic region Žitný ostrov in the southwest of Slovakia. The first approach represents the creation of groups of simulation units based on the evaluation of results for simulation units with similar input values. The groups were created after testing and validating the modelling results for individual simulation units against the results of modelling the average values of inputs for the whole group. Tests of the equilibrium model for intervals in the range of 5 t.ha-1 of initial SOC stock showed minimal differences in results compared with the result for the average value of the whole interval. Management input data for plant residues and farmyard manure for modelling carbon turnover were also the same for several simulation units. Combining these groups (intervals of initial
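The equilibrium ("spin-up") runs mentioned above can be illustrated with a one-pool soil-carbon model (RothC itself uses several pools; the rates below are hypothetical): with annual input I and first-order decomposition rate k, the stock relaxes toward the equilibrium C* = I/k, and doing this relaxation separately for every simulation unit is what makes initialization costly.

```python
# One-pool soil carbon spin-up:  dC/dt = I - k*C  ->  C* = I/k

I = 2.5    # annual carbon input, t C/ha/yr (illustrative)
k = 0.05   # decomposition rate constant, 1/yr (illustrative)

C = 10.0   # initial SOC stock, t C/ha
for year in range(2000):       # annual time steps until equilibrium
    C += I - k * C
print(round(C, 2))             # converges to C* = I/k = 50 t C/ha
```

Because C* depends only on the inputs and rates, simulation units with near-identical inputs reach near-identical equilibria, which is the rationale for grouping units instead of spinning up each one individually.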

  19. Genetic and Modeling Approaches Reveal Distinct Components of Impulsive Behavior.

    Science.gov (United States)

    Nautiyal, Katherine M; Wall, Melanie M; Wang, Shuai; Magalong, Valerie M; Ahmari, Susanne E; Balsam, Peter D; Blanco, Carlos; Hen, René

    2017-01-18

    Impulsivity is an endophenotype found in many psychiatric disorders including substance use disorders, pathological gambling, and attention deficit hyperactivity disorder. Two behavioral features often considered in impulsive behavior are behavioral inhibition (impulsive action) and delayed gratification (impulsive choice). However, the extent to which these behavioral constructs represent distinct facets of behavior with discrete biological bases is unclear. To test the hypothesis that impulsive action and impulsive choice represent statistically independent behavioral constructs in mice, we collected behavioral measures of impulsivity in a single cohort of mice using well-validated operant behavioral paradigms. Mice with manipulation of serotonin 1B receptor (5-HT1BR) expression were included as a model of disordered impulsivity. A factor analysis was used to characterize correlations between the measures of impulsivity and to identify covariates. Using two approaches, we dissociated impulsive action from impulsive choice. First, the absence of 5-HT1BRs caused increased impulsive action, but not impulsive choice. Second, based on an exploratory factor analysis, a two-factor model described the data well, with measures of impulsive action and choice separating into two independent factors. A multiple-indicator multiple-causes analysis showed that 5-HT1BR expression and sex were significant covariates of impulsivity. Males displayed increased impulsivity in both dimensions, whereas 5-HT1BR expression was a predictor of increased impulsive action only. These data support the conclusion that impulsive action and impulsive choice are distinct behavioral phenotypes with dissociable biological influences that can be modeled in mice. Our work may help inform better classification, diagnosis, and treatment of psychiatric disorders, which present with disordered impulsivity.Neuropsychopharmacology advance online publication, 18 January 2017; doi:10.1038/npp.2016.277.

  20. A developmental approach to learning causal models for cyber security

    Science.gov (United States)

    Mugan, Jonathan

    2013-05-01

    To keep pace with our adversaries, we must expand the scope of machine learning and reasoning to address the breadth of possible attacks. One approach is to employ an algorithm to learn a set of causal models that describes the entire cyber network and each host end node. Such a learning algorithm would run continuously on the system and monitor activity in real time. With a set of causal models, the algorithm could anticipate novel attacks, take actions to thwart them, and predict the second-order effects of its actions. Such an algorithm would, however, face a flood of information, and it would have to determine which streams of that flood were relevant in which situations. This paper will present the results of efforts toward the application of a developmental learning algorithm to the problem of cyber security. The algorithm is modeled on the principles of human developmental learning and is designed to allow an agent to learn about the computer system in which it resides through active exploration. Children are flexible learners who acquire knowledge by actively exploring their environment and making predictions about what they will find [1, 2], and our algorithm is inspired by the work of the developmental psychologist Jean Piaget [3]. Piaget described how children construct knowledge in stages and learn new concepts on top of those they already know. Developmental learning allows our algorithm to focus on subsets of the environment that are most helpful for learning given its current knowledge. In experiments, the algorithm was able to learn the conditions for file exfiltration and use that knowledge to protect sensitive files.

  1. Hybrid Modelling Approach to Prairie hydrology: Fusing Data-driven and Process-based Hydrological Models

    Science.gov (United States)

    Mekonnen, B.; Nazemi, A.; Elshorbagy, A.; Mazurek, K.; Putz, G.

    2012-04-01

    Modeling the hydrological response in prairie regions, characterized by flat and undulating terrain, and thus, large non-contributing areas, is a known challenge. The hydrological response (runoff) is the combination of the traditional runoff from the hydrologically contributing area and the occasional overflow from the non-contributing area. This study provides a unique opportunity to analyze the issue of fusing the Soil and Water Assessment Tool (SWAT) and Artificial Neural Networks (ANNs) in a hybrid structure to model the hydrological response in prairie regions. A hybrid SWAT-ANN model is proposed, where the SWAT component and the ANN module deal with the effective (contributing) area and the non-contributing area, respectively. The hybrid model is applied to the case study of the Moose Jaw watershed, located in southern Saskatchewan, Canada. As an initial exploration, a comparison between ANN and SWAT models is established based on daily runoff (streamflow) prediction accuracy using multiple error measures. This is done to identify the merits and drawbacks of each modeling approach. It was found that the SWAT model performs better during low flow periods but with degraded efficiency during periods of high flows. The case is different for the ANN model, as ANNs exhibit improved simulation during high flow periods but biased estimates during low flow periods. The modelling results show that the new hybrid SWAT-ANN model is capable of exploiting the strengths of both SWAT and ANN models in an integrated framework. The new hybrid SWAT-ANN model simulates daily runoff quite satisfactorily, with NSE measures of 0.80 and 0.83 during calibration and validation periods, respectively. Furthermore, an experimental assessment was performed to identify the effects of the ANN training method on the performance of the hybrid model as well as the parametric identifiability.
Overall, the results obtained in this study suggest that the fusion
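
The NSE values quoted in the record above (0.80 and 0.83) are Nash-Sutcliffe efficiencies; a minimal implementation of the measure:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(nse(obs, obs))              # perfect model -> 1.0
print(nse(obs, np.full(5, 3.0)))  # predicting the mean -> 0.0
```

Because the mean is a poor summary of flashy prairie streamflow, NSE tends to reward getting high-flow periods right, which is consistent with the complementary SWAT/ANN strengths described above.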

  2. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.
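
Additive toxicity, the baseline that the four models describe reasonably well, is often expressed through toxic units (concentration addition); a toy sketch in which the concentrations and EC50 values are hypothetical:

```python
def toxic_units(concentrations, ec50s):
    """Sum of toxic units: each metal's exposure concentration divided by
    its single-metal EC50. Under strict concentration addition, a mixture
    with total TU == 1 is predicted to produce an EC50-level effect."""
    return sum(c / e for c, e in zip(concentrations, ec50s))

# Hypothetical two-metal mixture: each metal present at half its own EC50.
tu = toxic_units([5.0, 20.0], [10.0, 40.0])
print(tu)  # 1.0 -> predicted as toxic as either metal alone at its EC50
```

Less-than-additive mixtures, which the record notes were harder to model, are exactly those whose observed effect falls below this TU-based prediction.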

  3. Physiological fidelity or model parsimony? The relative performance of reverse-toxicokinetic modeling approaches.

    Science.gov (United States)

    Rowland, Michael A; Perkins, Edward J; Mayo, Michael L

    2017-03-11

    Physiologically-based toxicokinetic (PBTK) models are often developed to facilitate in vitro to in vivo extrapolation (IVIVE) using a top-down, compartmental approach, favoring architectural simplicity over physiological fidelity despite the lack of general guidelines relating model design to dynamical predictions. Here we explore the impact of design choice (high vs. low fidelity) on chemical distribution throughout an animal's organ system. We contrast transient dynamics and steady states of three previously proposed PBTK models of varying complexity in response to chemical exposure. The steady states for each model were determined analytically to predict exposure conditions from tissue measurements. Steady state whole-body concentrations differ between models, despite identical environmental conditions, which originates from varying levels of physiological fidelity captured by the models. These differences affect the relative predictive accuracy of the inverted models used in exposure reconstruction to link effects-based exposure data with whole-organism response thresholds obtained from in vitro assay measurements. Our results demonstrate how disregarding physiological fidelity in favor of simpler models affects the internal dynamics and steady state estimates for chemical accumulation within tissues, which, in turn, poses significant challenges for the exposure reconstruction efforts that underlie many IVIVE methods. Developing standardized systems-level models for ecological organisms would not only ensure predictive consistency among future modeling studies, but also ensure pragmatic extrapolation of in vivo effects from in vitro data or modeling exposure-response relationships.
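
A one-compartment sketch of the steady-state inversion described in the record above. The rate constants are invented for illustration; real PBTK models couple many such compartments, which is precisely why their inverted steady states disagree.

```python
def steady_state(c_env, k_uptake, k_elim):
    """Steady-state internal concentration of a one-compartment model
    dC/dt = k_uptake * C_env - k_elim * C   (set dC/dt = 0)."""
    return k_uptake * c_env / k_elim

def reconstruct_exposure(c_internal, k_uptake, k_elim):
    """Inverted model: recover the exposure concentration from a tissue
    measurement, as in the exposure reconstruction step of IVIVE."""
    return k_elim * c_internal / k_uptake

c_ss = steady_state(c_env=2.0, k_uptake=3.0, k_elim=0.5)      # 12.0
print(reconstruct_exposure(c_ss, k_uptake=3.0, k_elim=0.5))   # 2.0
```

For a single compartment the inversion is exact; the record's point is that two structurally different models fit to the same data generally invert to different exposure estimates.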

  4. Reliability assessment using degradation models: bayesian and classical approaches

    Directory of Open Access Journals (Sweden)

    Marta Afonso Freitas

    2010-04-01

    Full Text Available Traditionally, reliability assessment of devices has been based on (accelerated) life tests. However, for highly reliable products, little information about reliability is provided by life tests, in which few or no failures are typically observed. Since most failures arise from a degradation mechanism at work for which there are characteristics that degrade over time, one alternative is to monitor the device for a period of time and assess its reliability from the changes in performance (degradation) observed during that period. The goal of this article is to illustrate how degradation data can be modeled and analyzed by using "classical" and Bayesian approaches. Four methods of data analysis based on classical inference are presented. Next we show how Bayesian methods can also be used to provide a natural approach to analyzing degradation data. The approaches are applied to a real data set regarding train wheel degradation.
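
A toy classical analysis in the spirit of the degradation approach above: fit each unit's degradation slope and convert it to a pseudo-failure time at a threshold. The inspection times, noise level, degradation rates, and threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1.0, 11.0)   # inspection times
threshold = 50.0           # degradation level defining "failure"

# Simulate degradation paths D(t) = b * t + noise for several units.
true_rates = np.array([3.0, 4.0, 5.0])
pseudo_failure_times = []
for b in true_rates:
    d = b * t + rng.normal(scale=0.2, size=t.size)
    b_hat = (t @ d) / (t @ t)   # least-squares slope through the origin
    pseudo_failure_times.append(threshold / b_hat)

print(np.round(pseudo_failure_times, 1))  # roughly [16.7, 12.5, 10.0]
```

The resulting pseudo-failure times can then be fed into an ordinary lifetime analysis, even though no unit was run to failure; the Bayesian route instead places priors on the degradation rates.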

  5. Space-Wise approach for airborne gravity data modelling

    Science.gov (United States)

    Sampietro, D.; Capponi, M.; Mansi, A. H.; Gatti, A.; Marchetti, P.; Sansò, F.

    2017-05-01

    Regional gravity field modelling by means of the remove-compute-restore procedure is nowadays widely applied in different contexts: it is the most used technique for regional gravimetric geoid determination, and it is also used in exploration geophysics to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.), which are useful to understand and map geological structures in a specific region. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are usually adopted. However, due to the relatively high acquisition velocity, presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are usually contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a software package to filter and grid raw airborne observations is presented: the proposed solution consists in a combination of an along-track Wiener filter and a classical Least Squares Collocation technique. Basically, the proposed procedure is an adaptation to airborne gravimetry of the Space-Wise approach, developed by Politecnico di Milano to process data coming from the ESA satellite mission GOCE. Among the main differences with respect to the satellite application of this approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a-priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. The presented solution is suited for airborne data analysis, making it possible to quickly and easily filter and grid gravity observations. Some innovative theoretical aspects, focusing in particular on the theoretical covariance modelling, are presented as well.
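
A minimal frequency-domain Wiener filter on synthetic along-track data. Here the signal and noise spectra are known by construction; as the record notes, in real airborne gravimetry they must be estimated from the dataset itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024
x = np.linspace(0.0, 1.0, n, endpoint=False)
signal = np.sin(2 * np.pi * 3 * x)       # smooth "gravity" signal
noise = rng.normal(scale=1.0, size=n)    # high observation error
obs = signal + noise

# Wiener gain per frequency: S_signal / (S_signal + S_noise).
S = np.fft.rfft(obs)
freqs = np.fft.rfftfreq(n, d=1.0 / n)
s_signal = np.where(np.isclose(freqs, 3.0), (n / 2) ** 2, 0.0)  # line spectrum
s_noise = n * 1.0                        # white-noise power level
gain = s_signal / (s_signal + s_noise)
filtered = np.fft.irfft(gain * S, n)

err_before = np.mean((obs - signal) ** 2)
err_after = np.mean((filtered - signal) ** 2)
print(err_before, err_after)  # filtering reduces the error substantially
```

The full Space-Wise pipeline then feeds the filtered track data into Least Squares Collocation to produce the final grid.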

  7. Box-wing model approach for solar radiation pressure modelling in a multi-GNSS scenario

    Science.gov (United States)

    Tobias, Guillermo; Jesús García, Adrián

    2016-04-01

    The solar radiation pressure force is the largest orbital perturbation after the gravitational effects and the major error source affecting GNSS satellites. A wide range of approaches have been developed over the years for the modelling of this non-gravitational effect as part of the orbit determination process. These approaches are commonly divided into empirical, semi-analytical and analytical, where their main difference relies on the amount of a-priori physical information about the properties of the satellites (materials and geometry) and their attitude. It has been shown in the past that the pre-launch analytical models fail to achieve the desired accuracy, mainly due to difficulties in the extrapolation of the in-orbit optical and thermal properties, the perturbations in the nominal attitude law and the aging of the satellite's surfaces, whereas the accuracy of empirical models depends strongly on the amount of tracking data used to derive them, and their performance degrades as the area-to-mass ratio of the GNSS satellites increases, as happens for upcoming constellations such as BeiDou and Galileo. This paper proposes to use a basic box-wing model for Galileo, complemented with empirical parameters, based on the limited available information about the Galileo satellites' geometry. The satellite is modelled as a box, representing the satellite bus, and a wing representing the solar panel. The performance of the model will be assessed for the GPS, GLONASS and Galileo constellations. The results of the proposed approach have been analyzed over a one year period. In order to assess the results, two different SRP models have been used: firstly, the proposed box-wing model and, secondly, the new CODE empirical model, ECOM2. The orbit performances of both models are assessed using Satellite Laser Ranging (SLR) measurements, together with the evaluation of the orbit prediction accuracy. This comparison shows the advantages and disadvantages of
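
For scale, a rough sketch of the SRP acceleration magnitude for a box-wing geometry. The areas, mass, and reflectivity below are hypothetical; real models resolve per-surface optical properties, shadowing, and attitude, which this one-line estimate ignores.

```python
AU = 149_597_870_700.0   # astronomical unit, m
P0 = 4.56e-6             # solar radiation pressure at 1 AU, N/m^2

def srp_acceleration(area_bus, area_wing, mass, reflectivity, r_sun):
    """Very rough SRP acceleration magnitude for a box-wing satellite with
    the Sun normal to both surfaces (illustrative numbers only)."""
    area = area_bus + area_wing
    return P0 * (AU / r_sun) ** 2 * (1.0 + reflectivity) * area / mass

# Hypothetical GNSS-like satellite: ~1 m^2 bus face, ~10 m^2 panel, 700 kg.
a = srp_acceleration(1.0, 10.0, 700.0, 0.3, AU)
print(a)  # on the order of 1e-7 m/s^2
```

An acceleration of order 1e-7 m/s^2 integrated over a day moves an orbit by metres, which is why SRP mismodelling dominates the GNSS orbit error budget.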

  8. MDA-Based 3G Service Creation Approach and Telecom Service Domain Meta-Model

    Institute of Scientific and Technical Information of China (English)

    QIAO Xiu-quan; LI Xiao-feng; LI Yan

    2006-01-01

    This paper presents a model-driven 3G service creation approach based on model driven architecture technology. The focus of the paper is the methodology of designing a telecommunication service-related meta-model and its profile implementation mechanism. This approach enhances the reusability of applications through the separation of service logic models from concrete open application programming interface technologies and implementation technologies.

  9. A cost minimisation analysis in teledermatology: model-based approach

    Directory of Open Access Journals (Sweden)

    Eminović Nina

    2010-08-01

    Full Text Available Abstract Background Although store-and-forward teledermatology is increasingly becoming popular, evidence on its effects on efficiency and costs is lacking. The aim of this study, performed in addition to a clustered randomised trial, was to investigate to what extent and under which conditions store-and-forward teledermatology can reduce costs from a societal perspective. Methods A cost minimisation study design (a model-based approach) was applied to compare teledermatology and conventional process costs per dermatology patient care episode. Regarding the societal perspective, total mean costs of investment, general practitioner, dermatologists, out-of-pocket expenses and employer costs were calculated. Uncertainty analysis was performed using Monte Carlo simulation with 31 distributions in the used cost model. Scenario analysis was performed using one-way and two-way sensitivity analyses with the following variables: the patient travel distance to physician and dermatologist, the duration of teleconsultation activities, and the proportion of preventable consultations. Results Total mean costs of the teledermatology process were €387 (95% CI, 281 to 502.5), while the total mean costs of the conventional process were €354.0 (95% CI, 228.0 to 484.0). The total mean difference between the processes was €32.5 (95% CI, -29.0 to 74.7). Savings by teledermatology can be achieved if the distance to a dermatologist is larger (≥ 75 km) or when more consultations (≥ 37%) can be prevented due to teledermatology. Conclusions Teledermatology, when applied to all dermatology referrals, has a probability of 0.11 of being cost saving to society. In order to achieve cost savings by teledermatology, teledermatology should be applied in only those cases with a reasonable probability that a live consultation can be prevented. Trial Registration This study is performed partially based on the PERFECT D Trial (Current Controlled Trials No. ISRCTN57478950).
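
The Monte Carlo uncertainty analysis above can be sketched in a few lines. The two distributions below are invented illustrations (normal, with made-up spreads), not the study's 31 calibrated distributions, so the resulting probability of saving differs from the reported 0.11.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical cost distributions per care episode (illustrative only):
tele = rng.normal(387.0, 56.0, n)          # teledermatology process
conventional = rng.normal(354.0, 65.0, n)  # conventional referral process

diff = tele - conventional
p_saving = np.mean(diff < 0)   # probability teledermatology is cheaper
print(round(diff.mean(), 1), round(p_saving, 2))
```

Reporting a probability of being cost saving, rather than only the mean difference, is the point of running the cost model stochastically.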

  10. Dynamical system approach to running Λ cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Stachowski, Aleksander [Jagiellonian University, Astronomical Observatory, Krakow (Poland); Szydlowski, Marek [Jagiellonian University, Astronomical Observatory, Krakow (Poland); Jagiellonian University, Mark Kac Complex Systems Research Centre, Krakow (Poland)

    2016-11-15

    We study the dynamics of cosmological models with a time dependent cosmological term. We consider five classes of models: two with the non-covariant parametrization of the cosmological term Λ, namely Λ(H)CDM cosmologies and Λ(a)CDM cosmologies, and three with the covariant parametrization of Λ: Λ(R)CDM cosmologies, where R(t) is the Ricci scalar, Λ(φ)-cosmologies with diffusion, and Λ(X)-cosmologies, where X = (1/2) g^{αβ} ∇_α ∇_β φ is the kinetic part of the density of the scalar field. We also consider the case of an emergent Λ(a) relation obtained from the behaviour of trajectories in a neighbourhood of an invariant submanifold. In the study of the dynamics we used dynamical system methods for investigating how an evolutionary scenario can depend on the choice of special initial conditions. We show that the methods of dynamical systems allow one to investigate all admissible solutions of a running Λ cosmology for all initial conditions. We interpret Alcaniz and Lima's approach as a scaling cosmology. We formulate the idea of an emergent cosmological term derived directly from an approximation of the exact dynamics. We show that some non-covariant parametrizations of the cosmological term, like Λ(a) and Λ(H), give rise to non-physical behaviour of trajectories in the phase space. This behaviour disappears if the term Λ(a) is emergent from the covariant parametrization. (orig.)
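
The dynamical-system viewpoint can be illustrated with the simplest case, constant Λ: for flat FRW with dust, every expanding trajectory is attracted to the de Sitter fixed point H* = sqrt(Λ/3). The units below are chosen for illustration so that H* = 1.

```python
import numpy as np

# Flat FRW cosmology with dust and a constant Λ (illustrative units where
# Λ = 3, so the de Sitter fixed point is H* = sqrt(Λ/3) = 1):
#     dH/dt = -(3/2) * (H**2 - Λ/3)
Lam = 3.0
H, dt = 5.0, 1e-3
for _ in range(20_000):
    H += dt * (-1.5 * (H * H - Lam / 3.0))

print(round(H, 6))  # the trajectory is attracted to the de Sitter state H* = 1
```

Running-Λ models replace the constant by Λ(H), Λ(a), etc., changing the fixed-point structure; the record's result is that some non-covariant choices introduce non-physical trajectories.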

  11. Habitat fragmentation and reproductive success: a structural equation modelling approach.

    Science.gov (United States)

    Le Tortorec, Eric; Helle, Samuli; Käyhkö, Niina; Suorsa, Petri; Huhta, Esa; Hakkarainen, Harri

    2013-09-01

    1. There is great interest on the effects of habitat fragmentation, whereby habitat is lost and the spatial configuration of remaining habitat patches is altered, on individual breeding performance. However, we still lack consensus of how this important process affects reproductive success, and whether its effects are mainly due to reduced fecundity or nestling survival. 2. The main reason for this may be the way that habitat fragmentation has been previously modelled. Studies have treated habitat loss and altered spatial configuration as two independent processes instead of as one hierarchical and interdependent process, and therefore have not been able to consider the relative direct and indirect effects of habitat loss and altered spatial configuration. 3. We investigated how habitat (i.e. old forest) fragmentation, caused by intense forest harvesting at the territory and landscape scales, is associated with the number of fledged offspring of an area-sensitive passerine, the Eurasian treecreeper (Certhia familiaris). We used structural equation modelling (SEM) to examine the complex hierarchical associations between habitat loss and altered spatial configuration on the number of fledged offspring, by controlling for individual condition and weather conditions during incubation. 4. Against generally held expectations, treecreeper reproductive success did not show a significant association with habitat fragmentation measured at the territory scale. Instead, our analyses suggested that an increasing amount of habitat at the landscape scale caused a significant increase in nest predation rates, leading to reduced reproductive success. This effect operated directly on nest predation rates, instead of acting indirectly through altered spatial configuration. 5. 
Because habitat amount and configuration are inherently strongly collinear, particularly when multiple scales are considered, our study demonstrates the usefulness of a SEM approach for hierarchical partitioning
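
The direct-versus-indirect decomposition at the heart of the SEM above can be sketched as a toy path analysis on synthetic data, with an assumed habitat → nest predation → reproductive success chain (all coefficients invented):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Synthetic path model: habitat amount raises nest predation (a = 0.6),
# and predation lowers reproductive success (b = -0.8); no direct path.
habitat = rng.normal(size=n)
predation = 0.6 * habitat + rng.normal(scale=0.5, size=n)
success = -0.8 * predation + rng.normal(scale=0.5, size=n)

def slope(x, y):
    X = np.column_stack([np.ones(n), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(habitat, predation)   # habitat -> predation path
# Regressing success on habitat AND predation separates the paths:
X = np.column_stack([np.ones(n), habitat, predation])
direct, b = np.linalg.lstsq(X, success, rcond=None)[0][1:]

print(round(a * b, 2), round(direct, 2))  # indirect ~ -0.48, direct ~ 0
```

A full SEM fits all paths simultaneously with latent variables, but the logic is the same: the product a*b is the indirect effect, and the residual habitat coefficient is the direct one.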

  12. A new approach for modelling variability in residential construction projects

    Directory of Open Access Journals (Sweden)

    Mehrdad Arashpour

    2013-06-01

    Full Text Available The construction industry is plagued by long cycle times caused by variability in the supply chain. Variations or undesirable situations are the result of factors such as non-standard practices, work site accidents, inclement weather conditions and faults in design. This paper uses a new approach for modelling variability in construction by linking relative variability indicators to processes. Mass homebuilding sector was chosen as the scope of the analysis because data is readily available. Numerous simulation experiments were designed by varying size of capacity buffers in front of trade contractors, availability of trade contractors, and level of variability in homebuilding processes. The measurements were shown to lead to an accurate determination of relationships between these factors and production parameters. The variability indicator was found to dramatically affect the tangible performance measures such as home completion rates. This study provides for future analysis of the production homebuilding sector, which may lead to improvements in performance and a faster product delivery to homebuyers. 
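
One standard way to link a relative variability indicator to cycle time, consistent with the record's finding that variability dominates completion rates, is Kingman's queueing approximation; this sketch uses hypothetical utilization and coefficients of variation, not the paper's simulation model:

```python
def kingman_wait(utilization, ca, cs, mean_service):
    """Kingman's approximation for mean queueing delay in front of a single
    trade contractor: W ~ (u/(1-u)) * ((ca^2 + cs^2)/2) * service time,
    where ca and cs are coefficients of variation of arrivals and service."""
    u = utilization
    return (u / (1.0 - u)) * ((ca**2 + cs**2) / 2.0) * mean_service

low_var = kingman_wait(0.9, 0.5, 0.5, mean_service=1.0)   # standardized work
high_var = kingman_wait(0.9, 1.5, 1.5, mean_service=1.0)  # high variability
print(round(low_var, 2), round(high_var, 2))  # 2.25 vs 20.25
```

At the same 90% utilization, tripling the variability coefficients multiplies the expected waiting time ninefold, which is why capacity buffers in front of trade contractors matter so much.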

  14. Discrete Variational Approach for Modeling Laser-Plasma Interactions

    Science.gov (United States)

    Reyes, J. Paxon; Shadwick, B. A.

    2014-10-01

    The traditional approach for fluid models of laser-plasma interactions begins by approximating fields and derivatives on a grid in space and time, leading to difference equations that are manipulated to create a time-advance algorithm. In contrast, by introducing the spatial discretization at the level of the action, the resulting Euler-Lagrange equations have particular differencing approximations that will exactly satisfy discrete versions of the relevant conservation laws. For example, applying a spatial discretization in the Lagrangian density leads to continuous-time, discrete-space equations and exact energy conservation regardless of the spatial grid resolution. We compare the results of two discrete variational methods using the variational principles from Chen and Sudan and Brizard. Since the fluid system conserves energy and momentum, the relative errors in these conserved quantities are well-motivated physically as figures of merit for a particular method. This work was supported by the U. S. Department of Energy under Contract No. DE-SC0008382 and by the National Science Foundation under Contract No. PHY-1104683.
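
The energy-conservation point can be illustrated on a harmonic oscillator: a variational (symplectic Euler) update keeps the discrete energy bounded at any resolution, while the naive explicit Euler differencing gains energy every step. This is a standard toy example, not the laser-plasma system of the record.

```python
def simulate(steps, dt, symplectic):
    """Harmonic oscillator x'' = -x, energy E = (x^2 + v^2)/2.
    Explicit Euler multiplies E by (1 + dt^2) each step; the variational
    update (momentum first, then position) keeps E bounded."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        if symplectic:
            v -= dt * x        # discrete Euler-Lagrange update: v first,
            x += dt * v        # then x with the NEW v
        else:
            x, v = x + dt * v, v - dt * x   # both from old values
    return 0.5 * (x * x + v * v)

e0 = 0.5
print(simulate(10_000, 0.01, symplectic=False) - e0)  # drifts upward
print(simulate(10_000, 0.01, symplectic=True) - e0)   # stays bounded
```

Discretizing at the level of the action, as in the record, generalizes this: the resulting difference equations inherit discrete conservation laws regardless of grid resolution.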

  15. Study of nuclear clustering using the modern shell model approach

    Science.gov (United States)

    Volya, Alexander; Tchuvil'Sky, Yury

    2014-03-01

    Nuclear clustering, alpha decays, and multi-particle correlations are important components of nuclear dynamics. In this work we use the modern configuration-interaction approach with most advanced realistic shell-model Hamiltonians to study these questions. We utilize the algebraic many-nucleon structures and the corresponding fractional parentage coefficients to build the translationally invariant wave functions of the alpha-cluster channels. We explore the alpha spectroscopic factors, study the distribution of clustering strength, and discuss the structure of an effective 4-body operator describing the in-medium alpha dynamics in the multi-shell valence configuration space. Sensitivity of alpha clustering to the components of an effective Hamiltonian, which includes its collective and many-body components, as well as isospin symmetry breaking terms, are of interest. We offer effective techniques for evaluation of the cluster spectroscopic factors satisfying the orthogonality conditions of the respective cluster channels. We present a study of clustering phenomena, single-particle dynamics, and electromagnetic transitions for a number of nuclei in p-sd shells and compare our results with the experimentally available data. This work is supported by the U.S. Department of Energy under contract number DE-SC0009883.

  16. Dynamic energy budget approaches for modelling organismal ageing.

    Science.gov (United States)

    van Leeuwen, Ingeborg M M; Vera, Julio; Wolkenhauer, Olaf

    2010-11-12

    Ageing is a complex multifactorial process involving a progressive physiological decline that, ultimately, leads to the death of an organism. It involves multiple changes in many components that play fundamental roles under healthy and pathological conditions. Simultaneously, every organism undergoes accumulative 'wear and tear' during its lifespan, which confounds the effects of the ageing process. The scenario is complicated even further by the presence of both age-dependent and age-independent competing causes of death. Various manipulations have been shown to interfere with the ageing process. Calorie restriction, for example, has been reported to increase the lifespan of a wide range of organisms, which suggests a strong relation between energy metabolism and ageing. Such a link is also supported within the main theories for ageing: the free radical hypothesis, for instance, links oxidative damage production directly to energy metabolism. The Dynamic Energy Budgets (DEB) theory, which characterizes the uptake and use of energy by living organisms, therefore constitutes a useful tool for gaining insight into the ageing process. Here we compare the existing DEB-based modelling approaches and, then, discuss how new biological evidence could be incorporated within a DEB framework.
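
A classic outcome of DEB theory, which the modelling approaches above build on, is von Bertalanffy growth toward an asymptotic size; a minimal sketch with invented parameters:

```python
import numpy as np

def von_bertalanffy(t, L_inf, rB, L0=0.0):
    """Length-at-age under the von Bertalanffy growth curve, the standard
    DEB result for isomorphic growth:  dL/dt = rB * (L_inf - L)."""
    return L_inf - (L_inf - L0) * np.exp(-rB * t)

t = np.linspace(0.0, 50.0, 6)
L = von_bertalanffy(t, L_inf=30.0, rB=0.1)
print(np.round(L, 1))  # growth decelerates toward the asymptotic size 30
```

DEB-based ageing models couple this energy-allocation machinery to damage accumulation (e.g. via metabolic rate), which is how calorie restriction enters the framework.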

  17. Structure-based molecular modeling approaches to GPCR oligomerization.

    Science.gov (United States)

    Kaczor, Agnieszka A; Selent, Jana; Poso, Antti

    2013-01-01

    Classical structure-based drug design techniques using G-protein-coupled receptors (GPCRs) as targets focus nearly exclusively on binding at the orthosteric site of a single receptor. Dimerization and oligomerization of GPCRs, proposed almost 30 years ago, have, however, crucial relevance for drug design. Targeting these complexes selectively or designing small molecules that affect receptor-receptor interactions might provide new opportunities for novel drug discovery. In order to study the mechanisms and dynamics that govern GPCR oligomerization, it is essential to understand the dynamic process of receptor-receptor association and to identify regions that are suitable for selective drug binding; these may be determined with experimental methods such as Förster resonance energy transfer (FRET) or bioluminescence resonance energy transfer (BRET) and with computational sequence- and structure-based approaches. The aim of this chapter is to provide a comprehensive description of the structure-based molecular modeling methods for studying GPCR dimerization, that is, protein-protein docking, molecular dynamics, normal mode analysis, and electrostatics studies.

  18. Systematic approach for the identification of process reference models

    CSIR Research Space (South Africa)

    Van Der Merwe, A

    2009-02-01

    Full Text Available Process models are used in different application domains to capture knowledge on the process flow. Process reference models (PRM) are used to capture reusable process models, which should simplify the identification process of process models...

  19. A structured and object oriented approach to training system modeling

    OpenAIRE

    Malysheva Elena Yuryevna; Bobrovsky Sergey Michailovich

    2015-01-01

    Structured Analysis and Object Oriented Analysis are widely adopted for system modelling. The article describes university training system modelling as an example of both structured modelling and object-oriented modelling.

  20. BUSINESS MODEL IN ELECTRICITY INDUSTRY USING BUSINESS MODEL CANVAS APPROACH; THE CASE OF PT. XYZ

    Directory of Open Access Journals (Sweden)

    Achmad Arief Wicaksono

    2017-01-01

    Full Text Available The magnitude of opportunities and project values of the electricity system in Indonesia encourages PT. XYZ to develop its business in the electrical sector, which requires business development strategies. This study aims to identify the company's business model using the Business Model Canvas approach, formulate business development strategy alternatives, and determine the prioritized business development strategy which is appropriate to the manufacturing business model for PT. XYZ. This study utilized a descriptive approach and the nine elements of the Business Model Canvas. Alternative formulation and priority determination of the strategies were obtained by using Strengths, Weaknesses, Opportunities, Threats (SWOT analysis and pairwise comparison. The results of this study are the improvement of the Business Model Canvas on the elements of key resources, key activities, key partners and customer segment. In terms of SWOT analysis on the nine elements of the Business Model Canvas, for the first business development the results show an expansion of the power plant construction project as the main contractor and an increase in sales in its core business of supporting equipment for the oil and gas industry; the second business development is an investment in the electricity sector as an independent renewable energy-based power producer. On its first business development, PT. XYZ selected three Business Model Canvas elements as the company's priorities, i.e. key resources weighing 0.252, key activities weighing 0.240, and key partners weighing 0.231. On its second business development, the company selected three elements as its priorities, i.e. key partners weighing 0.225, customer segments weighing 0.217, and key resources weighing 0.215. Keywords: business model canvas, SWOT, pairwise comparison, business model

  1. Risk evaluation of uranium mining: A geochemical inverse modelling approach

    Science.gov (United States)

    Rillard, J.; Zuddas, P.; Scislewski, A.

    2011-12-01

    It is well known that uranium extraction operations can increase risks linked to radiation exposure. The toxicity of uranium and associated heavy metals is the main environmental concern regarding the exploitation and processing of U-ore. In areas where U mining is planned, a careful assessment of toxic and radioactive element concentrations is recommended before the start of mining activities. A background evaluation of harmful elements is important in order to prevent and/or quantify future water contamination resulting from the possible migration of toxic metals from ore and waste water interaction. Controlled leaching experiments were carried out to investigate processes of ore and waste (leached ore) degradation, using samples from the uranium exploitation site located in Caetité-Bahia, Brazil. In experiments in which the reaction of waste with water was tested, we found that the water had low pH and high levels of sulphates and aluminium. On the other hand, in experiments in which ore was tested, the water had a chemical composition comparable to natural water found in the region of Caetité. On the basis of our experiments, we suggest that waste resulting from sulphuric acid treatment can induce acidification and salinization of surface and ground water. For this reason, proper storage of waste is imperative. As a tool to evaluate the risks, a geochemical inverse modelling approach was developed to estimate the water-mineral interaction involving the presence of toxic elements. We used a method described earlier by Scislewski and Zuddas (2010) (Geochim. Cosmochim. Acta 74, 6996-7007) in which the reactive surface area of mineral dissolution can be estimated. We found that the reactive surface area of rock parent minerals is not constant over time but varies by several orders of magnitude in only two months of interaction. 
We propose that parent mineral heterogeneity and particularly, neogenic phase formation may explain the observed variation of the

  2. A Comparison of Filter-based Approaches for Model-based Prognostics

    Data.gov (United States)

    National Aeronautics and Space Administration — Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is...

  3. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    Science.gov (United States)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) the Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed. One is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions on water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility in formulating proper objective functions and constraints for various optimization problems. 
On the other hand, it has been demonstrated that surrogate modeling approaches can pave the way for such incorporation in real
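
    The response-surface idea behind these surrogate approaches can be sketched in a few lines: fit a cheap polynomial to a handful of expensive model runs, then optimize the polynomial instead of the model. The "expensive" objective and sample points below are hypothetical stand-ins for an IHM run:

```python
def expensive_model(x):
    # Stand-in for a costly integrated-model evaluation (hypothetical objective)
    return (x - 3.2) ** 2 + 1.0

def quadratic_surrogate(samples):
    """Fit y = a*x^2 + b*x + c exactly through three (x, y) samples
    using Newton divided differences."""
    (x0, y0), (x1, y1), (x2, y2) = samples
    d1 = (y1 - y0) / (x1 - x0)
    d2 = ((y2 - y1) / (x2 - x1) - d1) / (x2 - x0)
    a = d2
    b = d1 - d2 * (x0 + x1)
    c = y0 - d1 * x0 + d2 * x0 * x1
    return a, b, c

xs = [0.0, 2.0, 5.0]                      # only three expensive evaluations
a, b, c = quadratic_surrogate([(x, expensive_model(x)) for x in xs])
x_opt = -b / (2 * a)                      # optimize the cheap surrogate instead
```

    Real response-surface methods (DYCORS, PCM) iterate this loop, adding new expensive evaluations near the surrogate optimum; the sketch shows only one pass.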

  4. A Unified Component Modeling Approach for Performance Estimation in Hardware/Software Codesign

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Madsen, Jan

    1998-01-01

    This paper presents an approach for abstract modeling of hardware/software architectures using Hierarchical Colored Petri Nets. The approach is able to capture complex behavioral characteristics often seen in software and hardware architectures, thus it is suitable for high level codesign issues such as performance estimation. In this paper, the development of a model of the ARM7 processor [5] is described to illustrate the full potential of the modeling approach. To further illustrate the approach, a cache model is also described. The approach and related tools are currently being implemented in the LYCOS...

  5. High-resolution urban flood modelling - a joint probability approach

    Science.gov (United States)

    Hartnett, Michael; Olbert, Agnieszka; Nash, Stephen

    2017-04-01

    The hydrodynamic modelling of rapid flood events due to extreme climatic events in urban environments is a complex and challenging task. The horizontal resolution necessary to resolve the complexity of urban flood dynamics is a critical issue; the presence of obstacles of varying shapes and length scales, gaps between buildings and complex city geometry such as slopes all affect flow paths and flood level magnitudes. These small-scale processes require a high-resolution grid to be modelled accurately (2 m or less; Olbert et al., 2015; Hunter et al., 2008; Brown et al., 2007) and, therefore, altimetry data of at least the same resolution. With the availability of high-resolution LiDAR data and greater computational capabilities, as well as state-of-the-art nested modelling approaches, these problems can now be overcome. Flooding and drying, domain definition, frictional resistance and boundary descriptions are all important issues to be addressed when modelling urban flooding. In recent years, the number of urban flood models has increased dramatically, giving good insight into various modelling problems and solutions (Mark et al., 2004; Mason et al., 2007; Fewtrell et al., 2008; Shubert et al., 2008). Despite extensive modelling work conducted for fluvial (e.g. Mignot et al., 2006; Hunter et al., 2008; Yu and Lane, 2006) and coastal mechanisms of flooding (e.g. Gallien et al., 2011; Yang et al., 2012), the number of investigations into combined coastal-fluvial flooding is still very limited (e.g. Orton et al., 2012; Lian et al., 2013). This is surprising given the extent of flood consequences when both mechanisms occur simultaneously, which usually happens when they are driven by one process such as a storm. 
The reason may be that the likelihood of a joint event is much smaller than that of either of the two contributors occurring individually, because for fast-moving storms the rainfall-driven fluvial flood usually arrives later than the storm surge
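
    A back-of-the-envelope sketch of why the joint event is rarer than either driver alone (the probabilities and the linear dependence blend below are illustrative assumptions, not values from the study):

```python
def joint_annual_probability(p_surge, p_fluvial, dependence=0.0):
    """Annual probability of a combined coastal-fluvial flood.

    dependence = 0 -> fully independent drivers
    dependence = 1 -> the rarer event always co-occurs with the other
    (a simple linear blend between the two limits, for illustration)
    """
    p_independent = p_surge * p_fluvial
    p_comonotone = min(p_surge, p_fluvial)
    return p_independent + dependence * (p_comonotone - p_independent)

p_ind = joint_annual_probability(0.02, 0.05)         # independent: 1-in-1000
p_dep = joint_annual_probability(0.02, 0.05, 0.5)    # storm-driven correlation
```

    Even a moderate storm-driven correlation raises the joint probability by an order of magnitude over the independence assumption, which is why joint-probability treatment matters.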

  6. Modeling of Agile Manufacturing Execution Systems with an Agent-based Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Agile manufacturing execution systems (AMES) are used to help manufacturers optimize shop floor production in an agile way, and the modeling of AMES is the key issue in realizing them. This paper presents an agent-based approach to AMES modeling. Firstly, the characteristics of AMES and its requirements on modeling are discussed. Secondly, a comparative analysis of modeling methods is carried out, and AMES modeling using an agent-based approach is put forward. The agent-based modeling method not only inherit ...

  7. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  8. Optimising GPR modelling: A practical, multi-threaded approach to 3D FDTD numerical modelling

    Science.gov (United States)

    Millington, T. M.; Cassidy, N. J.

    2010-09-01

    The demand for advanced interpretational tools has led to the development of highly sophisticated, computationally demanding, 3D GPR processing and modelling techniques. Many of these methods solve very large problems with stepwise methods that utilise numerically similar functions within iterative computational loops. Problems of this nature are readily parallelised by splitting the computational domain into smaller, independent chunks for direct use on cluster-style, multi-processor supercomputers. Unfortunately, the implications of running such facilities, as well as the time investment needed to develop the parallel codes, mean that for most researchers the use of these advanced methods is too impractical. In this paper, we propose an alternative method of parallelisation which exploits the capabilities of modern multi-core processors (upon which today's desktop PCs are built) by multi-threading the calculation of a problem's individual sub-solutions. To illustrate the approach, we have applied it to an advanced, 3D, finite-difference time-domain (FDTD) GPR modelling tool in which the calculation of the individual vector field components is multi-threaded. To be of practical use, the FDTD scheme must be able to deliver accurate results with short execution times and we, therefore, show that the performance benefits of our approach can deliver runtimes less than half those of the more conventional, serial programming techniques. We evaluate implementations of the technique using different programming languages (e.g., Matlab, Java, C++), which will facilitate the construction of a flexible modelling tool for use in future GPR research. The implementations are compared on a variety of typical hardware platforms, having between one and eight processing cores available, and also a modern Graphical Processing Unit (GPU)-based computer. 
Our results show that a multi-threaded xyz modelling approach is easy to implement and delivers excellent results when implemented
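
    The per-component multi-threading described above can be sketched with a standard thread pool, one worker per vector field component. A toy 1D stencil stands in for the FDTD update here (real FDTD couples E and H through curl terms, which this sketch omits):

```python
from concurrent.futures import ThreadPoolExecutor

def update_component(old, coeff):
    """Stand-in for one FDTD field-component update (e.g. Ex, Ey or Ez):
    each new cell depends only on the previous time step, so the three
    components can be advanced concurrently."""
    return [coeff * (old[i - 1] + old[i + 1]) / 2 if 0 < i < len(old) - 1 else 0.0
            for i in range(len(old))]

fields = {"Ex": [0, 1, 2, 3, 0], "Ey": [0, 2, 4, 2, 0], "Ez": [0, 0, 1, 0, 0]}

# One worker thread per vector component, mirroring the per-component
# multi-threading described in the abstract.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = {name: pool.submit(update_component, grid, 0.5)
               for name, grid in fields.items()}
    new_fields = {name: f.result() for name, f in futures.items()}
```

    In CPython the arithmetic itself still serialises on the GIL; the structural point — independent per-component updates farmed out to a pool — is what carries over to C++ or Java implementations.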

  9. Towards a Social Model Approach to Counselling Disabled Clients.

    Science.gov (United States)

    Swain, John; Griffiths, Carol; Heyman, Bob

    2003-01-01

    Explores the possible conflicts between counseling approaches that can individualize and personalize problems and disability as a political issue and illustrates the social construction of disability as an individualized problem within the counseling process. Considering the implications for counseling practice, argues for an approach to…

  10. A model based wireless monitoring approach for traffic noise

    NARCIS (Netherlands)

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2011-01-01

    In order to have a good understanding of the environmental acoustic effects of traffic, it is important to perform long-term monitoring within large areas. With traditional monitoring approaches this is largely infeasible and the costs are relatively high. Within TNO a new wireless monitoring approach

  11. Fluid versus global model approach for the modeling of active species production by streamer discharge

    Science.gov (United States)

    Levko, Dmitry; Raja, Laxminarayan L.

    2017-03-01

    In this paper, we seek to validate the zero-dimensional (global) model approach for the modeling of the plasma composition in high pressure reactive streamer discharges. We focus on streamers typical of dielectric barrier discharges that are widely used, for instance, for plasma-assisted reforming of greenhouse gases. However, our conclusions can be extended to the streamers used in plasma-assisted ignition/combustion and other related systems. First, we perform two-dimensional fluid simulations for streamers with positive and negative trigger voltages and analyze the difference between the breakdown mechanisms of these two modes. Second, we use the time evolution of the electron heating term obtained from the fluid simulations as the input parameter of the global model and compare the plasma composition predicted by this model with the results of the fluid model. We obtain very good agreement between the fluid and global models for all species generated in the plasma. However, we conclude that streamers initiated by positive and negative trigger voltages cannot be considered symmetrical, as is usually done in global models of barrier discharge reactors.
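
    The coupling described above — feeding a heating term taken from a fluid run into a zero-dimensional model — can be sketched as a single volume-averaged rate equation integrated in time. The rate coefficients and the heating pulse below are invented for illustration:

```python
def global_model(heating, dt=1e-9, k_ion=1e7, k_rec=1e-13, n0=1e10):
    """Zero-dimensional (global) model sketch: volume-averaged electron
    density n driven by a time-dependent heating term, balanced against
    two-body recombination.

    dn/dt = k_ion * heating(t) - k_rec * n**2
    """
    n = n0
    history = []
    for h in heating:                       # explicit Euler time stepping
        n += dt * (k_ion * h - k_rec * n * n)
        history.append(n)
    return history

# Hypothetical heating pulse "extracted" from a fluid run: on, then off
pulse = [1e12] * 50 + [0.0] * 50
densities = global_model(pulse)
```

    Density rises while the pulse is on and decays slowly through recombination afterwards; in the paper's workflow the `heating` array would come from the 2D fluid simulation rather than being prescribed.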

  12. The economic value of CAD systems in structural design and construction: A modelling approach

    NARCIS (Netherlands)

    Chandansingh, R.A.

    1995-01-01

    A modelling approach is provided for the analysis of cost-effects of CAD systems. It aims to support strategic management of CAD systems in structural design and construction. The approach is based on the production digraph model of production processes, and the value-added model of information comm

  13. Serving many at once: How a database approach can create unity in dynamical ecosystem modelling

    NARCIS (Netherlands)

    Mooij, W.M.; Brederveld, R.J.; de Klein, J.J.M; DeAngelis, D.L.; Downing, Andrea; Faber, M.; Gerla, Daan J.; Hipsey, M.R.; 't Hoen, J.; Janse, J.H.; Janssen, A.B.G.; Jeuken, M.; Kooi, B.W.; Lischke, B.; Petzoldt, T.; Postma, L.; Schep, S.A.; Scholten, H.; Teurlincx, S.; Thiange, C.; Trolle, D.; van Dam, A.A.; Van Gerven, L.P.A.; Van Nes, E.H.; Kuiper, J.J.

    2014-01-01

    Simulation modelling in ecology is a field that is becoming increasingly compartmentalized. Here we propose a Database Approach To Modelling (DATM) to create unity in dynamical ecosystem modelling with differential equations. In this approach the storage of ecological knowledge is independent of the
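
    The core DATM idea — keeping the ecological knowledge (state variables, process rates, stoichiometry) in a framework-independent table from which a simulator is generated — might be sketched as follows (a hypothetical two-variable example, not the authors' actual schema):

```python
# "Database" of state variables and process definitions, independent of
# any particular simulation framework (hypothetical nutrient-algae model)
MODEL_DB = {
    "states": {"algae": 1.0, "nutrient": 10.0},
    "processes": [
        # Monod-type algal growth consumes nutrient
        {"rate": lambda s: 0.5 * s["algae"] * s["nutrient"] / (2.0 + s["nutrient"]),
         "stoichiometry": {"algae": +1.0, "nutrient": -1.0}},
        # First-order algal mortality
        {"rate": lambda s: 0.1 * s["algae"],
         "stoichiometry": {"algae": -1.0}},
    ],
}

def step(states, processes, dt):
    """One explicit Euler step generated from the database contents."""
    deriv = {k: 0.0 for k in states}
    for proc in processes:
        r = proc["rate"](states)
        for var, coef in proc["stoichiometry"].items():
            deriv[var] += coef * r
    return {k: v + dt * deriv[k] for k, v in states.items()}

state = dict(MODEL_DB["states"])
for _ in range(10):
    state = step(state, MODEL_DB["processes"], 0.01)
```

    Because the solver only reads the table, the same stored model could be handed to any differential-equation framework, which is the unity the DATM paper argues for.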

  14. A toolkit modeling approach for sustainable forest management planning: achieving balance between science and local needs

    Science.gov (United States)

    Brian R. Sturtevant; Andrew Fall; Daniel D. Kneeshaw; Neal P. P. Simon; Michael J. Papaik; Kati Berninger; Frederik Doyon; Don G. Morgan; Christian Messier

    2007-01-01

    To assist forest managers in balancing an increasing diversity of resource objectives, we developed a toolkit modeling approach for sustainable forest management (SFM). The approach inserts a meta-modeling strategy into a collaborative modeling framework grounded in adaptive management philosophy that facilitates participation among stakeholders, decision makers, and...

  15. Serving many at once: How a database approach can create unity in dynamical ecosystem modelling

    NARCIS (Netherlands)

    Mooij, W.M.; Brederveld, R.J.; de Klein, J.J.M.; Deangelis, D.L.; Downing, A.S.; Faber, M.J.; Gerla, D.J.; Hipsey, M.R.; 't Hoen, J.; Janse, J.H.; Janssen, A.B.G.; Jeuken, M.; Kooi, B.W.; Lischke, B.; Petzoldt, T.; Postma, L.; Schep, S.A.; Scholten, H.; Teurlincx, S.; Thiange, C.; Trolle, D.; van Dam, A.A.; van Gerven, L.P.A.; van Nes, E.H.; Kuipers, J.

    2014-01-01

    Simulation modelling in ecology is a field that is becoming increasingly compartmentalized. Here we propose a Database Approach To Modelling (DATM) to create unity in dynamical ecosystem modelling with differential equations. In this approach the storage of ecological knowledge is independent of the

  16. An integrated approach for modelling of aircraft maintenance processes

    Directory of Open Access Journals (Sweden)

    D. Yu. Kiselev

    2015-01-01

    Full Text Available The paper deals with modeling of the processes of maintenance and repair of aircraft. The role of information in improving the effectiveness of maintenance systems is described. The methodology for functional modelling of maintenance processes is given. A simulation model is used for modelling possible changes.

  17. Numerical Modelling Approaches for Sediment Transport in Sewer Systems

    DEFF Research Database (Denmark)

    Mark, Ole

    A study of the sediment transport processes in sewers has been carried out. Based on this study a mathematical modelling system has been developed to describe the transport processes of sediments and dissolved matter in sewer systems. The modelling system consists of three sub-models which constitute the basic modelling system necessary to give a description of the most dominant physical transport processes concerning particles and dissolved matter in sewer systems: A surface model. An advection-dispersion model. A sediment transport model....

  18. Computer simulation modeling of abnormal behavior: a program approach.

    Science.gov (United States)

    Reilly, K D; Freese, M R; Rowe, P B

    1984-07-01

    There is a need to model abnormal behavior on a comprehensive, systematic basis. Computer modeling and simulation tools offer especially good opportunities to establish such a program of studies. The issues concern deciding which modeling tools to use, how to relate models to behavioral data, what level of modeling to employ, and how to articulate theory to facilitate such modeling. Four levels or types of modeling, two qualitative and two quantitative, are identified. Their properties are examined and interrelated, including illustrative applications to the study of abnormal behavior, with an emphasis on schizophrenia.

  19. Modeling in applied sciences a kinetic theory approach

    CERN Document Server

    Pulvirenti, Mario

    2000-01-01

    Modeling complex biological, chemical, and physical systems, in the context of spatially heterogeneous media, is a challenging task for scientists and engineers using traditional methods of analysis. Modeling in Applied Sciences is a comprehensive survey of modeling large systems using kinetic equations, in particular the Boltzmann equation and its generalizations. An interdisciplinary group of leading authorities carefully develops the foundations of kinetic models and discusses the connections and interactions between model theories, qualitative and computational analysis, and real-world applications. This book provides a thoroughly accessible and lucid overview of the different aspects, models, computations, and methodology of the kinetic-theory modeling process. Topics and Features: * Integrated modeling perspective utilized in all chapters * Fluid dynamics of reacting gases * Self-contained introduction to kinetic models * Becker–Döring equations * Nonlinear kinetic models with chemical reactions * Kinet...

  20. Analysis of Massive Emigration from Poland: The Model-Based Clustering Approach

    Science.gov (United States)

    Witek, Ewa

    The model-based approach assumes that the data are generated by a finite mixture of probability distributions, such as multivariate normal distributions. In finite mixture models, each component of the probability distribution corresponds to a cluster. The problem of determining the number of clusters and choosing an appropriate clustering method thereby becomes a problem of statistical model choice. Hence, the model-based approach provides a key advantage over heuristic clustering algorithms, because it selects both the correct model and the number of clusters.

  1. The threshold bias model: a mathematical model for the nomothetic approach of suicide.

    Directory of Open Access Journals (Sweden)

    Walter Sydney Dutra Folly

    Full Text Available BACKGROUND: Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. METHODOLOGY/PRINCIPAL FINDINGS: A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed in order to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. CONCLUSIONS/SIGNIFICANCE: The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health.
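
    The extrapolation step described above — fit the distribution parameters for two years, then extend them linearly to the next — can be sketched as follows (the parameter names and values are hypothetical, not those of the paper):

```python
def extrapolate_parameters(params_by_year, target_year):
    """Linear extrapolation of fitted model parameters from two past
    years to a target year, mirroring the 2001/2002 -> 2003 step."""
    (y1, p1), (y2, p2) = sorted(params_by_year.items())
    frac = (target_year - y1) / (y2 - y1)
    return {name: p1[name] + (p2[name] - p1[name]) * frac for name in p1}

# Hypothetical fitted parameter sets for two reference years
fitted = {2001: {"threshold_age": 14.0, "peak_rate": 17.5},
          2002: {"threshold_age": 13.8, "peak_rate": 17.9}}
params_2003 = extrapolate_parameters(fitted, 2003)
```

    With more than two reference years, an ordinary least-squares trend per parameter would replace the two-point line.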

  2. Integrating operational watershed and coastal models for the Iberian Coast: Watershed model implementation - A first approach

    Science.gov (United States)

    Brito, David; Campuzano, F. J.; Sobrinho, J.; Fernandes, R.; Neves, R.

    2015-12-01

    River discharges and loads are essential inputs to coastal seas, and thus to coastal seas modelling, and their properties are the result of all activities and policies carried out inland. For these reasons the main rivers have been the object of intense monitoring programs, which have generated an important amount of historical data. Due to the decline of the Portuguese hydrometric network, and in order to quantify and forecast surface water streamflow and nutrients to coastal areas, the MOHID Land model was applied to the Western Iberia Region with a 2 km horizontal resolution and to the Iberian Peninsula with a 10 km horizontal resolution. The domains were populated with land use and soil properties and forced with existing meteorological models. This approach also permits an understanding of how the flows and loads are generated, and a forecast of their values, which is of utmost importance for coastal ocean and estuarine forecasts. The final purpose of the implementation is to obtain fresh water quantity and quality estimates that can be used to support management decisions in the watershed and reservoirs, as well as in estuaries and coastal areas. A process-oriented model such as MOHID Land is essential for this type of simulation, as the model is independent of the number of river catchments. In this work, the MOHID Land model equations and parameterisations are described and an innovative methodology for watershed modelling is presented and validated for a large international river, the Tagus River, and the largest national river of Portugal, the Mondego River. Precipitation, streamflow and nutrient modelling results for these two rivers were compared with observations near their coastal outlets in order to evaluate the model's capacity to represent the main watershed trends. Finally, an annual budget of fresh water and nutrients transported by the main twenty-five rivers discharging on the Portuguese coast is presented.

  3. METHODOLOGICAL APPROACH AND MODEL ANALYSIS FOR IDENTIFICATION OF TOURIST TRENDS

    OpenAIRE

    Neven Šerić; Marijana Jurišić

    2015-01-01

    The draw and diversity of the destination's offer are antecedents of growth in tourism visits. Destination supply differentiation is achieved through new, specialised tourism products. The usual approach consists of forming specialised tourism products in accordance with the existing tourism destination image. Another approach, prevalent in the practice of developed tourism destinations, is based on innovating the destination supply in accordance with global tourism trends. For this ...

  4. Approaches to verification of two-dimensional water quality models

    Energy Technology Data Exchange (ETDEWEB)

    Butkus, S.R. (Tennessee Valley Authority, Chattanooga, TN (USA). Water Quality Dept.)

    1990-11-01

    The verification of a water quality model is the procedure most needed by decision makers evaluating model predictions, but it is often inadequate or not done at all. The results of a properly conducted verification provide decision makers with an estimate of the uncertainty associated with model predictions. Several statistical tests are available for quantifying the performance of a model. Six methods of verification were evaluated using an application of the BETTER two-dimensional water quality model for Chickamauga reservoir. Model predictions for ten state variables were compared to observed conditions from 1989. Spatial distributions of the verification measures showed the model predictions were generally adequate, except at a few specific locations in the reservoir. The most useful statistic was the mean standard error of the residuals. Quantifiable measures of model performance should be calculated during calibration and verification of future applications of the BETTER model. 25 refs., 5 figs., 7 tabs.
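
    A few of the quantitative verification measures mentioned above can be computed directly from paired predictions and observations (the numbers below are invented for illustration, not Chickamauga data):

```python
import math

def verification_stats(observed, predicted):
    """Basic verification measures: mean error (bias), RMSE, and the
    standard error of the mean residual."""
    n = len(observed)
    residuals = [p - o for o, p in zip(observed, predicted)]
    mean_error = sum(residuals) / n
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    var = sum((r - mean_error) ** 2 for r in residuals) / (n - 1)
    std_error = math.sqrt(var / n)          # standard error of the residual mean
    return {"mean_error": mean_error, "rmse": rmse, "std_error": std_error}

obs = [8.1, 7.9, 6.5, 5.2, 4.8]     # e.g. observed dissolved oxygen, mg/L
sim = [8.0, 8.2, 6.1, 5.5, 4.6]     # model predictions at the same stations
stats = verification_stats(obs, sim)
```

    A bias near zero with a small standard error of the residuals is the kind of evidence a decision maker needs before trusting the model's predictions.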

  5. A Boolean Approach to Airline Business Model Innovation

    DEFF Research Database (Denmark)

    Hvass, Kristian Anders

    Research in business model innovation has identified its significance in creating a sustainable competitive advantage for a firm, yet there are few empirical studies identifying which combination of business model activities lead to success and therefore deserve innovative attention. This study analyzes the business models of North America low-cost carriers from 2001 to 2010 using a Boolean minimization algorithm to identify which combinations of business model activities lead to operational profitability. The research aim is threefold: complement airline literature in the realm of business model innovation, introduce Boolean minimization methods to the field, and propose alternative business model activities to North American carriers striving for positive operating results.

  6. Probabilistic modelling in urban drainage – two approaches that explicitly account for temporal variation of model errors

    DEFF Research Database (Denmark)

    Löwe, Roland; Del Giudice, Dario; Mikkelsen, Peter Steen

    to observations. After a brief discussion of the assumptions made for likelihood-based parameter inference, we illustrated the basic principles of both approaches on the example of sewer flow modelling with a conceptual rainfall-runoff model. The results from a real-world case study suggested that both approaches...

  7. Kinetic modelling of RDF pyrolysis: Model-fitting and model-free approaches.

    Science.gov (United States)

    Çepelioğullar, Özge; Haykırı-Açma, Hanzade; Yaman, Serdar

    2016-02-01

    In this study, refuse derived fuel (RDF) was selected as the solid fuel and pyrolyzed in a thermal analyzer from room temperature to 900°C at heating rates of 5, 10, 20, and 50°C/min in an N2 atmosphere. The thermal data obtained were used to calculate the kinetic parameters using the Coats-Redfern, Friedman, Flynn-Wall-Ozawa (FWO) and Kissinger-Akahira-Sunose (KAS) methods. In the Coats-Redfern model, the decomposition process was assumed to consist of four independent reactions with different reaction orders. On the other hand, the model-free methods showed that the activation energy trend had similarities over the reaction progress ranges 0.1, 0.2-0.7 and 0.8-0.9. The average activation energies were found to lie between 73 and 161 kJ/mol, and the FWO and KAS models produced results closer to the average activation energies than the Friedman model. The experimental studies showed that RDF may be a sustainable and promising feedstock for alternative processes in terms of waste management strategies.
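
    The FWO method reduces to a linear regression of ln(β) against 1/T at a fixed conversion, with the slope converted to Ea via Doyle's approximation (Ea = -slope·R/1.052). The sketch below uses synthetic temperatures constructed to be consistent with Ea = 120 kJ/mol, not the study's measurements:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fwo_activation_energy(heating_rates, temps):
    """Flynn-Wall-Ozawa estimate at one fixed conversion: regress
    ln(beta) against 1/T and convert the slope with Doyle's
    approximation, Ea = -slope * R / 1.052."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(b) for b in heating_rates]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    return -slope * R / 1.052

betas = [5.0, 10.0, 20.0, 50.0]     # heating rates, °C/min (as in the study)
# Synthetic temperatures (K) at one conversion, built so Ea = 120 kJ/mol
temps = [1.052 * 120e3 / (R * (30.0 - math.log(b))) for b in betas]
Ea = fwo_activation_energy(betas, temps)
```

    Repeating the regression at each conversion level (0.1, 0.2, ..., 0.9) yields the Ea-versus-progress trend the abstract compares across the FWO, KAS and Friedman methods.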

  8. MODELLING OF AIR CONDITIONING SYSTEM BY FUZZY LOGIC APPROACH

    Directory of Open Access Journals (Sweden)

    Ahmet ÖZEK

    2004-03-01

    Full Text Available One of the main problems in control systems is the difficulty of forming the mathematical model associated with the control mechanism. Even when such a model can be formed, realizing the application with conventional logic may lead to very complex problems. Fuzzy logic, by contrast, can create a control mechanism without a mathematical model of the controlled system, using only linguistic variables. In this article, the modelling of an air conditioning system has been realized by fuzzy logic.
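A minimal sketch of that idea: a fuzzy controller maps a crisp input to a crisp output through linguistic rules and membership functions, with no plant model. The membership ranges and rule outputs below are invented for an air conditioning example and are not from the article:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fan_speed(temp_c):
    """Map room temperature (°C) to fan speed (%) via three linguistic rules:
       IF temp is cool THEN speed is low
       IF temp is warm THEN speed is medium
       IF temp is hot  THEN speed is high
    defuzzified as a weighted average of the rule outputs."""
    cool = tri(temp_c, 10.0, 18.0, 24.0)
    warm = tri(temp_c, 20.0, 26.0, 32.0)
    hot  = tri(temp_c, 28.0, 36.0, 44.0)
    weights = [cool, warm, hot]
    speeds  = [20.0, 55.0, 90.0]  # representative speed for each rule
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * s for w, s in zip(weights, speeds)) / total

print(fan_speed(22.0))  # partly "cool", partly "warm" -> blended speed
```

At 22 °C the "cool" and "warm" rules fire equally, so the output lands between their representative speeds; no differential equations of the room were needed.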

  9. A new approach for estimating the efficiencies of the nucleotide substitution models.

    Science.gov (United States)

    Som, Anup

    2007-04-01

    In this article, a new approach is presented for estimating the efficiencies of the nucleotide substitution models in a four-taxon case and then this approach is used to estimate the relative efficiencies of six substitution models under a wide variety of conditions. In this approach, efficiencies of the models are estimated by using a simple probability distribution theory. To assess the accuracy of the new approach, efficiencies of the models are also estimated by using the direct estimation method. Simulation results from the direct estimation method confirmed that the new approach is highly accurate. The success of the new approach opens a unique opportunity to develop analytical methods for estimating the relative efficiencies of the substitution models in a straightforward way.
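For context on what a nucleotide substitution model does, here is the simplest standard one, Jukes-Cantor (JC69), which corrects the observed proportion of differing sites into an evolutionary distance. JC69 is chosen here only as a representative; the record does not name the six models it compares, and the example sequences are invented:

```python
import math

def jc69_distance(seq1, seq2):
    """Jukes-Cantor (JC69) corrected distance, assuming equal base
    frequencies and equal substitution rates:
        d = -(3/4) * ln(1 - (4/3) * p)
    where p is the observed proportion of differing sites."""
    assert len(seq1) == len(seq2)
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    if p >= 0.75:
        raise ValueError("sequences too divergent for JC69 correction")
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# Invented 20-site sequences differing at 2 sites (p = 0.1).
s1 = "ACGTACGTACGTACGTACGT"
s2 = "ACGTACGAACGTACGTACCT"
print(round(jc69_distance(s1, s2), 4))
```

The corrected distance (about 0.107 here) is slightly larger than the raw difference p = 0.1, because the model accounts for multiple substitutions at the same site.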

  10. Teachers' Development Model to Authentic Assessment by Empowerment Evaluation Approach

    Science.gov (United States)

    Charoenchai, Charin; Phuseeorn, Songsak; Phengsawat, Waro

    2015-01-01

    The purposes of this study were 1) to study teachers' authentic assessment practices, their comprehension of authentic assessment, and their needs for authentic assessment development; 2) to create a teacher development model; 3) to trial the teacher development model; and 4) to evaluate the effectiveness of the teacher development model. The research is divided into 4…

  11. A transformation approach to modelling multi-modal diffusions

    DEFF Research Database (Denmark)

    Forman, Julie Lyng; Sørensen, Michael

    2014-01-01

    This paper demonstrates that flexible and statistically tractable multi-modal diffusion models can be attained by transformation of simple well-known diffusion models such as the Ornstein–Uhlenbeck model, or more generally a Pearson diffusion. The transformed diffusion inherits many properties...
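The transformation idea can be sketched as follows: simulate a simple unimodal diffusion (here Ornstein-Uhlenbeck, via Euler-Maruyama) and push it through a monotone map whose derivative is large near the centre, so the transformed marginal becomes bimodal. The cube-root transform below is our illustration of the principle, not the paper's construction:

```python
import math, random

random.seed(1)

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
# dX_t = -theta * X_t dt + sigma dW_t, stationary N(0, sigma^2 / (2 theta)).
theta, sigma, dt, n = 1.0, math.sqrt(2.0), 0.01, 200_000
x, xs = 0.0, []
for _ in range(n):
    x += -theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    xs.append(x)

# Monotone transformation of the unimodal OU path: the (signed) cube root
# stretches values near zero apart, so the transformed marginal is bimodal,
# with modes near +/-1 for a standard-normal stationary law.
ys = [math.copysign(abs(v) ** (1.0 / 3.0), v) for v in xs]

# Crude bimodality check: far fewer points near zero than near the modes.
near_zero = sum(-0.2 < y < 0.2 for y in ys) / len(ys)
near_mode = sum(0.8 < abs(y) < 1.2 for y in ys) / len(ys)
print(near_zero < near_mode)
```

Because the transform is monotone, the transformed process inherits the tractability of the underlying OU process (the paper's central point), while its marginal density has two modes.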

  12. The social relations model for family data : A multilevel approach

    NARCIS (Netherlands)

    Snijders, TAB; Kenny, DA

    1999-01-01

    Multilevel models are proposed to study relational or dyadic data from multiple persons in families or other groups. The variable under study is assumed to refer to a dyadic relation between individuals in the groups. The proposed models are elaborations of the Social Relations Model. The different…
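A simplified sketch of the Social Relations Model decomposition for round-robin family data: each dyadic score splits into a grand mean, an actor effect (how a person rates others), a partner effect (how a person is rated), and a relationship residual. The ratings below are hypothetical, and the full Warner-Kenny-Stoto estimators additionally adjust for the missing self-ratings:

```python
# Hypothetical round-robin ratings among four family members:
# rating[i][j] is person i's rating of person j (diagonal unused).
rating = [
    [None, 6, 5, 7],
    [4, None, 5, 6],
    [3, 4, None, 5],
    [6, 7, 6, None],
]
n = len(rating)
pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
grand = sum(rating[i][j] for i, j in pairs) / len(pairs)

# Actor effect: mean rating a person gives, relative to the grand mean.
actor = [sum(rating[i][j] for j in range(n) if j != i) / (n - 1) - grand
         for i in range(n)]
# Partner effect: mean rating a person receives, relative to the grand mean.
partner = [sum(rating[i][j] for i in range(n) if i != j) / (n - 1) - grand
           for j in range(n)]
# Relationship effect: what remains for each specific dyad.
relationship = {(i, j): rating[i][j] - grand - actor[i] - partner[j]
                for i, j in pairs}

print([round(a, 2) for a in actor])
```

The multilevel formulation in the record treats these actor, partner, and relationship components as random effects, which is what makes estimation with unbalanced family data tractable.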

  13. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  14. Some Asymptotic Inference in Multinomial Nonlinear Models (a Geometric Approach)

    Institute of Scientific and Technical Information of China (English)

    Wei, Bocheng

    1996-01-01

    A geometric framework is proposed for multinomial nonlinear models based on a modified version of the geometric structure presented by Bates & Watts [4]. We use this geometric framework to study some asymptotic inference in terms of curvatures for multinomial nonlinear models. Our previous results [15] for ordinary nonlinear regression models are extended to multinomial nonlinear models.

  15. Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

    NARCIS (Netherlands)

    Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; van den Berg, Stéphanie Martine

    2017-01-01

    Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the…
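A sketch of the bootstrap idea applied to AIC-based selection: refit the competing models on resampled data and record how often each one "wins"; unstable selections across resamples signal model selection uncertainty. The data and the two toy models below are illustrative, not from the study:

```python
import math, random

random.seed(7)

# Illustrative data: a weak linear trend plus noise.
n = 50
xs = [i / n for i in range(n)]
ys = [0.4 * x + random.gauss(0.0, 0.5) for x in xs]

def aic_mean(x, y):
    """AIC of an intercept-only model (k = 2: mean + error variance)."""
    m = sum(y) / len(y)
    rss = sum((v - m) ** 2 for v in y)
    return len(y) * math.log(rss / len(y)) + 2 * 2

def aic_linear(x, y):
    """AIC of a simple linear regression (k = 3: slope, intercept, variance)."""
    m, xb, yb = len(x), sum(x) / len(x), sum(y) / len(y)
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
        sum((xi - xb) ** 2 for xi in x)
    a = yb - b * xb
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return m * math.log(rss / m) + 2 * 3

# Bootstrap: resample (x, y) pairs with replacement, refit, count wins.
wins_linear, B = 0, 500
for _ in range(B):
    idx = [random.randrange(n) for _ in range(n)]
    bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
    if aic_linear(bx, by) < aic_mean(bx, by):
        wins_linear += 1

print(f"linear model selected in {wins_linear / B:.0%} of bootstrap samples")
```

If the selection frequency sits far from 0% or 100%, the AIC-based choice on the original sample is fragile, which is precisely the uncertainty the bootstrap approach is meant to expose.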

  18. An Analytics Approach to Adaptive Maturity Models using Organizational Characteristics

    NARCIS (Netherlands)

    Baars, T.; Mijnhardt, F.; Vlaanderen, K.; Spruit, M.

    2016-01-01

    Ever since the first incarnations of maturity models, critics have voiced several concerns with these frameworks. Indeed, a lack of model fit and oversimplification of the real world can be attributed to the rigidity of these models, which assumes that each organization that uses the framework is…

  19. A Structural Equation Approach to Models with Spatial Dependence

    NARCIS (Netherlands)

    Oud, J.H.L.; Folmer, H.

    2008-01-01

    We introduce the class of structural equation models (SEMs) and corresponding estimation procedures into a spatial dependence framework. SEM allows both latent and observed variables within one and the same (causal) model. Compared with models with observed variables only, this feature makes it possible…
