WorldWideScience

Sample records for component-based modelling approach

  1. Exploring component-based approaches in forest landscape modeling

    Science.gov (United States)

    H. S. He; D. R. Larsen; D. J. Mladenoff

    2002-01-01

    Forest management issues are increasingly required to be addressed in a spatial context, which has led to the development of spatially explicit forest landscape models. The numerous processes, complex spatial interactions, and diverse applications in spatial modeling make the development of forest landscape models difficult for any single research group. New...

  2. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan. Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  3. A systematic approach for component-based software development

    NARCIS (Netherlands)

    Guareis de farias, Cléver; van Sinderen, Marten J.; Ferreira Pires, Luis

    2000-01-01

    Component-based software development enables the construction of software artefacts by assembling prefabricated, configurable and independently evolving building blocks, called software components. This paper presents an approach for the development of component-based software artefacts. This

  4. A Combined Approach for Component-based Software Design

    NARCIS (Netherlands)

    Guareis de farias, Cléver; van Sinderen, Marten J.; Ferreira Pires, Luis; Quartel, Dick; Baldoni, R.

    2001-01-01

    Component-based software development enables the construction of software artefacts by assembling binary units of production, distribution and deployment, the so-called software components. Several approaches addressing component-based development have been proposed recently. Most of these

  5. Component-Based Approach in Learning Management System Development

    Science.gov (United States)

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes a component-based approach (CBA) to learning management system development. Learning objects as components of e-learning courses, and their metadata, are considered. The learning management system based on CBA being developed at Riga Technical University, namely its architecture, elements and possibilities, are…

  6. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, provided an inclusive component database is developed. (author)
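
    The function-table/trace-back idea above can be illustrated with a small sketch. The component names, deviations, and table layout below are hypothetical stand-ins, not the authors' actual data structures:

```python
import json

# Minimal sketch (hypothetical, not the paper's code) of component-based
# fault tree synthesis: each component maps an output deviation to the input
# deviations or internal failures that can cause it; a trace-back walk from
# a top event then yields OR-gates over those causes.

# Function table: (component, output_deviation) -> list of causes.
# A cause is either ("input", upstream_component, deviation) or
# ("basic", failure_mode_name).
FUNCTION_TABLES = {
    ("pump", "no_flow_out"): [("input", "valve", "no_flow_out"),
                              ("basic", "pump_fails_to_run")],
    ("valve", "no_flow_out"): [("input", "tank", "no_flow_out"),
                               ("basic", "valve_stuck_closed")],
    ("tank", "no_flow_out"): [("basic", "tank_empty")],
}

def trace_back(component, deviation):
    """Recursively expand a deviation into an OR-gate fault tree."""
    causes = FUNCTION_TABLES.get((component, deviation), [])
    gate = {"event": f"{component}:{deviation}", "OR": []}
    for cause in causes:
        if cause[0] == "basic":
            gate["OR"].append({"basic_event": cause[1]})
        else:  # propagate upstream through the component's input
            gate["OR"].append(trace_back(cause[1], cause[2]))
    return gate

print(json.dumps(trace_back("pump", "no_flow_out"), indent=2))
```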

  7. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  8. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos

    2012-07-01

    We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis.

  9. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. In a MATLAB/Simulink model, plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow; then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to meet the demands for more functionality, at even lower prices, and under opposing constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to wrap the model back into Simulink S-functions and use Simulink's extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S-functions.

  10. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems: logical partitions of the schema. In this context, scope is introduced as an escalated property of transactions. Components are studied from the integrity, consistency, and concurrency control perspectives. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component's interface and measured in terms of adaptability, degree of composability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP'04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  11. A Component Based Approach to Scientific Workflow Management

    CERN Document Server

    Le Goff, Jean-Marie; Baker, Nigel; Brooks, Peter; McClatchey, Richard

    2001-01-01

    CRISTAL is a distributed scientific workflow system used in the manufacturing and production phases of HEP experiment construction at CERN. The CRISTAL project has studied the use of a description driven approach, using meta-modelling techniques, to manage the evolving needs of a large physics community. Interest from such diverse communities as bio-informatics and manufacturing has motivated the CRISTAL team to re-engineer the system to customize functionality according to end user requirements but maximize software reuse in the process. The next generation CRISTAL vision is to build a generic component architecture from which a complete software product line can be generated according to the particular needs of the target enterprise. This paper discusses the issues of adopting a component product line based approach and our experiences of software reuse.

  12. A component based approach to scientific workflow management

    International Nuclear Information System (INIS)

    Baker, N.; Brooks, P.; McClatchey, R.; Kovacs, Z.; LeGoff, J.-M.

    2001-01-01

    CRISTAL is a distributed scientific workflow system used in the manufacturing and production phases of HEP experiment construction at CERN. The CRISTAL project has studied the use of a description driven approach, using meta-modelling techniques, to manage the evolving needs of a large physics community. Interest from such diverse communities as bio-informatics and manufacturing has motivated the CRISTAL team to re-engineer the system to customize functionality according to end user requirements but maximize software reuse in the process. The next generation CRISTAL vision is to build a generic component architecture from which a complete software product line can be generated according to the particular needs of the target enterprise. This paper discusses the issues of adopting a component product line based approach and our experiences of software reuse

  13. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    International Nuclear Information System (INIS)

    Saleem, A; Ahmed, N; Salah, M; Silberschmidt, V V

    2013-01-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task, due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole-system model is assembled by aggregating the basic component models. System parameters are identified using a finite element technique, and the model is then used to simulate the system in MATLAB/Simulink. Various operating conditions are tested to demonstrate the system performance.

  14. A combined Component-Based Approach for the Design of Distributed Software Systems

    NARCIS (Netherlands)

    Guareis de farias, Cléver; Ferreira Pires, Luis; van Sinderen, Marten J.; Quartel, Dick; Yang, H.; Gupta, S.

    2001-01-01

    Component-based software development enables the construction of software artefacts by assembling binary units of production, distribution and deployment, the so-called components. Several approaches to component-based development have been proposed recently. Most of these approaches are based on

  15. Feedback loops and temporal misalignment in component-based hydrologic modeling

    Science.gov (United States)

    Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.

    2011-12-01

    In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
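
    A minimal sketch of the coupling pattern described above, assuming a crude Dirichlet exchange of interface concentrations and a simple linear interpolation for the misaligned steps (neither taken from the paper):

```python
import numpy as np

# Two diffusion components coupled across a water-sediment interface with
# misaligned time steps: the sediment component advances first over one
# coarse step, then the water component takes finer steps, linearly
# interpolating the sediment interface value in time, much as an
# OpenMI-style GetValues call bridges temporal misalignment.

D, dx = 1e-3, 0.1

def diffuse(c, dt):
    out = c.copy()
    out[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    return out

water = np.full(20, 1.0)          # concentration in the water column
sed = np.full(20, 0.0)            # concentration in the sediment column
dt_sed, dt_wat = 2.0, 0.5         # sediment runs 4x coarser than water

for macro in range(10):
    s_old = sed[0]                # sediment interface value at step start
    sed[0] = water[-1]            # boundary condition taken from water
    sed = diffuse(sed, dt_sed)    # one coarse sediment step
    s_new = sed[0]
    for k in range(int(dt_sed / dt_wat)):             # fine water steps
        frac = (k + 1) * dt_wat / dt_sed
        water[-1] = (1 - frac) * s_old + frac * s_new  # interpolated BC
        water = diffuse(water, dt_wat)

print(round(float(water[-1]), 4), round(float(sed[0]), 4))
```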

  16. An ontology for component-based models of water resource systems

    Science.gov (United States)

    Elag, Mostafa; Goodall, Jonathan L.

    2013-08-01

    Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
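
    As a rough illustration of what such an ontology looks like in practice, the following sketch builds a few OWL triples with Python's rdflib; the namespace and all class/property names are invented for the example and are not the published WRC terms:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical namespace standing in for the WRC ontology's real IRI.
WRC = Namespace("http://example.org/wrc#")
g = Graph()
g.bind("wrc", WRC)
g.bind("owl", OWL)

# Classes and a property: a model component simulates a hydrologic process.
g.add((WRC.ModelComponent, RDF.type, OWL.Class))
g.add((WRC.HydrologicProcess, RDF.type, OWL.Class))
g.add((WRC.simulatesProcess, RDF.type, OWL.ObjectProperty))
g.add((WRC.simulatesProcess, RDFS.domain, WRC.ModelComponent))
g.add((WRC.simulatesProcess, RDFS.range, WRC.HydrologicProcess))

# One annotated component, discoverable by other modeling frameworks.
g.add((WRC.Infiltration, RDF.type, WRC.HydrologicProcess))
g.add((WRC.GreenAmptComponent, RDF.type, WRC.ModelComponent))
g.add((WRC.GreenAmptComponent, WRC.simulatesProcess, WRC.Infiltration))
g.add((WRC.GreenAmptComponent, RDFS.label,
       Literal("Green-Ampt infiltration component")))

print(g.serialize(format="turtle"))
```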

  17. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae; Top, Søren

    2008-01-01

    , communication and constraints, using computational blocks and aggregates for both discrete and continuous behaviour, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands to more functionality, at even lower prices, and with opposite...... to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set...... of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behaviour, and the transformation of the software system into the S-functions. The general aim of this work is the improvement of multi-disciplinary development of embedded systems with the focus on the relation...

  18. Refinement and verification in component-based model-driven design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter

    2009-01-01

    Modern software development is complex as it has to deal with many different and yet related aspects of applications. In practical software engineering this is now handled by a UML-like modelling approach in which different aspects are modelled by different notations. Component-based and object-o...... be integrated in computer-aided software engineering (CASE) tools for adding formally supported checking, transformation and generation facilities......

  19. A component-based open hypermedia approach to integrating structure services

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Nürnberg, Peter J.; Bucka-Lassen, Dirk

    1999-01-01

    In this paper, we consider the issue of integrating different structure services within a component-based open hypermedia system. We do so by considering the task of collaborative editing, which calls for a variety of different structures traditionally supplied by different structure services. We...... discuss the nature of collaborative editing and how it can be supported by a combination of spatial and navigational hypermedia services. We then present a component-based open hypermedia system architecture and describe various methods of integrating different structure services provided within...... such an architecture. We show the advantages of integration within a component-based framework over other means of integration, highlighting some of the main advantages of the component-based approach to open hypermedia system design and implementation....

  20. PyCatch: Component based hydrological catchment modelling

    NARCIS (Netherlands)

    Lana-Renault, N.; Karssenberg, D.J.

    2013-01-01

    Dynamic numerical models are powerful tools for representing and studying environmental processes through time. Usually they are constructed with environmental modelling languages, which are high-level programming languages that operate at the level of thinking of the scientists. In this paper we

  1. Design and Application of an Ontology for Component-Based Modeling of Water Systems

    Science.gov (United States)

    Elag, M.; Goodall, J. L.

    2012-12-01

    Many Earth system modeling frameworks have adopted an approach of componentizing models so that a large model can be assembled by linking a set of smaller model components. These model components can then be more easily reused, extended, and maintained by a large group of model developers and end users. While there has been a notable increase in component-based model frameworks in the Earth sciences in recent years, there has been less work on creating framework-agnostic metadata and ontologies for model components. Well defined model component metadata is needed, however, to facilitate sharing, reuse, and interoperability both within and across Earth system modeling frameworks. To address this need, we have designed an ontology for the water resources community named the Water Resources Component (WRC) ontology in order to advance the application of component-based modeling frameworks across water related disciplines. Here we present the design of the WRC ontology and demonstrate its application for integration of model components used in watershed management. First we show how the watershed modeling system Soil and Water Assessment Tool (SWAT) can be decomposed into a set of hydrological and ecological components that adopt the Open Modeling Interface (OpenMI) standard. Then we show how the components can be used to estimate nitrogen losses from land to surface water for the Baltimore Ecosystem study area. Results of this work are (i) a demonstration of how the WRC ontology advances the conceptual integration between components of water related disciplines by handling the semantic and syntactic heterogeneity present when describing components from different disciplines and (ii) an investigation of a methodology by which large models can be decomposed into a set of model components that can be well described by populating metadata according to the WRC ontology.

  2. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former suffers from the problem that a huge count number is required in the blank scan data; the latter have therefore been proposed to obtain normalization coefficients with high statistical accuracy from a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the reconstructed images depends on the accuracy of the system model. Therefore, the normalization weighting approach, in which normalization coefficients are applied directly to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and calculated iteratively in such a way as to minimize system modeling errors. To compare the proposed method with the direct method, we applied both to our small OpenPET prototype system. We achieved acceptable statistical accuracy of the normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required by the direct method. (author)
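
    For intuition, the classic component-based normalization idea can be sketched as a factorization of per-line-of-response coefficients into per-crystal efficiency components fitted to blank-scan data. This toy differs from the paper's method, which corrects the system matrix with two iteratively estimated components:

```python
import numpy as np

# Toy component-based normalization: model blank-scan counts for crystal
# pair (i, j) as scale * eff[i] * eff[j] and estimate the components by
# alternating fixed-point updates. Illustrative only.

rng = np.random.default_rng(0)
n = 16
true_eff = 1.0 + 0.2 * rng.normal(size=n)          # unknown crystal efficiencies
expected = np.outer(true_eff, true_eff) * 1000.0   # ideal blank-scan counts
blank = rng.poisson(expected).astype(float)        # measured blank scan

eff = np.ones(n)
for _ in range(100):                               # alternating updates
    scale = blank.sum() / np.outer(eff, eff).sum() # global activity component
    eff = blank.sum(axis=1) / (scale * eff.sum())  # crystal-efficiency component

norm = scale * np.outer(eff, eff)                  # modeled normalization factors
# Compare recovered and true efficiency shapes (scale is arbitrary).
print(np.abs(eff / eff.mean() - true_eff / true_eff.mean()).max())
```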

  3. A Component-Based Modeling and Validation Method for PLC Systems

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2014-05-01

    Programmable logic controllers (PLCs) are complex embedded systems that are widely used in industry. This paper presents a component-based modeling and validation method for PLC systems using the behavior-interaction-priority (BIP) framework. We designed a general system architecture and a component library for a type of device control system. The control software and the hardware of the environment were all modeled as BIP components. System requirements were formalized as monitors. Simulation was carried out to validate the system model. A realistic example from industry, a gates control system, was employed to illustrate our strategies. We found a couple of design errors during the simulation, which helped us to improve the dependability of the original systems. The experimental results demonstrated the effectiveness of our approach.

  4. Hirabayashi, Satoshi; Kroll, Charles N.; Nowak, David J. 2011. Component-based development and sensitivity analyses of an air pollutant dry deposition model. Environmental Modelling & Software. 26(6): 804-816.

    Science.gov (United States)

    Satoshi Hirabayashi; Chuck Kroll; David Nowak

    2011-01-01

    The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...

  5. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.
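
    A zero-order Sugeno fuzzy inference step, the building block that ANFIS tunes by learning, can be sketched by hand; the membership functions and rule consequents below are illustrative guesses, not the paper's trained values:

```python
# Hand-rolled zero-order Sugeno fuzzy inference: inputs are component
# reliability and glue-code reliability in [0, 1]; the output is an
# estimated system reliability. In ANFIS, the memberships and rule
# consequents below would be learned from data instead of fixed.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mu_low(x):  return tri(x, -0.5, 0.0, 0.6)
def mu_high(x): return tri(x, 0.4, 1.0, 1.5)

def estimate_reliability(r_comp, r_glue):
    # Rule firing strength via product t-norm, constant consequents.
    rules = [
        (mu_low(r_comp) * mu_low(r_glue),   0.10),
        (mu_low(r_comp) * mu_high(r_glue),  0.40),
        (mu_high(r_comp) * mu_low(r_glue),  0.50),
        (mu_high(r_comp) * mu_high(r_glue), 0.95),
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(round(estimate_reliability(0.9, 0.8), 3))
```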

  6. A Component-Based Approach for Securing Indoor Home Care Applications.

    Science.gov (United States)

    Agirre, Aitor; Armentia, Aintzane; Estévez, Elisabet; Marcos, Marga

    2017-12-26

    eHealth systems have adopted recent advances in sensing technologies together with advances in information and communication technologies (ICT) in order to provide people-centered services that improve the quality of life of an increasingly elderly population. As these eHealth services are founded on the acquisition and processing of sensitive data (e.g., personal details, diagnosis, treatments and medical history), any security threat would damage the public's confidence in them. This paper proposes a solution for the design and runtime management of indoor eHealth applications with security requirements. The proposal allows the definition of applications customized to patient particularities, including the early detection of health deterioration and suitable reactions (events), as well as security needs. At runtime, security support is twofold: a secured component-based platform supervises application execution and provides event management, whilst the security of the communications among application components is also guaranteed. Additionally, the proposed event management scheme adopts the fog computing paradigm to enable local storage and processing of event-related data, thus saving communication bandwidth when communicating with the cloud. As a proof of concept, this proposal has been validated through the monitoring of the health status of diabetic patients at a nursing home.

  7. A Component-Based Approach for Securing Indoor Home Care Applications

    Science.gov (United States)

    Estévez, Elisabet

    2017-01-01

    eHealth systems have adopted recent advances in sensing technologies together with advances in information and communication technologies (ICT) in order to provide people-centered services that improve the quality of life of an increasingly elderly population. As these eHealth services are founded on the acquisition and processing of sensitive data (e.g., personal details, diagnosis, treatments and medical history), any security threat would damage the public's confidence in them. This paper proposes a solution for the design and runtime management of indoor eHealth applications with security requirements. The proposal allows the definition of applications customized to patient particularities, including the early detection of health deterioration and suitable reactions (events), as well as security needs. At runtime, security support is twofold: a secured component-based platform supervises application execution and provides event management, whilst the security of the communications among application components is also guaranteed. Additionally, the proposed event management scheme adopts the fog computing paradigm to enable local storage and processing of event-related data, thus saving communication bandwidth when communicating with the cloud. As a proof of concept, this proposal has been validated through the monitoring of the health status of diabetic patients at a nursing home. PMID:29278370

  8. A Component-Based Approach for Securing Indoor Home Care Applications

    Directory of Open Access Journals (Sweden)

    Aitor Agirre

    2017-12-01

    eHealth systems have adopted recent advances in sensing technologies together with advances in information and communication technologies (ICT) in order to provide people-centered services that improve the quality of life of an increasingly elderly population. As these eHealth services are founded on the acquisition and processing of sensitive data (e.g., personal details, diagnosis, treatments and medical history), any security threat would damage the public's confidence in them. This paper proposes a solution for the design and runtime management of indoor eHealth applications with security requirements. The proposal allows the definition of applications customized to patient particularities, including the early detection of health deterioration and suitable reactions (events), as well as security needs. At runtime, security support is twofold: a secured component-based platform supervises application execution and provides event management, whilst the security of the communications among application components is also guaranteed. Additionally, the proposed event management scheme adopts the fog computing paradigm to enable local storage and processing of event-related data, thus saving communication bandwidth when communicating with the cloud. As a proof of concept, this proposal has been validated through the monitoring of the health status of diabetic patients at a nursing home.

  9. Research on development model of nuclear component based on life cycle management

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2005-01-01

    At present, the development process of nuclear components, and even nuclear components themselves, are more and more supported by computer technology. This increasing utilization of computers and software has led to faster development of nuclear technology on the one hand, but has also brought new problems on the other. In particular, the combination of hardware, software and humans has increased nuclear component system complexities to an unprecedented level. To address this problem, Life Cycle Management technology is adopted for nuclear component systems, and an intensive discussion of the development process of a nuclear component is presented. According to the characteristics of nuclear component development, such as the complexity and strict safety requirements of nuclear components, long design periods, changeable design specifications and requirements, high capital investment, and the need to satisfy engineering codes/standards, a development life-cycle model of nuclear components is presented. The model is classified at three levels, namely the component-level development life-cycle, the sub-component development life-cycle and the component-level verification/certification life-cycle. The purposes and outcomes of the development processes are stated in detail. A process framework for nuclear components based on system engineering and a development environment for nuclear components are discussed as future research work. (authors)

  10. Component-based modeling of systems for automated fault tree generation

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2009-01-01

    One of the challenges in the field of automated fault tree construction is to find an efficient modeling approach that can support the modeling of different types of systems without ignoring any necessary details. In this paper, we present a new system modeling approach for computer-aided fault tree generation. In this method, every system model is composed of a number of components and the different types of flows propagating through them. Each component has a function table that describes its input-output relations; for components having different operational states, there is also a state transition table. Each component can communicate with other components in the system only through its inputs and outputs. A trace-back algorithm is proposed that can be applied to the system model to generate the required fault trees. The system modeling approach and the fault tree construction algorithm are applied to a fire sprinkler system and the results are presented.

  11. Principal components based support vector regression model for on-line instrument calibration monitoring in NPPs

    International Nuclear Information System (INIS)

    Seo, In Yong; Ha, Bok Nam; Lee, Sung Woo; Shin, Chang Hoon; Kim, Seong Jun

    2010-01-01

    In nuclear power plants (NPPs), periodic sensor calibrations are required to assure that sensors are operating correctly. Because the sensors' operating status is checked only at every fuel outage, faulty sensors may remain undetected for periods of up to 24 months. Moreover, typically only a few of the calibrated sensors are actually found to be faulty. For the safe operation of NPPs and the reduction of unnecessary calibrations, on-line instrument calibration monitoring is needed. In this study, principal component-based auto-associative support vector regression (PCSVR) using response surface methodology (RSM) is proposed for the sensor signal validation of NPPs. This paper describes the design of a PCSVR-based sensor validation system for a power generation system. RSM is employed to determine the optimal values of the SVR hyperparameters and is compared to the genetic algorithm (GA). The proposed PCSVR model is confirmed with actual plant data from Kori Nuclear Power Plant Unit 3 and is compared with auto-associative support vector regression (AASVR) and auto-associative neural network (AANN) models. The auto-sensitivity of AASVR is improved by around six times by using PCA, resulting in good detection of sensor drift. Compared to AANN, accuracy and cross-sensitivity are better, while the auto-sensitivity is almost the same. Meanwhile, the proposed RSM for the optimization of the PCSVR algorithm performs even better in terms of accuracy, auto-sensitivity, and averaged maximum error, except in averaged RMS error, and this method is much more time efficient compared to the conventional GA method.
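
    The auto-associative PCA-plus-SVR structure can be sketched with scikit-learn on synthetic data; hyperparameters here are arbitrary rather than RSM-optimized, and the drift injection is only meant to show how residuals flag a faulty sensor:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR

# Sketch of a PCSVR-style auto-associative sensor validation loop on
# synthetic data: reconstruct every sensor from the principal components
# of all sensors, then flag sensors with large reconstruction residuals.

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))                 # shared process dynamics
X = latent @ rng.normal(size=(2, 8)) + 0.01 * rng.normal(size=(500, 8))

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)

# One SVR per sensor: predict each signal from the principal components.
models = [SVR(C=10.0, epsilon=0.01).fit(scores, X[:, j]) for j in range(8)]

x_new = X[:5].copy()
x_new[:, 3] += 0.5                                 # inject drift in sensor 3
recon = np.column_stack(
    [m.predict(pca.transform(x_new)) for m in models])
residual = np.abs(x_new - recon).mean(axis=0)
print(residual)                                    # sensor 3 should stand out
```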

  12. Surface inspection system for industrial components based on shape from shading minimization approach

    Science.gov (United States)

    Kotan, Muhammed; Öz, Cemil

    2017-12-01

    An inspection system is proposed that uses estimated three-dimensional (3-D) surface information to detect and classify faults, improving quality control of frequently used industrial components. Shape from shading (SFS) is one of the basic and classic 3-D shape recovery problems in computer vision. In our application, we developed a system using the Frankot and Chellappa SFS method, which is based on the minimization of a selected basis function. First, a specialized image acquisition system captured images of the component. To eliminate noise, a wavelet transform was applied to the captured images. The estimated gradients were then used to obtain depth and surface profiles, and the depth information was used to determine and classify surface defects. A comparison with some linearization-based SFS algorithms is also discussed. The developed system was applied to real products, and the results indicated that SFS approaches are useful and that various types of defects can easily be detected in a short period of time.
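
    The Frankot and Chellappa step itself is compact: project the (possibly non-integrable) gradient estimates onto integrable Fourier basis functions and invert. A self-contained numpy sketch on a synthetic surface:

```python
import numpy as np

# Frankot-Chellappa integrability enforcement: recover a surface z(x, y)
# from gradient estimates (p, q) via Z(w) = (-j*wx*P - j*wy*Q)/(wx^2+wy^2)
# in the Fourier domain. The gradients here come from a synthetic bump.

n = 64
y, x = np.mgrid[0:n, 0:n]
z_true = np.exp(-((x - 32)**2 + (y - 32)**2) / 100.0)   # smooth test surface
grads = np.gradient(z_true)
p, q = grads[1], grads[0]                # p = dz/dx, q = dz/dy

wx = np.fft.fftfreq(n).reshape(1, n) * 2 * np.pi
wy = np.fft.fftfreq(n).reshape(n, 1) * 2 * np.pi
P, Q = np.fft.fft2(p), np.fft.fft2(q)

denom = wx**2 + wy**2
denom[0, 0] = 1.0                        # avoid division by zero at DC
Z = (-1j * wx * P - 1j * wy * Q) / denom
Z[0, 0] = 0.0                            # mean height is unrecoverable
z_rec = np.real(np.fft.ifft2(Z))

# Compare up to the arbitrary constant offset.
print(np.abs((z_rec - z_rec.mean()) - (z_true - z_true.mean())).max())
```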

  13. ROSMOD: A Toolsuite for Modeling, Generating, Deploying, and Managing Distributed Real-time Component-based Software using ROS

    Directory of Open Access Journals (Sweden)

    Pranav Srinivas Kumar

    2016-09-01

    This paper presents the Robot Operating System Model-driven development tool suite (ROSMOD), an integrated development environment for rapid prototyping of component-based software for the Robot Operating System (ROS) middleware. ROSMOD is well suited for the design, development and deployment of large-scale distributed applications on embedded devices. We present the various features of ROSMOD, including the modeling language, the graphical user interface, code generators, and deployment infrastructure. We demonstrate the utility of this tool with a real-world case study: an Autonomous Ground Support Equipment (AGSE) robot that was designed and prototyped using ROSMOD for the NASA Student Launch competition, 2014-2015.

  14. Multiple Scattering Principal Component-based Radiative Transfer Model (PCRTM) from Far IR to UV-Vis

    Science.gov (United States)

    Liu, X.; Wu, W.; Yang, Q.

    2017-12-01

    Modern hyperspectral satellite remote sensors such as AIRS, CrIS, IASI and CLARREO all require accurate and fast radiative transfer models that can deal with multiple scattering by clouds and aerosols in order to explore the information content. However, performing full radiative transfer calculations using multiple-stream methods such as discrete ordinates (DISORT), doubling and adding (AD) or successive orders of scattering (SOS) is very time consuming. We have developed a principal component-based radiative transfer model (PCRTM) that reduces the computational burden by orders of magnitude while maintaining high accuracy. By exploiting spectral correlations, the PCRTM reduces the number of radiative transfer calculations in the frequency domain. It further uses a hybrid-stream method to decrease the number of calls to the computationally expensive multiple scattering calculations with high stream numbers. Other fast parameterizations have been used in the infrared spectral region to reduce the computational time to milliseconds for an AIRS forward simulation (2378 spectral channels). The PCRTM has been developed to cover the spectral range from the far IR to UV-Vis. The model has been used for satellite data inversions, proxy data generation, inter-satellite calibrations, spectral fingerprinting, and climate OSSEs. We will show examples of applying the PCRTM to single field-of-view cloudy retrievals of atmospheric temperature, moisture, trace gases, clouds, and surface parameters. We will also show how the PCRTM is used for the NASA CLARREO project.
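
    The core PCRTM idea, compressing the spectral dimension so that full radiative transfer is only needed at a few monochromatic channels, can be sketched with synthetic spectra; real PCRTM trains on radiative transfer output rather than the random basis used here:

```python
import numpy as np

# Sketch of principal-component spectral compression: learn PCs from
# full-resolution training spectra, regress PC scores on a small subset of
# "monochromatic" channels, and reconstruct full spectra from that subset.

rng = np.random.default_rng(1)
n_train, n_chan, n_pc, n_mono = 400, 2378, 10, 40
basis = rng.normal(size=(n_pc, n_chan)).cumsum(axis=1)  # smooth-ish basis
spectra = rng.normal(size=(n_train, n_pc)) @ basis      # synthetic spectra

mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
pcs = Vt[:n_pc]                                   # principal components
scores = (spectra - mean) @ pcs.T

mono = rng.choice(n_chan, size=n_mono, replace=False)   # channel subset
A, *_ = np.linalg.lstsq(spectra[:, mono], scores, rcond=None)

test = rng.normal(size=(1, n_pc)) @ basis         # a "new" spectrum
rec = mean + (test[:, mono] @ A) @ pcs            # rebuilt from 40 channels
print(np.abs(rec - test).max())                   # small reconstruction error
```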

  15. Modeling Elevation and Aspect Controls on Emerging Ecohydrologic Processes and Ecosystem Patterns Using the Component-based Landlab Framework

    Science.gov (United States)

    Nudurupati, S. S.; Istanbulluoglu, E.; Adams, J. M.; Hobley, D. E. J.; Gasparini, N. M.; Tucker, G. E.; Hutton, E. W. H.

    2014-12-01

    Topography plays a commanding role in the organization of ecohydrologic processes and the resulting vegetation patterns. In the southwestern United States, climate conditions lead to terrain aspect- and elevation-controlled ecosystems, with mesic north-facing and xeric south-facing vegetation types, and changes in biodiversity as a function of elevation, from shrublands at low desert elevations, to mixed grass/shrublands at mid elevations, to forests at high elevations and ridge tops. These observed patterns have been attributed to differences in topography-mediated local soil moisture availability, micro-climatology, and the life history processes of plants that control the chances of plant establishment and survival. While ecohydrologic models represent local vegetation dynamics in sufficient detail down to sub-hourly time scales, plant life history and competition for space and resources have not been adequately represented in models. In this study we develop an ecohydrologic cellular automata model within the Landlab component-based modeling framework. This model couples local vegetation dynamics (biomass production, death) with plant establishment and competition for resources and space. The model is used to study vegetation organization in a semiarid New Mexico catchment where elevation and hillslope aspect play a defining role in plant types. Processes that lead to the observed plant types across the landscape are examined by initializing the domain with randomly assigned plant types and systematically changing model parameters that couple plant response with soil moisture dynamics. Climate perturbation experiments are conducted to examine plant response in space and time. Understanding these inherently transient ecohydrologic systems is critical to improving predictions of climate change impacts on ecosystems.

  16. Research of component-based software development approach in RFID field

    Institute of Scientific and Technical Information of China (English)

    徐孟娟; 杨威

    2012-01-01

    In this paper, a component-based software development approach is applied to the RFID field. Based on domain engineering analysis methods, the changing requirements within the RFID domain are encapsulated, isolated and abstracted; the RFID architecture is analyzed and an RFID software component model is extracted. For component management, classification methods for RFID components are studied and a faceted classification scheme is proposed, with a detailed description of the facets of RFID software component classification and the term space of each facet.
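
    A faceted component registry of the kind described can be sketched in a few lines; the facet names and terms below are illustrative, not those proposed in the paper:

```python
# Toy faceted classification for an RFID component library: each component
# is described by terms drawn from orthogonal facets, and retrieval matches
# on any combination of facet terms.

FACETS = {"function", "hardware", "protocol", "layer"}

registry = []

def register(name, **facets):
    unknown = set(facets) - FACETS
    if unknown:
        raise ValueError(f"unknown facets: {unknown}")
    registry.append({"name": name, **facets})

def query(**facets):
    """Return names of components matching every given facet term."""
    return [c["name"] for c in registry
            if all(c.get(k) == v for k, v in facets.items())]

register("TagReaderDriver", function="read", hardware="UHF reader",
         protocol="EPC Gen2", layer="device")
register("InventoryFilter", function="filter", protocol="EPC Gen2",
         layer="middleware")

print(query(protocol="EPC Gen2", layer="middleware"))
```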

  17. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    Science.gov (United States)

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support the design and deployment of integrations of new components, as well as for the analysis, verification, simulation and testing needed to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object systems (rCOS) modelling method, is implemented using the Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and cooperation amongst the heterogeneous components of the system, assuring by design the scalability, interoperability and correctness of component cooperation.

  18. Structural analysis and incipient failure detection of primary circuit components based on correlation-analysis and finite-element models

    International Nuclear Information System (INIS)

    Olma, B.J.

    1977-01-01

    A method is presented to compute vibrational power spectral densities (VPSDs) of primary circuit components based on a finite-element representation of the primary circuit. The method was first applied to the sodium-cooled reactor KNK, Karlsruhe; a further application is now being developed for a BWR nuclear power plant. The experimentally determined VPSDs can be considered as the output of a multiple input-output system. They are explained as the frequency response of a multidimensional mechanical system excited by stochastic and deterministic mechanical driving forces. The stochastic forces are generated by the dynamic pressure fluctuations of the fluid; the deterministic forces are caused by pressure fluctuations induced by the main coolant pumps or by standing waves. The excitation matrix can be obtained from measured pressure fluctuations. The vibration transfer function matrix can be computed from the mass, damping and stiffness matrices of a theoretical finite-element or mass-spring model. Based on this theory the computer code 'STAMPO' has been established. The program has been applied to the KNK reactor: the excitation matrix was created from measured jet-noise pressure fluctuations, and the mass, stiffness and damping matrices were extracted from a SAP-IV model of the primary system. The complete VPSD matrix was then computed sequentially for each frequency point. The diagonal elements of this matrix represent the vibrational auto-power spectral densities; the off-diagonal elements represent the vibrational cross-power spectral densities. The calculations give good agreement with measured VPSDs. The comparison shows that the measured jet-noise pressure fluctuations act nearly uncorrelated on the structure, whereas the output VPSDs are well correlated.
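
    The frequency-by-frequency computation reduces to the standard input-output relation for linear systems under stationary excitation, S_yy(w) = H(w) S_xx(w) H(w)^H. A sketch with placeholder matrices standing in for the finite-element transfer functions and measured pressure-fluctuation spectra:

```python
import numpy as np

# At each frequency point, the matrix of vibrational power spectral
# densities follows from S_yy = H @ S_xx @ H^H, with H the structural
# transfer matrix and S_xx the excitation cross-spectral matrix.

rng = np.random.default_rng(2)
n_dof, n_freq = 4, 3

for k in range(n_freq):                     # loop over frequency points
    H = rng.normal(size=(n_dof, n_dof)) + 1j * rng.normal(size=(n_dof, n_dof))
    F = rng.normal(size=(n_dof, n_dof)) + 1j * rng.normal(size=(n_dof, n_dof))
    S_xx = F @ F.conj().T                   # Hermitian, positive semidefinite
    S_yy = H @ S_xx @ H.conj().T
    # Diagonal: auto-PSDs; off-diagonal: cross-PSDs between DOFs.
    print(k, np.real(np.diag(S_yy)).round(2))
```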

  19. Supporting a Component-Based Software Engineering Approach for the Development of Takari Products and Future Command and Control Systems

    National Research Council Canada - National Science Library

    Vernik, R

    1997-01-01

    .... It focuses on concepts and approaches defined as part of the Takari programme and considers issues related to tool support for the development of Takari Capability and Technology Demonstrators (CTDs...

  20. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle-component-based factor analysis

    OpenAIRE

    C. A. Stroud; M. D. Moran; P. A. Makar; S. Gong; W. Gong; J. Zhang; J. G. Slowik; J. P. D. Abbatt; G. Lu; J. R. Brook; C. Mihele; Q. Li; D. Sills; K. B. Strawbridge; M. L. McGuire

    2012-01-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two...

  1. Spatially-Distributed Stream Flow and Nutrient Dynamics Simulations Using the Component-Based AgroEcoSystem-Watershed (AgES-W) Model

    Science.gov (United States)

    Ascough, J. C.; David, O.; Heathman, G. C.; Smith, D. R.; Green, T. R.; Krause, P.; Kipka, H.; Fink, M.

    2010-12-01

    The Object Modeling System 3 (OMS3), currently being developed by the USDA-ARS Agricultural Systems Research Unit and Colorado State University (Fort Collins, CO), provides a component-based environmental modeling framework which allows the implementation of single- or multi-process modules that can be developed and applied as custom-tailored model configurations. OMS3 as a “lightweight” modeling framework contains four primary foundations: modeling resources (e.g., components) annotated with modeling metadata; domain specific knowledge bases and ontologies; tools for calibration, sensitivity analysis, and model optimization; and methods for model integration and performance scalability. The core is able to manage modeling resources and development tools for model and simulation creation, execution, evaluation, and documentation. OMS3 is based on the Java platform but is highly interoperable with C, C++, and FORTRAN on all major operating systems and architectures. The ARS Conservation Effects Assessment Project (CEAP) Watershed Assessment Study (WAS) Project Plan provides detailed descriptions of ongoing research studies at 14 benchmark watersheds in the United States. In order to satisfy the requirements of CEAP WAS Objective 5 (“develop and verify regional watershed models that quantify environmental outcomes of conservation practices in major agricultural regions”), a new watershed model development approach was initiated to take advantage of OMS3 modeling framework capabilities. Specific objectives of this study were to: 1) disaggregate and refactor various agroecosystem models (e.g., J2K-S, SWAT, WEPP) and implement hydrological, N dynamics, and crop growth science components under OMS3, 2) assemble a new modular watershed scale model for fully-distributed transfer of water and N loading between land units and stream channels, and 3) evaluate the accuracy and applicability of the modular watershed model for estimating stream flow and N dynamics. The

  2. Computer-aided process planning in prismatic shape die components based on Standard for the Exchange of Product model data

    Directory of Open Access Journals (Sweden)

    Awais Ahmad Khan

    2015-11-01

    In the past few years, insufficient technologies made good integration between the die components in design, process planning, and manufacturing impossible. Nowadays, advanced technologies based on the Standard for the Exchange of Product model data are making it possible. This article discusses the three main steps for achieving complete process planning for prismatic parts of die components: data extraction, feature recognition, and process planning. The proposed computer-aided process planning system works as part of an integrated system to cover the process planning of any prismatic-part die component. The system is built using Visual Basic with the EWDraw system for visualizing the Standard for the Exchange of Product model data file. The system works successfully and can cover any type of sheet metal die component. The case study discussed in this article is taken from a large design of a progressive die.

  3. Identification and analysis of labor productivity components based on ACHIEVE model (case study: staff of Kermanshah University of Medical Sciences).

    Science.gov (United States)

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2014-12-15

    Identification and analysis of the components of labor productivity based on the ACHIEVE model was performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study of the university's 872 working personnel, from whom 270 employees in different administrative groups (contractual, fixed-term and regular) were selected through stratified random sampling based on the Krejcie and Morgan sampling table. The survey tool was the ACHIEVE labor productivity questionnaire, which was confirmed in terms of content and face validity and whose reliability was calculated using Cronbach's alpha coefficient. The data were analyzed in SPSS-18 using descriptive and inferential statistics. The mean scores for the labor productivity dimensions, namely environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding) and ability (knowledge and skills), and for total labor productivity, were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. The results indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status of university staff from the employees' viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the staff's view, were in a favorable condition and needed to be further taken into consideration by the authorities.
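
    The reliability statistic mentioned above is easy to reproduce; the following sketch computes Cronbach's alpha from its variance-ratio definition on synthetic questionnaire data:

```python
import numpy as np

# Cronbach's alpha for a k-item questionnaire:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# The responses below are synthetic stand-ins for survey data.

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)
trait = rng.normal(size=(100, 1))                     # shared latent trait
responses = trait + 0.5 * rng.normal(size=(100, 7))   # 7 correlated items
print(round(cronbach_alpha(responses), 3))            # high alpha expected
```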

  4. Identification and Analysis of Labor Productivity Components Based on ACHIEVE Model (Case Study: Staff of Kermanshah University of Medical Sciences)

    Science.gov (United States)

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2015-01-01

    Identification and analysis of the components of labor productivity based on the ACHIEVE model was performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study of the university's 872 working personnel, from whom 270 employees in different administrative groups (contractual, fixed-term and regular) were selected through stratified random sampling based on the Krejcie and Morgan sampling table. The survey tool was the ACHIEVE labor productivity questionnaire, which was confirmed in terms of content and face validity and whose reliability was calculated using Cronbach's alpha coefficient. The data were analyzed in SPSS-18 using descriptive and inferential statistics. The mean scores for the labor productivity dimensions, namely environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding) and ability (knowledge and skills), and for total labor productivity, were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. The results indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status of university staff from the employees' viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the staff's view, were in a favorable condition and needed to be further taken into consideration by the authorities. PMID:25560364

  5. Secure wireless embedded systems via component-based design

    DEFF Research Database (Denmark)

    Hjorth, T.; Torbensen, R.

    2010-01-01

    This paper introduces the secure-by-design method as a way of constructing wireless embedded systems using component-based modeling frameworks. This facilitates the design of secure applications through verified, reusable software. Following this method, we propose a security framework with a secure c......, with full support for confidentiality, authentication, and integrity using keypairs. The approach has been demonstrated in a multi-platform home automation prototype that can remotely unlock a door using a PDA over the Internet....

  6. The artifacts of component-based development

    International Nuclear Information System (INIS)

    Rizwan, M.; Qureshi, J.; Hayat, S.A.

    2007-01-01

    The component-based development idea was floated at a conference named Mass Produced Software Components in 1968 (1). Since then, engineering and scientific libraries have been developed to reuse previously developed functions. This concept is now widely used in software development as component-based development (CBD). Component-based software engineering (CBSE) is used to develop/assemble software from existing components (2). Software developed using components is called componentware (3). This paper presents different architectures of CBD such as ActiveX, the common object request broker architecture (CORBA), remote method invocation (RMI) and the simple object access protocol (SOAP). The overall objective of this paper is to support the practice of CBD by comparing its advantages and disadvantages. This paper also evaluates the object-oriented process model to adapt it for CBD. (author)

  7. Component-Based Cartoon Face Generation

    Directory of Open Access Journals (Sweden)

    Saman Sepehri Nejad

    2016-11-01

    Full Text Available In this paper, we present a cartoon face generation method that stands on a component-based facial feature extraction approach. Given a frontal face image as an input, our proposed system has the following stages. First, face features are extracted using an extended Active Shape Model. Outlines of the components are locally modified using edge detection, template matching and Hermite interpolation. This modification enhances the diversity of output and the accuracy of the component matching required for cartoon generation. Second, to bring in cartoon-specific features such as shadows, highlights and, especially, stylish drawing, an array of various face photographs and corresponding hand-drawn cartoon faces are collected. These cartoon templates are automatically decomposed into cartoon components using our proposed method for parameterizing cartoon samples, which is fast and simple. Then, using shape matching methods, the appropriate cartoon component is selected and deformed to fit the input face. Finally, a cartoon face is rendered in a vector format using the rendering rules of the selected template. Experimental results demonstrate the effectiveness of our approach in generating life-like cartoon faces.

  8. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle component-based factor analysis

    Directory of Open Access Journals (Sweden)

    C. A. Stroud

    2012-09-01

    Full Text Available Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. A novel diagnostic model evaluation was performed by investigating model POA bias as a function of HOA mass concentration and indicator ratios (e.g. BC/HOA). Eight case studies were selected based on factor analysis and back trajectories to help classify model bias for certain POA source types. By considering model POA bias in relation to co-located BC and CO biases, a plausible story is developed that explains the model biases for all three species.

    At the rural sites, daytime mean PM1 POA mass concentrations were under-predicted compared to observed HOA concentrations. POA under-predictions were accentuated when the transport arriving at the rural sites was from the Detroit/Windsor urban complex and for short-term periods of biomass burning influence. Interestingly, the daytime CO concentrations were only slightly under-predicted at both rural sites, whereas CO was over-predicted at the urban Windsor site with a normalized mean bias of 134%, while good agreement was observed at Windsor for the comparison of daytime PM1 POA and HOA mean values, 1.1 μg m−3 and 1.2 μg m−3, respectively. Biases in model POA predictions also trended from positive to negative with increasing HOA values. Periods of POA over-prediction were most evident at the urban site on calm nights due to an overly-stable model surface layer

  9. Microservices as an Evolutionary Architecture of Component-Based Development: A Think-aloud Study

    OpenAIRE

    Parizi, Reza M.

    2018-01-01

    Microservices have become a fast-growing and popular architectural style based on service-oriented development. One of the major advantages of using component-based approaches is support for reuse. In this paper, we present a study of microservices and how these systems relate to the traditional abstract models of component-based systems. This research focuses on the core properties of microservices including their scalability, availability and resilience, consistency, coupling and cohesion, and ...

  10. Structured Performance Analysis for Component Based Systems

    OpenAIRE

    Salmi, N.; Moreaux, Patrice; Ioualalen, M.

    2012-01-01

    International audience; The Component Based System (CBS) paradigm is now largely used to design software systems. In addition, performance and behavioural analysis remains a required step for the design and the construction of efficient systems. This is especially the case of CBS, which involve interconnected components running concurrent processes. This paper proposes a compositional method for modeling and structured performance analysis of CBS. Modeling is based on Stochastic Well-formed...

  11. Redundancy allocation problem of a system with increasing failure rates of components based on Weibull distribution: A simulation-based optimization approach

    International Nuclear Information System (INIS)

    Guilani, Pedram Pourkarim; Azimi, Parham; Niaki, S.T.A.; Niaki, Seyed Armin Akhavan

    2016-01-01

    The redundancy allocation problem (RAP) is a useful method to enhance system reliability. In most works involving RAP, failure rates of the system components are assumed to follow either exponential or k-Erlang distributions. In real world problems, however, many systems have components with increasing failure rates. This indicates that as time passes, the failure rates of the system components increase in comparison to their initial failure rates. In this paper, the redundancy allocation problem of a series–parallel system with components having an increasing failure rate based on the Weibull distribution is investigated. An optimization method via simulation is proposed for modeling, and a genetic algorithm is developed to solve the problem. - Highlights: • The redundancy allocation problem of a series–parallel system is addressed. • Components possess an increasing failure rate based on the Weibull distribution. • An optimization method via simulation is proposed for modeling. • A genetic algorithm is developed to solve the problem.
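
    The simulation step of such an approach can be illustrated with a minimal Monte Carlo sketch (the layout and Weibull parameters below are hypothetical, not taken from the paper): estimate the probability that a series-parallel system with Weibull-distributed component lifetimes survives past a mission time t.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical series-parallel layout: one list per subsystem, one
        # (shape, scale) Weibull pair per redundant component; shape > 1
        # gives the increasing failure rate considered in the paper.
        subsystems = [
            [(1.5, 1000.0), (1.5, 1000.0)],              # two parallel units
            [(2.0, 800.0), (2.0, 800.0), (2.0, 800.0)],  # three parallel units
        ]

        def system_reliability(t, n_samples=100_000):
            """Monte Carlo estimate of P(system survives past time t)."""
            alive = np.ones(n_samples, dtype=bool)
            for subsystem in subsystems:
                lifetimes = np.column_stack(
                    [scale * rng.weibull(shape, n_samples) for shape, scale in subsystem]
                )
                # A parallel subsystem fails with its last component; the
                # series system needs every subsystem to survive.
                alive &= lifetimes.max(axis=1) > t
            return alive.mean()

        print(system_reliability(t=500.0))

    A genetic algorithm, as in the paper, would then search over layouts using such a simulation-based reliability estimate as (part of) its fitness function.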

  12. Component-Based Software Engineering and Runtime Type Definition

    OpenAIRE

    A. R. Shakurov

    2011-01-01

    The component-based approach to software engineering, its current implementations and their limitations are discussed. A new extended architecture for such systems is presented. Its main architectural concepts and principles are considered.

  13. Component-Based Development of Runtime Observers in the COMDES Framework

    DEFF Research Database (Denmark)

    Guan, Wei; Li, Gang; Angelov, Christo K.

    2013-01-01

    Formal verification methods, such as exhaustive model checking, are often infeasible because of high computational complexity. Runtime observers (monitors) provide an alternative, light-weight verification method, which offers a non-exhaustive but still feasible approach to monitor system behavior against formally specified properties. This paper presents a component-based design method for runtime observers in the context of the COMDES framework—a component-based framework for distributed embedded systems and its supporting tools. Runtime verification is thereby facilitated by model...

  14. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the LOD score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust LOD score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  15. A component-based groupware development methodology

    NARCIS (Netherlands)

    Guareis de farias, Cléver; Ferreira Pires, Luis; van Sinderen, Marten J.

    2000-01-01

    Software development in general and groupware applications in particular can greatly benefit from the reusability and interoperability aspects associated with software components. Component-based software development enables the construction of software artefacts by assembling prefabricated,

  16. An Intuitionistic Fuzzy Methodology for Component-Based Software Reliability Optimization

    DEFF Research Database (Denmark)

    Madsen, Henrik; Grigore, Albeanu; Popenţiu-Vlǎdicescu, Florin

    2012-01-01

    Component-based software development is the current methodology facilitating agility in project management, software reuse in design and implementation, promoting quality and productivity, and increasing the reliability and performability. This paper illustrates the usage of intuitionistic fuzzy...... degree approach in modelling the quality of entities in imprecise software reliability computing in order to optimize management results. Intuitionistic fuzzy optimization algorithms are proposed to be used for complex software systems reliability optimization under various constraints....

  17. HEDR modeling approach

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1992-07-01

    This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and proposes an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered

  18. Probabilistic fatigue life prediction methodology for notched components based on simple smooth fatigue tests

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Z. R.; Li, Z. X. [Dept.of Engineering Mechanics, Jiangsu Key Laboratory of Engineering Mechanics, Southeast University, Nanjing (China); Hu, X. T.; Xin, P. P.; Song, Y. D. [State Key Laboratory of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing (China)

    2017-01-15

    The methodology of probabilistic fatigue life prediction for notched components based on smooth specimens is presented. Weakest-link theory incorporating the Walker strain model has been utilized in this approach. The effects of stress ratio and stress gradient have been considered. The Weibull distribution and the median rank estimator are used to describe fatigue statistics. Fatigue tests under different stress ratios were conducted on smooth and notched specimens of titanium alloy TC-1-1. The proposed procedures were checked against the test data of TC-1-1 notched specimens. Predictions at the 50% survival rate all fall within a factor-of-two scatter band of the test results.
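
    The weakest-link idea at the core of such a methodology can be sketched as follows (the Weibull parameters and volume ratio are hypothetical; the paper's full procedure additionally involves the Walker strain model and stress-gradient effects):

        import math

        m = 4.0        # hypothetical Weibull shape (scatter of fatigue life)
        n0 = 2.0e5     # hypothetical characteristic life (cycles), smooth specimen

        def survival_smooth(n):
            """Two-parameter Weibull survival probability of the smooth specimen."""
            return math.exp(-((n / n0) ** m))

        def survival_notched(n, volume_ratio):
            """Weakest-link scaling: a component whose highly stressed volume
            is volume_ratio times that of the smooth specimen behaves like
            that many independent links in series."""
            return survival_smooth(n) ** volume_ratio

        # Life at 50 % survival for a component with twice the stressed volume:
        n50 = n0 * (math.log(2.0) / 2.0) ** (1.0 / m)
        print(survival_notched(n50, volume_ratio=2.0))   # ~0.5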

  19. Uniframe: A Unified Framework for Developing Service-Oriented, Component-Based Distributed Software Systems

    National Research Council Canada - National Science Library

    Raje, Rajeev R; Olson, Andrew M; Bryant, Barrett R; Burt, Carol C; Auguston, Mikhail

    2005-01-01

    .... It describes how this approach employs a unifying framework for specifying such systems to unite the concepts of service-oriented architectures, a component-based software engineering methodology...

  20. Verifying Embedded Systems using Component-based Runtime Observers

    DEFF Research Database (Denmark)

    Guan, Wei; Marian, Nicolae; Angelov, Christo K.

    against formally specified properties. This paper presents a component-based design method for runtime observers, which are configured from instances of prefabricated reusable components—Predicate Evaluator (PE) and Temporal Evaluator (TE). The PE computes atomic propositions for the TE; the latter...... is a reconfigurable component processing a data structure, representing the state transition diagram of a non-deterministic state machine, i.e. a Büchi automaton derived from a system property specified in Linear Temporal Logic (LTL). Observer components have been implemented using design models and design patterns...
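
    A minimal sketch of the PE/TE split described above (the property, trace format, and names are illustrative, not the COMDES implementation): the Predicate Evaluator derives atomic propositions from raw samples, and the Temporal Evaluator advances a non-deterministic state machine over them, reporting a violation when no run can continue.

        def predicate_evaluator(sample):
            """Compute atomic propositions from one raw observation."""
            return {"req": sample["request"], "ack": sample["acknowledge"]}

        # Toy automaton for the safety fragment of "a request is acknowledged
        # within one step": "idle" = nothing pending, "wait" = request pending.
        def transitions(state, props):
            if state == "idle":
                return {"wait"} if props["req"] and not props["ack"] else {"idle"}
            if state == "wait":
                return {"idle"} if props["ack"] else set()   # deadline missed
            return set()

        def temporal_evaluator(trace):
            states = {"idle"}                     # set of possible automaton states
            for sample in trace:
                props = predicate_evaluator(sample)
                states = {s2 for s in states for s2 in transitions(s, props)}
                if not states:
                    return False                  # property violated on this trace
            return True

        trace = [{"request": True, "acknowledge": False},
                 {"request": False, "acknowledge": True}]
        print(temporal_evaluator(trace))          # True: acknowledged in time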

  1. The development of component-based information systems

    CERN Document Server

    Cesare, Sergio de; Macredie, Robert

    2015-01-01

    This work provides a comprehensive overview of research and practical issues relating to the component-based development of information systems (CBIS). Spanning the organizational, developmental, and technical aspects of the subject, the original research included here provides fresh insights into successful CBIS technology and application. Part I covers component-based development methodologies and system architectures. Part II analyzes different aspects of managing component-based development. Part III investigates component-based development versus commercial off-the-shelf products (COTS), includi

  2. Material Modelling - Composite Approach

    DEFF Research Database (Denmark)

    Nielsen, Lauge Fuglsang

    1997-01-01

    is successfully justified comparing predicted results with experimental data obtained in the HETEK project on creep, relaxation, and shrinkage of very young concretes cured at a temperature of T = 20 °C and a relative humidity of RH = 100%. The model is also justified comparing predicted creep, shrinkage......, and internal stresses caused by drying shrinkage with experimental results reported in the literature on the mechanical behavior of mature concretes. It is then concluded that the model presented applies in general with respect to age at loading. From a stress analysis point of view, the most important finding...... in this report is that cement paste and concrete behave practically as linear-viscoelastic materials from an age of approximately 10 hours. This is a significant age extension relative to earlier studies in the literature where linear-viscoelastic behavior is only demonstrated from ages of a few days. Thus...

  3. Structural health monitoring of power plant components based on a local temperature measurement concept

    International Nuclear Information System (INIS)

    Rudolph, Juergen; Bergholz, S.; Hilpert, R.; Jouan, B.; Goetz, A.

    2012-01-01

    The fatigue assessment of power plant components based on fatigue monitoring approaches is an essential part of the integrity concept and modern lifetime management. It is comparable to structural health monitoring approaches in other engineering fields. The methods of fatigue evaluation of nuclear power plant components based on realistic thermal load data measured on the plant are addressed. In this context, the Fast Fatigue Evaluation (FFE) and Detailed Fatigue Calculation (DFC) of nuclear power plant components are parts of the three-staged approach to lifetime assessment and lifetime management of the AREVA Fatigue Concept (AFC). The three stages, Simplified Fatigue Estimation (SFE), Fast Fatigue Evaluation (FFE) and Detailed Fatigue Calculation (DFC), are characterized by increasing calculation effort and decreasing degree of conservatism. Their application is case dependent. The quality of the fatigue lifetime assessment depends essentially on the fatigue model assumptions on the one hand and on the load data as the basic input on the other. In the case of nuclear power plant components, thermal transient loading is most fatigue relevant. Usual global fatigue monitoring approaches rely on measured data from plant instrumentation. As an extension, the application of a local fatigue monitoring strategy (described in detail within the scope of this paper) paves the way for delivering continuously (nowadays at a frequency of 1 Hz) realistic load data at the fatigue relevant locations. Methods for the qualified processing of these data are discussed in detail. In particular, the processing of arbitrary operational load sequences and the derivation of representative model transients are discussed. This approach, based on realistic load-time histories, is principally applicable to all fatigue relevant components and ensures a realistic fatigue evaluation. (orig.)

  4. A refinement driven component-based design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter

    2007-01-01

    the work on the Common Component Modelling Example (CoCoME). This gives evidence that the formal techniques developed in rCOS can be integrated into a model-driven development process and shows where it may be integrated in computer-aided software engineering (CASE) tools for adding formally supported...

  5. Component-Based Approach for Educating Students in Bioinformatics

    Science.gov (United States)

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  6. Component-based framework for subsurface simulations

    International Nuclear Information System (INIS)

    Palmer, B J; Fang, Yilin; Hammond, Glenn; Gurumoorthi, Vidhya

    2007-01-01

    Simulations in the subsurface environment represent a broad range of phenomena covering an equally broad range of scales. Developing modelling capabilities that can integrate models representing different phenomena acting at different scales presents formidable challenges, both from the algorithmic and the computer science perspective. This paper will describe the development of an integrated framework that will be used to combine different models into a single simulation. Initial work has focused on creating two frameworks, one for performing smoothed particle hydrodynamics (SPH) simulations of fluid systems, the other for performing grid-based continuum simulations of reactive subsurface flow. The SPH framework is based on a parallel code developed for doing pore scale simulations; the continuum grid-based framework is based on the STOMP (Subsurface Transport Over Multiple Phases) code developed at PNNL. Future work will focus on combining the frameworks to perform multiscale, multiphysics simulations of reactive subsurface flow

  7. The Component-Based Application for GAMESS

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Fang [Iowa State Univ., Ames, IA (United States)

    2007-01-01

    GAMESS, a quantum chemistry program for electronic structure calculations, has been freely shared by high-performance application scientists for over twenty years. It provides a rich set of functionalities and can be run on a variety of parallel platforms through a distributed data interface. While a chemistry computation is sophisticated and hard to develop, the resource sharing among different chemistry packages will accelerate the development of new computations and encourage the cooperation of scientists from universities and laboratories. The Common Component Architecture (CCA) offers an environment that allows scientific packages to dynamically interact with each other through components, which enables dynamic coupling of GAMESS with other chemistry packages, such as MPQC and NWChem. Conceptually, a computation can be constructed with "plug-and-play" components from scientific packages, and this requires more than componentizing functions/subroutines of interest, especially for large-scale scientific packages with a long development history. In this research, we present our efforts to construct components for GAMESS that conform to the CCA specification. The goal is to enable fine-grained interoperability between three quantum chemistry programs, GAMESS, MPQC and NWChem, via components. We focus on one of the three packages, GAMESS; we delineate the structure of GAMESS computations, followed by our approaches to its component development. Then we use GAMESS as the driver to interoperate integral components from the other two packages, and show the solutions for interoperability problems along with preliminary results. To justify the versatility of the design, the Tuning and Analysis Utility (TAU) components have been coupled with GAMESS and its components, so that the performance of GAMESS and its components may be analyzed for a wide range of system parameters.

  8. Evaporator modeling - A hybrid approach

    International Nuclear Information System (INIS)

    Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun

    2009-01-01

    In this paper, a hybrid modeling approach is proposed to model two-phase flow evaporators. The main procedure for hybrid modeling includes: (1) based on the energy and material balances and thermodynamic principles, formulate the fundamental governing equations of the process; (2) select input/output (I/O) variables responsible for the system performance which can be measured and controlled; (3) represent those variables existing in the original equations that are not measurable as simple functions of selected I/Os or constants; (4) obtain a single equation which can correlate system inputs and outputs; and (5) identify unknown parameters by linear or nonlinear least-squares methods. The method takes advantage of both physical and empirical modeling approaches and can accurately predict performance over a wide operating range and in real-time, which can significantly reduce the computational burden and increase the prediction accuracy. The model is verified with experimental data taken from a testing system. The testing results show that the proposed model can accurately predict the performance of the real-time operating evaporator with a maximum error of ±8%. The developed models will have wide applications in operational optimization, performance assessment, and fault detection and diagnosis
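
    Step (5), the least-squares identification of lumped parameters, can be sketched as follows (a toy single-equation model with synthetic data, not the paper's evaporator equations):

        import numpy as np
        from scipy.optimize import curve_fit

        # Toy hybrid model: physics suggests the I/O relation has the form
        # y = a * x1 + b * x2**c, with (a, b, c) lumping unmeasurable terms.
        def model(X, a, b, c):
            x1, x2 = X
            return a * x1 + b * x2**c

        rng = np.random.default_rng(1)
        x1 = rng.uniform(1.0, 10.0, 200)    # measured input 1 (e.g. flow rate)
        x2 = rng.uniform(1.0, 5.0, 200)     # measured input 2 (e.g. temp. difference)
        y = model((x1, x2), 2.0, 0.5, 1.3) + rng.normal(0.0, 0.05, 200)  # measured output

        params, _ = curve_fit(model, (x1, x2), y, p0=[1.0, 1.0, 1.0])
        print(params)   # recovered (a, b, c), close to (2.0, 0.5, 1.3)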

  9. Flexible optical network components based on densely integrated microring resonators

    NARCIS (Netherlands)

    Geuzebroek, D.H.

    2005-01-01

    This thesis addresses the design, realization and characterization of reconfigurable optical network components based on multiple microring resonators. Since thermally tunable microring resonators can be used as wavelength selective space switches, very compact devices with high complexity and

  10. HEDR modeling approach: Revision 1

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1994-05-01

    This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies

  11. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    Science.gov (United States)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits defining the behaviour of several components assembled to process a flow of data, using BIT. Test-cases are defined so that they are simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
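
    The virtual-component idea can be sketched in a few lines (the component functions and the VirtualComponent class are illustrative, not the authors' API): a virtual component groups a chain of data-flow components so that a built-in integration test-case exercises the assembled flow as if it were a single unit.

        def parse(raw):                       # component 1: raw string -> floats
            return [float(v) for v in raw.split(",")]

        def smooth(xs):                       # component 2: 2-point moving average
            return [(a + b) / 2 for a, b in zip(xs, xs[1:])]

        def threshold(xs, limit=5.0):         # component 3: flag exceedances
            return [x > limit for x in xs]

        class VirtualComponent:
            """Behaves like one component but drives a whole sub-flow."""
            def __init__(self, *stages):
                self.stages = stages

            def process(self, data):
                for stage in self.stages:
                    data = stage(data)
                return data

        # Built-in integration test-case for the assembled data flow:
        flow = VirtualComponent(parse, smooth, threshold)
        assert flow.process("4, 8, 2") == [True, False]   # averages 6.0 and 5.0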

  12. A Component-Based Study of the Effect of Diameter on Bond and Anchorage Characteristics of Blind-Bolted Connections.

    Directory of Open Access Journals (Sweden)

    Muhammad Nasir Amin

    Full Text Available Structural hollow sections are gaining worldwide importance due to their structural and architectural advantages over open steel sections. The only obstacle to their use is their connection with other structural members. The obstacle of tightening the bolt from one side has given birth to the concept of blind bolts. Blind bolts, being the practical solution to this connection hindrance, play a vital role in the use of hollow and concrete-filled hollow sections. Flowdrill, the Huck High Strength Blind Bolt and the Lindapter Hollobolt are the well-known commercially available blind bolts. Although the development of blind bolts has largely resolved this issue, the use of structural hollow sections remains limited to shear resistance. Therefore, a new modified version of the blind bolt, known as the "Extended Hollo-Bolt" (EHB) due to its enhanced capacity for bonding with concrete, can overcome the issue of low moment resistance capacity associated with blind-bolted connections. The load transfer mechanism of this recently developed blind bolt remains unclear, however. This study uses a parametric approach to characterising the EHB, using diameter as the variable parameter. Stiffness and load-carrying capacity were evaluated at two different bolt sizes. To investigate the load transfer mechanism, a component-based study of the bond and anchorage characteristics was performed by breaking down the EHB into its components. The results of the study provide insight into the load transfer mechanism of the blind bolt in question. The proposed component-based model was validated by a spring model, through which the stiffness of the EHB was compared to that of its components combined. The combined stiffness of the components was found to be roughly equivalent to that of the EHB as a whole, validating the use of this component-based approach.

  13. A Component-Based Study of the Effect of Diameter on Bond and Anchorage Characteristics of Blind-Bolted Connections.

    Science.gov (United States)

    Amin, Muhammad Nasir; Zaheer, Salman; Alazba, Abdulrahman Ali; Saleem, Muhammad Umair; Niazi, Muhammad Umar Khan; Khurram, Nauman; Amin, Muhammad Tahir

    2016-01-01

    Structural hollow sections are gaining worldwide importance due to their structural and architectural advantages over open steel sections. The only obstacle to their use is their connection with other structural members. The obstacle of tightening the bolt from one side has given birth to the concept of blind bolts. Blind bolts, being the practical solution to this connection hindrance, play a vital role in the use of hollow and concrete-filled hollow sections. Flowdrill, the Huck High Strength Blind Bolt and the Lindapter Hollobolt are the well-known commercially available blind bolts. Although the development of blind bolts has largely resolved this issue, the use of structural hollow sections remains limited to shear resistance. Therefore, a new modified version of the blind bolt, known as the "Extended Hollo-Bolt" (EHB) due to its enhanced capacity for bonding with concrete, can overcome the issue of low moment resistance capacity associated with blind-bolted connections. The load transfer mechanism of this recently developed blind bolt remains unclear, however. This study uses a parametric approach to characterising the EHB, using diameter as the variable parameter. Stiffness and load-carrying capacity were evaluated at two different bolt sizes. To investigate the load transfer mechanism, a component-based study of the bond and anchorage characteristics was performed by breaking down the EHB into its components. The results of the study provide insight into the load transfer mechanism of the blind bolt in question. The proposed component-based model was validated by a spring model, through which the stiffness of the EHB was compared to that of its components combined. The combined stiffness of the components was found to be roughly equivalent to that of the EHB as a whole, validating the use of this component-based approach.
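
    The spring-model validation described in these two records amounts to combining component stiffnesses along the load path; a minimal sketch with hypothetical stiffness values (not the paper's measurements):

        # Hypothetical component stiffnesses (kN/mm) along the load path of a
        # blind bolt; values are illustrative only.
        component_stiffness = {
            "bolt_shank": 180.0,
            "sleeve": 250.0,
            "concrete_anchorage": 90.0,
        }

        def series_stiffness(stiffnesses):
            """Springs in series share the load: 1/k_total = sum(1/k_i)."""
            return 1.0 / sum(1.0 / k for k in stiffnesses)

        k_combined = series_stiffness(component_stiffness.values())
        print(f"combined stiffness: {k_combined:.1f} kN/mm")
        # The component-based model is considered validated when this combined
        # value roughly matches the stiffness measured on the whole bolt.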

  14. Modified multiblock partial least squares path modeling algorithm with backpropagation neural networks approach

    Science.gov (United States)

    Yuniarto, Budi; Kurniawan, Robert

    2017-03-01

    PLS Path Modeling (PLS-PM) is different from covariance-based SEM in that PLS-PM uses an approach based on variance or components; therefore, PLS-PM is also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a method in PLS regression which can be used in PLS path modeling, where it is known as Multiblock PLS Path Modeling (MBPLS-PM). This method uses an iterative procedure in its algorithm. This research aims to modify MBPLS-PM with a backpropagation neural network approach. The result is that the MBPLS-PM algorithm can be modified using the backpropagation neural network approach to replace the iterative process in the backward and forward steps used to obtain the matrix t and the matrix u in the algorithm. With this modification, the model parameters obtained are not significantly different from the model parameters obtained by the original MBPLS-PM algorithm.

  15. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos; Chaudhuri, Siddhartha; Koller, Daphne; Koltun, Vladlen

    2012-01-01

    represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation

  16. Management of Globally Distributed Component-Based Software Development Projects

    NARCIS (Netherlands)

    J. Kotlarsky (Julia)

    2005-01-01

    Globally Distributed Component-Based Development (GD CBD) is expected to become a promising area, as increasing numbers of companies are setting up software development in a globally distributed environment and at the same time are adopting CBD methodologies. Being an emerging area, the

  17. Monitoring extensions for component-based distributed software

    NARCIS (Netherlands)

    Diakov, N.K.; Papir, Z.; van Sinderen, Marten J.; Quartel, Dick

    2000-01-01

    This paper defines a generic class of monitoring extensions to component-based distributed enterprise software. Introducing a monitoring extension to a legacy application system can be very costly. In this paper, we identify the minimum support for application monitoring within the generic

  18. Component-based development of software language engineering tools

    NARCIS (Netherlands)

    Ssanyu, J.; Hemerik, C.

    2011-01-01

    In this paper we outline how Software Language Engineering (SLE) could benefit from Component-based Software Development (CBSD) techniques and present an architecture aimed at developing a coherent set of lightweight SLE components, fitting into a general-purpose component framework. In order to

  19. Component-based development process and component lifecycle

    NARCIS (Netherlands)

    Crnkovic, I.; Chaudron, M.R.V.; Larsson, S.

    2006-01-01

    The process of component- and component-based system development differs in many significant ways from the "classical" development process of software systems. The main difference is in the separation of the development process of components from the development process of systems. This fact has a

  20. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given

  1. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01

    The spiral model was chosen for researching and structuring this thesis, as shown in Figure 1 (Spiral Model). This approach allowed multiple iterations of source material and applications, refining through iteration. Scope: the research is limited to a literature review, limited

  2. Learning Actions Models: Qualitative Approach

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite ident...

  3. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  4. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  5. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
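
    The biophysical accounting at the heart of this approach can be made concrete with a toy sketch (the EROI parameterisation below is hypothetical, not the paper's function): net energy delivered to society is gross production scaled by (1 - 1/EROI), with EROI shaped by technological learning and declining resource quality.

        import math

        def eroi(cumulative_production, q0=20.0, learning=0.1, depletion=0.05):
            """Toy dynamic EROI: learning raises it, resource depletion
            lowers it, both as functions of cumulative production."""
            return q0 * (1 + learning * math.log1p(cumulative_production)) \
                      * math.exp(-depletion * cumulative_production)

        cumulative = 0.0
        for year in range(5):
            gross = 10.0                    # gross energy produced this step (EJ)
            q = eroi(cumulative)
            net = gross * (1.0 - 1.0 / q)   # energy left after energy reinvestment
            cumulative += gross
            print(year, round(q, 2), round(net, 2))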

  6. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal...... of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we......

  7. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating......, and allows direct incorporation of high-level and qualitative plant knowledge into the model. These advantages have proven to be very appealing for industrial applications, and the practical, intuitively appealing nature of the framework is demonstrated in chapters describing applications of local methods...... to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning...

  8. Geometrical approach to fluid models

    International Nuclear Information System (INIS)

    Kuvshinov, B.N.; Schep, T.J.

    1997-01-01

    Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical notion of invariance is introduced in terms of Lie derivatives and a general procedure for the construction of local and integral fluid invariants is presented. The solutions of the equations for invariant fields can be written in terms of Lagrange variables. A generalization of the Hamiltonian formalism for finite-dimensional systems to continuous media is proposed. Analogously to finite-dimensional systems, Hamiltonian fluids are introduced as systems that annihilate an exact two-form. It is shown that Euler and ideal, charged fluids satisfy this local definition of a Hamiltonian structure. A new class of scalar invariants of Hamiltonian fluids is constructed that generalizes the invariants that are related with gauge transformations and with symmetries (Noether). copyright 1997 American Institute of Physics
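
    The notion of invariance used here can be stated compactly in standard Cartan-calculus form (a textbook formulation, not quoted from the paper): a field quantity represented by a differential form \alpha is a local invariant of the flow with velocity field v when

        \partial_t \alpha + \mathcal{L}_{v}\,\alpha = 0 ,

    and the transport theorem then makes the integral over any comoving domain C(t) a conserved quantity:

        \frac{d}{dt}\int_{C(t)} \alpha \;=\; \int_{C(t)} \left( \partial_t \alpha + \mathcal{L}_{v}\,\alpha \right) \;=\; 0 .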

  9. Context sensitivity and ambiguity in component-based systems design

    Energy Technology Data Exchange (ETDEWEB)

    Bespalko, S.J.; Sindt, A.

    1997-10-01

    Designers of component-based, real-time systems need to guarantee the correctness of software and its output. Complexity of a system, and thus the propensity for error, is best characterized by the number of states a component can encounter. In many cases, large numbers of states arise where the processing is highly dependent on context. In these cases, states are often missed, leading to errors. The following are proposals for compactly specifying system states which allow the factoring of complex components into a control module and a semantic processing module. Further, the need for methods that allow for the explicit representation of ambiguity and uncertainty in the design of components is discussed. Presented herein are examples of real-world problems which are highly context-sensitive or are inherently ambiguous.

  10. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Full Text Available Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics, we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called the Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
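
    The discrete end of the dynamic-model category can be illustrated with a minimal synchronous Boolean network (a toy three-gene circuit, not one of the yeast networks analysed in the paper):

        # Toy Boolean gene regulatory network: each gene's next state is a
        # logical function of the current states of its regulators.
        rules = {
            "A": lambda s: not s["C"],           # C represses A
            "B": lambda s: s["A"],               # A activates B
            "C": lambda s: s["A"] and s["B"],    # A and B jointly activate C
        }

        def step(state):
            """Synchronous update: all genes switch simultaneously."""
            return {gene: rule(state) for gene, rule in rules.items()}

        state = {"A": True, "B": False, "C": False}
        for t in range(6):
            print(t, state)
            state = step(state)   # the trajectory settles into an attractor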

  11. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  12. Service creation: a model-based approach

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed

  13. Models of galaxies - The modal approach

    International Nuclear Information System (INIS)

    Lin, C.C.; Lowe, S.A.

    1990-01-01

    The general viability of the modal approach to the spiral structure in normal spirals and the barlike structure in certain barred spirals is discussed. The usefulness of the modal approach in the construction of models of such galaxies is examined, emphasizing the adoption of a model appropriate to observational data for both the spiral structure of a galaxy and its basic mass distribution. 44 refs

  14. Multiscale approach to equilibrating model polymer melts

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Ali Karimi-Varzaneh, Hossein; Hojdis, Nils

    2016-01-01

    We present an effective and simple multiscale method for equilibrating Kremer Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube and finally at the monomeric scale. We make use of models designed...

  15. Application of various FLD modelling approaches

    Science.gov (United States)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.

  16. Risk Modelling for Passages in Approach Channel

    Directory of Open Access Journals (Sweden)

    Leszek Smolarek

    2013-01-01

    Full Text Available Methods of multivariate statistics, stochastic processes, and simulation are used to identify and assess the risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along the approach channel. These models, combined with simulation testing, are used to determine the time required for continuous monitoring of endangered objects or the period at which the level of risk should be verified.
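
    A minimal sketch of the Markov part (the states and transition probabilities are illustrative only): track the state distribution of a ship over successive passages, and compute the expected number of passages before an accident from the fundamental matrix.

        import numpy as np

        # Toy three-state Markov model of a ship in the approach channel.
        states = ["safe", "endangered", "accident"]
        P = np.array([
            [0.95, 0.04, 0.01],   # from safe
            [0.60, 0.30, 0.10],   # from endangered
            [0.00, 0.00, 1.00],   # accident is absorbing
        ])

        # State distribution after 20 passages, starting from "safe":
        dist = np.array([1.0, 0.0, 0.0])
        for _ in range(20):
            dist = dist @ P
        print(dict(zip(states, dist.round(3))))

        # Expected passages before absorption, via the fundamental matrix
        # N = (I - Q)^-1 of the transient sub-chain.
        Q = P[:2, :2]
        N = np.linalg.inv(np.eye(2) - Q)
        print("expected passages to accident from 'safe':", N.sum(axis=1)[0])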

  17. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing...... these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and to empirically demonstrate equifinal paths to maturity. Specifically...... methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research and provides demonstrations of its application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper

  18. Mathematical Modeling Approaches in Plant Metabolomics.

    Science.gov (United States)

    Fürtauer, Lisa; Weiszmann, Jakob; Weckwerth, Wolfram; Nägele, Thomas

    2018-01-01

    The experimental analysis of a plant metabolome typically results in a comprehensive and multidimensional data set. To interpret metabolomics data in the context of biochemical regulation and environmental fluctuation, various approaches of mathematical modeling have been developed and have proven useful. In this chapter, a general introduction to mathematical modeling is presented and discussed in context of plant metabolism. A particular focus is laid on the suitability of mathematical approaches to functionally integrate plant metabolomics data in a metabolic network and combine it with other biochemical or physiological parameters.

  19. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  20. Stochastic approaches to inflation model building

    International Nuclear Information System (INIS)

    Ramirez, Erandy; Liddle, Andrew R.

    2005-01-01

    While inflation gives an appealing explanation of observed cosmological data, there are a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow equations approach. The flow equations approach is known to predict a definite structure to the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered

  1. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  2. [Development and application of component-based Chinese medicine theory].

    Science.gov (United States)

    Zhang, Jun-Hua; Fan, Guan-Wei; Zhang, Han; Fan, Xiao-Hui; Wang, Yi; Liu, Li-Mei; Li, Chuan; Gao, Yue; Gao, Xiu-Mei; Zhang, Bo-Li

    2017-11-01

    Traditional Chinese medicine (TCM) prescriptions are the main therapy for disease prevention and treatment in Chinese medicine. Developing drugs by composing prescriptions of TCM materials and pieces under the guidance of TCM theory is the traditional application mode of TCM, and it is still widely used in the clinic. TCM prescriptions have theoretical advantages and rich clinical application experience in dealing with multi-factor complex diseases, but the scientific research base is relatively weak. The lack of scientific knowledge of the effective substances and mechanisms of Chinese medicine leads to insufficient understanding of the efficacy regularity, which affects the stability of effects and hinders the improvement of the quality of Chinese medicinal products. Component-based Chinese medicine (CCM) is an innovation based on inheritance: it breaks through the tradition of experience-based prescription and realizes the transformation of compatibility from herbal pieces to components, and it is an important achievement of the modernization of traditional Chinese medicine. Under the support of three national "973" projects, in order to reveal the scientific connotation of the prescription compatibility theory and develop innovative Chinese drugs, we have launched theoretical and technological innovation around the "two relatively clear" goals and opened up the research field of CCM. In the past more than 10 years, with the deepening of research and the expansion of application, the theory and methods of CCM and efficacy-oriented compatibility have been continuously improved. The value of CCM is not only in developing new drugs; more important is to build a

  3. A Conceptual Modeling Approach for OLAP Personalization

    Science.gov (United States)

    Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan

    Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large, and multidimensional structures become increasingly complex to understand at a glance. Even if a departmental data warehouse (also known as a data mart) is used, these structures would still be too complex. As a consequence, acquiring the required information is more costly than expected, and decision makers using OLAP tools may get frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision-making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level, based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker, contributing to better satisfying their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.

  4. Variational approach to chiral quark models

    Energy Technology Data Exchange (ETDEWEB)

    Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira

    1987-03-01

    A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. Whether the pion-quark interaction can be regarded as a perturbation is shown to depend crucially on the chiral symmetry breaking radius.

  5. A variational approach to chiral quark models

    International Nuclear Information System (INIS)

    Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira.

    1987-01-01

    A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. Whether the pion-quark interaction can be regarded as a perturbation is shown to depend crucially on the chiral symmetry breaking radius. (author)

  6. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration...

  7. A hybrid modeling approach for option pricing

    Science.gov (United States)

    Hajizadeh, Ehsan; Seifi, Abbas

    2011-11-01

    The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations; one of these is the controversial assumption that the underlying probability distribution is lognormal. We propose a pair of hybrid models to reduce these limitations and enhance the ability to price options. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. Then, we develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options on the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform it. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
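
    The hybrid pipeline can be sketched compactly. The toy below, assuming the `arch` and `scikit-learn` packages, estimates volatility with a GARCH(1,1) model and feeds it, together with moneyness and maturity, into a small neural network; the training targets are synthetic Black-Scholes prices purely so the example is self-contained, whereas the paper trains on observed S&P 500 call prices and also considers a neuro-fuzzy network.

    ```python
    import numpy as np
    from scipy.stats import norm
    from arch import arch_model
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    returns = rng.standard_normal(1000)  # synthetic daily returns, in percent

    # Step 1: GARCH(1,1) estimate of current volatility, annualized as a decimal.
    res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    sigma = float(res.conditional_volatility[-1]) * np.sqrt(252) / 100

    def bs_call(S, K, T, r, sig):
        # Black-Scholes call price, used only to manufacture training targets.
        d1 = (np.log(S / K) + (r + 0.5 * sig**2) * T) / (sig * np.sqrt(T))
        d2 = d1 - sig * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    # Step 2: train a network on (moneyness, maturity, volatility) -> call price.
    S0, rf = 100.0, 0.02
    K = rng.uniform(80, 120, 2000)
    T = rng.uniform(0.05, 1.0, 2000)
    X = np.column_stack([S0 / K, T, np.full_like(K, sigma)])
    y = bs_call(S0, K, T, rf, sigma)

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
    net.fit(X, y)
    print("NN vs BS at K=100, T=0.5:",
          net.predict([[1.0, 0.5, sigma]])[0], bs_call(S0, 100.0, 0.5, rf, sigma))
    ```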

  8. Robust and Effective Component-based Banknote Recognition by SURF Features.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, YingLi

    2011-01-01

    Camera-based computer vision technology is able to assist visually impaired people to automatically recognize banknotes. A good banknote recognition algorithm for blind or visually impaired people should have the following features: 1) 100% accuracy, and 2) robustness to various conditions in different environments and occlusions. Most existing banknote recognition algorithms only work under restricted conditions. In this paper, we propose a component-based framework for banknote recognition using Speeded Up Robust Features (SURF). The component-based framework is effective in collecting more class-specific information and robust in dealing with partial occlusion and viewpoint changes. Furthermore, the evaluation of SURF demonstrates its effectiveness in handling background noise, image rotation, scale, and illumination changes. To demonstrate the robustness and generalizability of the proposed approach, we have collected a large dataset of banknotes from a variety of conditions including occlusion, cluttered background, rotation, and changes of illumination, scaling, and viewpoints. The proposed algorithm achieves a 100% recognition rate on our challenging dataset.
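
    A minimal sketch of SURF-based matching between a reference banknote and a query image, assuming an OpenCV build with the patented SURF module enabled (opencv-contrib compiled with OPENCV_ENABLE_NONFREE); the file names are placeholders, and a full component-based recognizer would run this per component (portrait, numerals, seal, ...) and fuse the per-component scores.

    ```python
    import cv2

    ref = cv2.imread("reference_20_dollar.png", cv2.IMREAD_GRAYSCALE)
    query = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

    # SURF keypoint detection and description on both images.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(ref, None)
    kp2, des2 = surf.detectAndCompute(query, None)

    # Ratio-test matching (Lowe's criterion) keeps only distinctive matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    print(f"{len(good)} good SURF matches")
    ```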

  9. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...

  10. Nonperturbative approach to the attractive Hubbard model

    International Nuclear Information System (INIS)

    Allen, S.; Tremblay, A.-M. S.

    2001-01-01

    A nonperturbative approach to the single-band attractive Hubbard model is presented in the general context of functional-derivative approaches to many-body theories. As in previous work on the repulsive model, the first step is based on a local-field-type ansatz, on enforcement of the Pauli principle and a number of crucial sum rules. The Mermin-Wagner theorem in two dimensions is automatically satisfied. At this level, two-particle self-consistency has been achieved. In the second step of the approximation, an improved expression for the self-energy is obtained by using the results of the first step in an exact expression for the self-energy, where the high- and low-frequency behaviors appear separately. The result is a cooperon-like formula. The required vertex corrections are included in this self-energy expression, as required by the absence of a Migdal theorem for this problem. Other approaches to the attractive Hubbard model are critically compared. Physical consequences of the present approach and agreement with Monte Carlo simulations are demonstrated in the accompanying paper (following this one)

  11. Quasirelativistic quark model in quasipotential approach

    CERN Document Server

    Matveev, V A; Savrin, V I; Sissakian, A N

    2002-01-01

    The interaction of relativistic particles is described within the framework of the quasipotential approach. The presentation is based on the so-called covariant simultaneous formulation of quantum field theory, whereby the theory is considered on a space-like three-dimensional hypersurface in Minkowski space. Special attention is paid to methods of constructing various quasipotentials as well as to applications of the quasipotential approach to describing the characteristics of relativistic particle interactions in quark models, namely: elastic hadron scattering amplitudes, the mass spectra and decay widths of mesons, and the cross sections of deep inelastic lepton scattering on hadrons

  12. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)

    2015-04-15

    This paper presents a new multiscale molecular dynamic model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties ensuring that the result satisfies Newton’s second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein’s movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.

  13. A new approach for developing adjoint models

    Science.gov (United States)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
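
    The low-level abstraction the authors contrast with theirs, a model as a sequence of primitive differentiable instructions, can be illustrated with a toy reverse-mode tape in Python. All names here are invented for illustration; the authors' library applies the same record-and-replay idea at the coarser granularity of whole linear solves.

    ```python
    import math

    tape = []  # each entry: (output_var, [(input_var, local_derivative), ...])

    class Var:
        def __init__(self, value):
            self.value, self.grad = value, 0.0
        def __mul__(self, other):
            out = Var(self.value * other.value)
            tape.append((out, [(self, other.value), (other, self.value)]))
            return out
        def sin(self):
            out = Var(math.sin(self.value))
            tape.append((out, [(self, math.cos(self.value))]))
            return out

    def backward(output):
        # Adjoint sweep: replay the recorded operations in reverse order.
        output.grad = 1.0
        for out, parents in reversed(tape):
            for var, local in parents:
                var.grad += out.grad * local

    x, y = Var(2.0), Var(3.0)
    z = (x * y).sin()      # forward pass records the tape
    backward(z)            # reverse pass accumulates gradients
    print(x.grad, y.grad)  # d sin(xy)/dx = y*cos(xy), d sin(xy)/dy = x*cos(xy)
    ```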

  14. Eutrophication Modeling Using Variable Chlorophyll Approach

    International Nuclear Information System (INIS)

    Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.

    2016-01-01

    In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of this phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. During the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion over a time horizon of one year. Several verification tests, including a Nash-Sutcliffe coefficient close to 1 (0.98), a data correlation coefficient of 0.98, and low standard errors (0.96), indicated that the model performs efficiently. The results revealed significant differences in the concentrations of the state variables between the constant and variable stoichiometry simulations. Consequently, considering variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.
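
    A drastically reduced sketch of such a two-layer system, assuming Python with scipy: one phytoplankton and one nutrient pool in the epilimnion, one nutrient pool in the hypolimnion, with Monod uptake, recycling, sinking and vertical mixing. All rate constants are illustrative; the actual model couples 16 state variables.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def two_layer(t, s, mu=0.5, k=0.2, loss=0.1, sink=0.05, mix=0.02):
        P_epi, N_epi, N_hyp = s  # phytoplankton (epi), nutrient (epi), nutrient (hypo)
        growth = mu * N_epi / (k + N_epi) * P_epi          # Monod nutrient uptake
        dP = growth - loss * P_epi - sink * P_epi          # growth, mortality, sinking
        dNe = -growth + loss * P_epi + mix * (N_hyp - N_epi)  # recycling + mixing
        dNh = sink * P_epi - mix * (N_hyp - N_epi)         # sinking flux remineralized
        return [dP, dNe, dNh]

    sol = solve_ivp(two_layer, (0, 365), [0.1, 1.0, 2.0],
                    t_eval=np.linspace(0, 365, 366))
    print("final epilimnion phytoplankton:", sol.y[0, -1])
    ```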

  15. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  16. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in the sciences. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. A new approach to estimating model errors based on EM is thereby proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in fact, it realizes a combination of statistics and dynamics to a certain extent.
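
    A minimal twin-experiment sketch in the spirit of the paper, assuming Python with scipy: the classic Lorenz (1963) system is the imperfect prediction model, and a version with a periodically perturbed parameter plays the role of reality that generates the "observational data". The form of the periodic perturbation is illustrative only.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    def lorenz_reality(t, s):
        # "Reality": same equations with a periodic evolutionary term in rho.
        return lorenz(t, s, rho=28.0 + 2.0 * np.sin(0.1 * t))

    s0, t_eval = [1.0, 1.0, 1.0], np.linspace(0, 20, 2001)
    obs = solve_ivp(lorenz_reality, (0, 20), s0, t_eval=t_eval).y  # observations
    fcst = solve_ivp(lorenz, (0, 20), s0, t_eval=t_eval).y         # model forecast

    model_error = obs - fcst  # the signal an evolutionary-modeling step would fit
    print("RMS model error:", np.sqrt((model_error ** 2).mean()))
    ```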

  17. MODELS OF TECHNOLOGY ADOPTION: AN INTEGRATIVE APPROACH

    Directory of Open Access Journals (Sweden)

    Andrei OGREZEANU

    2015-06-01

    Full Text Available The interdisciplinary study of information technology adoption has developed rapidly over the last 30 years. Various theoretical models have been developed and applied, such as the Technology Acceptance Model (TAM), Innovation Diffusion Theory (IDT), and the Theory of Planned Behavior (TPB). The result of these many years of research is thousands of contributions to the field, which, however, remain highly fragmented. This paper develops a theoretical model of technology adoption by integrating major theories in the field: primarily IDT, TAM, and TPB. To do so while avoiding conceptual clutter, an approach that goes back to basics in the development of independent variable types is proposed, emphasizing: (1) the logic of classification, and (2) the psychological mechanisms behind variable types. Once developed, these types are populated with variables originating in empirical research. Conclusions are drawn on which types are underpopulated and present potential for future research. I end with a set of methodological recommendations for future application of the model.

  18. Interfacial Fluid Mechanics A Mathematical Modeling Approach

    CERN Document Server

    Ajaev, Vladimir S

    2012-01-01

    Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: discusses mathematical models in the context of actual applications such as electrowetting; includes unique material on fluid flow near structured surfaces and phase change phenomena; and shows readers how to solve modeling problems related to microscale multiphase flows. Interfacial Fluid Me...

  19. A new modelling approach for zooplankton behaviour

    Science.gov (United States)

    Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.

    We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models—random walk and correlated walk models—as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.

  20. Continuum modeling an approach through practical examples

    CERN Document Server

    Muntean, Adrian

    2015-01-01

    This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced by nonstandard rheological topics like those typically arising in the food industry.

  1. COMDES-II: A Component-Based Framework for Generative Development of Distributed Real-Time Control Systems

    DEFF Research Database (Denmark)

    Ke, Xu; Sierszecki, Krzysztof; Angelov, Christo K.

    2007-01-01

    The paper presents a generative development methodology and component models of COMDES-II, a component-based software framework for distributed embedded control systems with real-time constraints. The adopted methodology allows for rapid modeling and validation of control software at a higher level of abstraction. The paper outlines the development methodology for COMDES-II from a general perspective, describes the component models in detail and demonstrates their application through a DC-Motor control system case study.

  2. Global Environmental Change: An integrated modelling approach

    International Nuclear Information System (INIS)

    Den Elzen, M.

    1993-01-01

    Two major global environmental problems are dealt with: climate change and stratospheric ozone depletion (and their mutual interactions), briefly surveyed in Part 1. In Part 2 a brief description of the integrated modelling framework IMAGE 1.6 is given. Some specific parts of the model are described in more detail in other Chapters, e.g. the carbon cycle model, the atmospheric chemistry model, the halocarbon model, and the UV-B impact model. In Part 3 an uncertainty analysis of climate change and stratospheric ozone depletion is presented (Chapter 4). Chapter 5 briefly reviews the social and economic uncertainties implied by future greenhouse gas emissions. Chapters 6 and 7 describe a model and sensitivity analysis pertaining to the scientific uncertainties and/or lacunae in the sources and sinks of methane and carbon dioxide, and their biogeochemical feedback processes. Chapter 8 presents an uncertainty and sensitivity analysis of the carbon cycle model, the halocarbon model, and the IMAGE model 1.6 as a whole. Part 4 presents the risk assessment methodology as applied to the problems of climate change and stratospheric ozone depletion more specifically. In Chapter 10, this methodology is used as a means with which to assess current ozone policy and a wide range of halocarbon policies. Chapter 11 presents and evaluates the simulated globally-averaged temperature and sea level rise (indicators) for the IPCC-1990 and 1992 scenarios, concluding with a Low Risk scenario, which would meet the climate targets. Chapter 12 discusses the impact of sea level rise on the frequency of the Dutch coastal defence system (indicator) for the IPCC-1990 scenarios. Chapter 13 presents projections of mortality rates due to stratospheric ozone depletion based on model simulations employing the UV-B chain model for a number of halocarbon policies. Chapter 14 presents an approach for allocating future emissions of CO2 among regions. (Abstract Truncated)

  3. A flexible framework for sparse simultaneous component based data integration

    Directory of Open Access Journals (Sweden)

    Van Deun Katrijn

    2011-11-01

    Full Text Available Background: High throughput data are complex and methods that reveal structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays often the challenge is to reveal structure in several sources of information (e.g., transcriptomics, proteomics) that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because contributions of each of the biomolecules (transcripts, proteins) have to be taken into account. Results: We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches such as the lasso, the ridge penalty, the elastic net, the group lasso, sparse group lasso, and elitist lasso. In addition, the algorithmic results can be easily transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of different penalties with respect to sparseness across and within data blocks. Conclusion: Sparse simultaneous component analysis is a useful method for data integration: First, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses and second, interpretation of the results is highly facilitated by their sparseness. The approach offered is flexible and allows the block structure to be taken into account in different ways. As such

  4. A flexible framework for sparse simultaneous component based data integration.

    Science.gov (United States)

    Van Deun, Katrijn; Wilderjans, Tom F; van den Berg, Robert A; Antoniadis, Anestis; Van Mechelen, Iven

    2011-11-15

    High throughput data are complex and methods that reveal structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays often the challenge is to reveal structure in several sources of information (e.g., transcriptomics, proteomics) that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because contributions of each of the biomolecules (transcripts, proteins) have to be taken into account. We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches such as the lasso, the ridge penalty, the elastic net, the group lasso, sparse group lasso, and elitist lasso. In addition, the algorithmic results can be easily transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of different penalties with respect to sparseness across and within data blocks. Sparse simultaneous component analysis is a useful method for data integration: First, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses and second, interpretation of the results is highly facilitated by their sparseness. The approach offered is flexible and allows the block structure to be taken into account in different ways. As such, structures can be found that are exclusively tied to one data platform
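
    The core idea can be sketched in a few lines of Python/numpy: two blocks measured on the same samples are concatenated, and a rank-one simultaneous component is estimated by alternating updates with a lasso-style soft-thresholding of the loadings, which shrinks many of them exactly to zero. The data, penalty value and rank-one restriction are all illustrative; the actual method supports the full family of penalties listed above.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X1 = rng.standard_normal((50, 30))   # block 1, e.g. transcriptomics
    X2 = rng.standard_normal((50, 20))   # block 2, e.g. metabolomics
    X = np.hstack([X1, X2])              # same samples, concatenated variables
    lam = 1.5                            # lasso penalty strength (illustrative)

    t = X[:, 0] / np.linalg.norm(X[:, 0])    # initial unit-norm score vector
    p = np.zeros(X.shape[1])
    for _ in range(200):                     # alternating updates
        # Loading update: least squares followed by soft-thresholding (lasso).
        p = X.T @ t
        p = np.sign(p) * np.maximum(np.abs(p) - lam, 0.0)
        if not p.any():
            break                            # penalty too strong: all loadings zero
        t = X @ p
        t /= np.linalg.norm(t)               # keep scores unit-norm

    print(f"nonzero loadings: {np.count_nonzero(p)} of {p.size}")
    ```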

  5. Crime Modeling using Spatial Regression Approach

    Science.gov (United States)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Acts of criminality in Indonesia increase in both variety and quantity every year: murder, rape, assault, vandalism, theft, fraud, fencing, and other cases make people feel unsafe. The risk of society being exposed to crime is measured by the number of cases reported to the police; the higher the number of reports, the higher the level of crime in the region. This research models criminality in South Sulawesi, Indonesia, with society's exposure to the risk of crime as the dependent variable. Modelling is carried out with an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor people, GDP per capita, unemployment and the human development index (HDI). The spatial regression analysis shows that there are no spatial dependencies, in either lag or error form, in South Sulawesi.
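
    A self-contained illustration of the usual first diagnostic behind such a conclusion, assuming Python with numpy: fit OLS, then test the residuals for spatial dependence with Moran's I. The contiguity matrix and covariates are made up; real work would use actual district geometries, e.g. via PySAL.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 24                                   # e.g., 24 regencies/cities
    X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])  # covariates
    y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.standard_normal(n)

    # Row-standardized ring contiguity matrix as a stand-in for real neighbors.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit
    e = y - X @ beta                               # residuals

    I = (n / W.sum()) * (e @ W @ e) / (e @ e)      # Moran's I statistic
    E_I = -1.0 / (n - 1)                           # expectation under no dependence
    print(f"Moran's I = {I:.3f} (expected {E_I:.3f} under independence)")
    ```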

  6. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The input data were sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and many measurements are difficult or costly to obtain; the problem of lacking data can then be solved by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results show that the model successfully improved the quality of the DSMs, improving characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
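
    Numerically, the heart of the merging step can be sketched as Bayesian fusion of Gaussian height estimates, where the posterior is the precision-weighted combination of the input DSMs. The sketch below (Python/numpy) uses illustrative per-sensor variances; the paper's prior additionally encodes roof smoothness via local entropy.

    ```python
    import numpy as np

    dsm_a = np.array([[101.2, 102.0], [100.8, 101.5]])  # e.g., WorldView-1 derived
    dsm_b = np.array([[101.6, 101.8], [101.0, 101.9]])  # e.g., Pleiades derived
    var_a, var_b = 0.36, 0.25                           # per-DSM height variances (m^2)

    # Bayesian fusion of two Gaussian estimates: precision-weighted mean.
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    merged = (w_a * dsm_a + w_b * dsm_b) / (w_a + w_b)
    merged_var = 1.0 / (w_a + w_b)                      # posterior variance

    print(merged, f"posterior sigma = {np.sqrt(merged_var):.2f} m")
    ```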

  7. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    Full Text Available In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The input data were sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and many measurements are difficult or costly to obtain; the problem of lacking data can then be solved by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results show that the model successfully improved the quality of the DSMs, improving characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.

  8. Analysis of safety culture components based on site interviews

    International Nuclear Information System (INIS)

    Ueno, Akira; Nagano, Yuko; Matsuura, Shojiro

    2002-01-01

    The safety culture of an organization is influenced by many factors, such as employee morale, the safety policy of top management and the questioning attitude among site staff. First, this paper analyzes key factors of safety culture on the basis of site interviews. Then it presents a safety culture composite model and its applicability in various contexts. (author)

  9. Component-based analysis of embedded control applications

    DEFF Research Database (Denmark)

    Angelov, Christo K.; Guan, Wei; Marian, Nicolae

    2011-01-01

    ...instances of reusable, executable components—function blocks (FBs). System actors operate in accordance with a timed multitasking model of computation, whereby I/O signals are exchanged with the controlled plant at precisely specified time instants, resulting in the elimination of I/O jitter. The paper...

  10. A nationwide modelling approach to decommissioning - 16182

    International Nuclear Information System (INIS)

    Kelly, Bernard; Lowe, Andy; Mort, Paul

    2009-01-01

    In this paper we describe a proposed UK national approach to modelling decommissioning. For the first time, we shall have an insight into optimizing the safety and efficiency of a national decommissioning strategy. To do this we use the General Case Integrated Waste Algorithm (GIA), a universal model of decommissioning nuclear plant, power plant, waste arisings and the associated knowledge capture. The model scales from individual items of plant through cells, groups of cells, buildings, whole sites and then on up to a national scale. The national vision for GIA can be broken down into three levels: 1) the capture of the chronological order of activities that an experienced decommissioner would use to decommission any nuclear facility anywhere in the world (Level 1 of GIA); 2) the construction of an Operational Research (OR) model based on Level 1 to allow rapid 'what if' scenarios to be tested quickly (Level 2); 3) the construction of a state-of-the-art knowledge-capture capability that allows future generations to learn from our current decommissioning experience (Level 3). We show the progress to date in developing GIA at Levels 1 and 2. As part of Level 1, GIA has assisted in the development of an IMechE professional decommissioning qualification. Furthermore, we describe GIA as the basis of a UK-owned database of decommissioning norms for such things as costs, productivity and durations. From Level 2, we report on a pilot study that has successfully tested the basic principles for the OR numerical simulation of the algorithm. We then highlight the advantages of applying the OR modelling approach nationally. In essence, a series of 'what if...' scenarios can be tested that will improve the safety and efficiency of decommissioning. (authors)

  11. Modeling in transport phenomena a conceptual approach

    CERN Document Server

    Tosun, Ismail

    2007-01-01

    Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented; students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to

  12. Nuclear physics for applications. A model approach

    International Nuclear Information System (INIS)

    Prussin, S.G.

    2007-01-01

    Written by a researcher and teacher with experience at top institutes in the US and Europe, this textbook provides advanced undergraduates minoring in physics with working knowledge of the principles of nuclear physics. Simplifying models and approaches reveal the essence of the principles involved, with the mathematical and quantum mechanical background integrated in the text where it is needed and not relegated to the appendices. The practicality of the book is enhanced by numerous end-of-chapter problems and solutions available on the Wiley homepage. (orig.)

  13. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    Full Text Available The paper deals with some current problems of modeling the dynamics of the development of the subject-features of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logic and regularity of the development of the process; discreteness (stageability) indicates qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage and an algorithm for developing an integrative model for it are offered. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and for realistic prediction of pedagogical phenomena.

  14. A novel approach to pipeline tensioner modeling

    Energy Technology Data Exchange (ETDEWEB)

    O'Grady, Robert; Ilie, Daniel; Lane, Michael [MCS Software Division, Galway (Ireland)]

    2009-07-01

    As subsea pipeline developments continue to move into deep and ultra-deep water locations, there is an increasing need for the accurate prediction of expected pipeline fatigue life. A significant factor that must be considered as part of this process is the fatigue damage sustained by the pipeline during installation. The magnitude of this installation-related damage is governed by a number of different agents, one of which is the dynamic behavior of the tensioner systems during pipe-laying operations. There are a variety of traditional finite element methods for representing dynamic tensioner behavior. These existing methods, while basic in nature, have been proven to provide adequate forecasts in terms of the dynamic variation in typical installation parameters such as top tension and sagbend/overbend strain. However, due to the simplicity of these current approaches, some of them tend to over-estimate the frequency of tensioner pay out/in under dynamic loading. This excessive level of pay out/in motion results in the prediction of additional stress cycles at certain roller beds, which in turn leads to the prediction of unrealistic fatigue damage to the pipeline. This unwarranted fatigue damage then equates to an over-conservative value for the accumulated damage experienced by a pipeline weld during installation, and so leads to a reduction in the estimated fatigue life for the pipeline. This paper describes a novel approach to tensioner modeling which allows for greater control over the velocity of dynamic tensioner pay out/in and so provides a more accurate estimation of fatigue damage experienced by the pipeline during installation. The paper reports on a case study, as outlined in the following section, in which a comparison is made between results from this new tensioner model and from a more conventional approach. The comparison considers typical installation parameters as well as an in-depth look at the predicted fatigue damage for the two methods

  15. Aquarius' Object-Oriented, Plug and Play Component-Based Flight Software

    Science.gov (United States)

    Murray, Alexander; Shahabuddin, Mohammad

    2013-01-01

    The Aquarius mission involves a combined radiometer and radar instrument in low-Earth orbit, providing monthly global maps of Sea Surface Salinity. Operating successfully in orbit since June, 2011, the spacecraft bus was furnished by the Argentine space agency, Comision Nacional de Actividades Espaciales (CONAE). The instrument, built jointly by NASA's Caltech/JPL and Goddard Space Flight Center, has been successfully producing expectation-exceeding data since it was powered on in August of 2011. In addition to the radiometer and scatterometer, the instrument contains a command & data-handling subsystem with a computer and flight software (FSW) that is responsible for managing the instrument, its operation, and its data. Aquarius' FSW is conceived and architected as a Component-based system, in which the running software consists of a set of Components, each playing a distinctive role in the subsystem, instantiated and connected together at runtime. Component architectures feature a well-defined set of interfaces between the Components, visible and analyzable at the architectural level (see [1]). As we will describe, this kind of an architecture offers significant advantages over more traditional FSW architectures, which often feature a monolithic runtime structure. Component-based software is enabled by Object-Oriented (OO) techniques and languages, the use of which again is not typical in space mission FSW. We will argue in this paper that the use of OO design methods and tools (especially the Unified Modeling Language), as well as the judicious usage of C++, are very well suited to FSW applications, and we will present Aquarius FSW, describing our methods, processes, and design, as a successful case in point.
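
    The runtime-wired component pattern described here can be illustrated with a toy sketch; the names are invented and this is not the Aquarius FSW design itself, merely the shape of a port-based component architecture in which instantiation and connection happen at assembly time.

    ```python
    class Port:
        """A one-way output port a component can emit data through."""
        def __init__(self):
            self._sink = None
        def connect(self, callback):
            self._sink = callback
        def emit(self, data):
            if self._sink:
                self._sink(data)

    class Radiometer:
        def __init__(self):
            self.out = Port()                 # output port: raw samples
        def sample(self):
            self.out.emit({"brightness_K": 113.7})

    class Telemetry:
        def ingest(self, packet):             # input port: packets to downlink
            print("downlink:", packet)

    # Assembly at runtime: instantiate components, then wire ports together.
    radiometer, telemetry = Radiometer(), Telemetry()
    radiometer.out.connect(telemetry.ingest)
    radiometer.sample()
    ```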

  16. Approaches and models of intercultural education

    Directory of Open Access Journals (Sweden)

    Iván Manuel Sánchez Fontalvo

    2013-10-01

    Full Text Available Being aware of the need to build an intercultural society, awareness must be assumed in all social spheres, among which education plays a leading role. A role of transcendental importance, since education must promote spaces to form people with the virtues and capacities that allow them to live together in multicultural and socially diverse (sometimes unequal) contexts in an increasingly globalized and interconnected world, and foster the development of feelings of shared civic belonging to neighbourhood, city, region and country, allowing them concern and critical judgement towards marginalization, poverty, misery and the inequitable distribution of wealth, the causes of structural violence, but at the same time a willingness to work for the welfare and transformation of these scenarios. On these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.

  17. Transport modeling: An artificial immune system approach

    Directory of Open Access Journals (Sweden)

    Teodorović Dušan

    2006-01-01

    Full Text Available This paper describes an artificial immune system (AIS) approach to modeling time-dependent (dynamic, real-time) transportation phenomena characterized by uncertainty. The basic idea behind this research is to develop an Artificial Immune System that generates a set of antibodies (decisions, control actions) which together can successfully cover a wide range of potential situations. The proposed artificial immune system develops antibodies (the best control strategies) for different antigens (different traffic "scenarios"). This task is performed using optimization or heuristic techniques. A set of antibodies is then combined to create the Artificial Immune System. The developed Artificial Immune transportation systems are able to generalize, adapt, and learn based on new knowledge and new information. Applications of the systems are considered for airline yield management, stochastic vehicle routing, and real-time traffic control at an isolated intersection. The preliminary research results are very promising.

  18. System approach to modeling of industrial technologies

    Science.gov (United States)

    Toropov, V. S.; Toropov, E. S.

    2018-03-01

    The authors present a system of methods for modeling and improving industrial technologies. The system consists of an information part and a software part. The information part is structured information about industrial technologies, organized according to a template with several essential categories used to improve the technological process and eliminate weaknesses in the process chain. The base category is the physical effect that takes place when the technological process runs. The software part of the system can apply various methods of creative search to the content stored in the information part, paying particular attention to energy transformations in the technological process. Applying the system will allow the approach to improving technologies to be systematized and new technical solutions to be obtained.

  19. A UML profile for code generation of component based distributed systems

    International Nuclear Information System (INIS)

    Chiozzi, G.; Karban, R.; Andolfato, L.; Tejeda, A.

    2012-01-01

    A consistent and unambiguous implementation of code generation (model-to-text transformation) from UML must rely on a well-defined UML (Unified Modelling Language) profile, customizing UML for a particular application domain. Such a profile must have a solid foundation in a formally correct ontology, formalizing the concepts and their relations in the specific domain, in order to avoid a maze of wildly created stereotypes. The paper describes a generic profile for the code generation of component-based distributed systems for control applications, the process used to distill the ontology and define the profile, and the strategy followed to implement the code generator. The main steps, which take place iteratively, include: defining the terms and relations with an ontology, mapping the ontology to the appropriate UML meta-classes, testing the profile by creating modelling examples, and generating the code. This has allowed us to work on the modelling of the E-ELT (European Extremely Large Telescope) control system and instrumentation without knowing which infrastructure will finally be used.

  20. A component-based system for agricultural drought monitoring by remote sensing.

    Science.gov (United States)

    Dong, Heng; Li, Jun; Yuan, Yanbin; You, Lin; Chen, Chao

    2017-01-01

    In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, drought-related software and platform development lags behind the theoretical research. Current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on a multi-dimensional feature space. In view of the above problems, this paper aims to fill this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on the Component Object Model (COM) to ensure the flexibility and extendibility of its modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support for agricultural drought monitoring.

  1. A component-based system for agricultural drought monitoring by remote sensing.

    Directory of Open Access Journals (Sweden)

    Heng Dong

    Full Text Available In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, drought-related software and platform development lags behind the theoretical research. Current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on a multi-dimensional feature space. In view of the above problems, this paper aims to fill this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on the Component Object Model (COM) to ensure the flexibility and extendibility of its modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support to agricultural drought monitoring.

  2. ECOMOD - An ecological approach to radioecological modelling

    International Nuclear Information System (INIS)

    Sazykina, Tatiana G.

    2000-01-01

    A unified methodology is proposed to simulate the dynamic processes of radionuclide migration in aquatic food chains in parallel with their stable analogue elements. The distinguishing feature of the unified radioecological/ecological approach is the description of radionuclide migration along with dynamic equations for the ecosystem. The ability of the methodology to predict the results of radioecological experiments is demonstrated by an example of radionuclide (iron group) accumulation by a laboratory culture of the algae Platymonas viridis. Based on the unified methodology, the 'ECOMOD' radioecological model was developed to simulate dynamic radioecological processes in aquatic ecosystems. It comprises three basic modules, which are operated as a set of inter-related programs. The 'ECOSYSTEM' module solves non-linear ecological equations, describing the biomass dynamics of essential ecosystem components. The 'RADIONUCLIDE DISTRIBUTION' module calculates the radionuclide distribution in abiotic and biotic components of the aquatic ecosystem. The 'DOSE ASSESSMENT' module calculates doses to aquatic biota and doses to man from aquatic food chains. The application of the ECOMOD model to reconstruct the radionuclide distribution in the Chernobyl Cooling Pond ecosystem in the early period after the accident shows good agreement with observations

  3. Modelling Approach In Islamic Architectural Designs

    Directory of Open Access Journals (Sweden)

    Suhaimi Salleh

    2014-06-01

    Full Text Available Architectural design is one of the main factors that should be considered in minimizing negative impacts in the planning and structural development of buildings such as mosques. In this paper, the ergonomics perspective is revisited, focusing on conditional factors involving the organisational, psychological, social and population dimensions as a whole. The paper highlights the functional and architectural integration with aesthetic elements in the form of decorative and ornamental outlay, as well as their incorporation into building structures such as walls, domes and gates. It further focuses on the mathematical aspects of the architectural designs, such as polar equations and the golden ratio. The designs are modelled into mathematical equations of various forms, while the golden ratio in the mosque is verified using two techniques, namely geometric construction and a numerical method. The exemplary designs are taken from the Sabah Bandaraya Mosque in Likas, Kota Kinabalu and the Sarawak State Mosque in Kuching, while the Universiti Malaysia Sabah Mosque is used for the golden ratio. Results show that Islamic architectural buildings and designs have long had mathematical concepts and techniques underlying their foundation; hence, a modelling approach is needed to rejuvenate these Islamic designs.
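
    A small numerical companion to the golden-ratio verification, assuming Python: the ratio of consecutive Fibonacci terms converges to phi = (1 + sqrt(5))/2, and a measured length ratio (the dimensions below are hypothetical) can be checked against phi in the same way the paper checks mosque proportions.

    ```python
    import math

    phi = (1 + math.sqrt(5)) / 2
    a, b = 1, 1
    for _ in range(25):          # iterate the Fibonacci recurrence
        a, b = b, a + b
    print(b / a, phi)            # the two agree to many decimal places

    # Measured facade dimensions (hypothetical numbers) are checked the same
    # way: the ratio of total width to the larger segment approximates phi.
    width, larger = 34.0, 21.0
    print(abs(width / larger - phi) < 0.02)
    ```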

  4. An integrated approach to permeability modeling using micro-models

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)]

    2008-10-15

    An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modeling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modeling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modeling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results, with close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of the McMurray oil sands, and the model predictions were in good agreement with the experimental data. 8 refs., 17 figs.

  5. Risk communication: a mental models approach

    National Research Council Canada - National Science Library

    Morgan, M. Granger (Millett Granger)

    2002-01-01

    ... information about risks. The procedure uses approaches from risk and decision analysis to identify the most relevant information; it also uses approaches from psychology and communication theory to ensure that its message is understood. This book is written in nontechnical terms, designed to make the approach feasible for anyone willing to try it. It is illustrat...

  6. An open, component-based information infrastructure for integrated health information networks.

    Science.gov (United States)

    Tsiknakis, Manolis; Katehakis, Dimitrios G; Orphanoudakis, Stelios C

    2002-12-18

    A fundamental requirement for achieving continuity of care is the seamless sharing of multimedia clinical information. Different technological approaches can be adopted for enabling the communication and sharing of health record segments. In the context of the emerging global information society, the creation of and access to the integrated electronic health record (I-EHR) of a citizen has been assigned high priority in many countries. This requirement is complementary to an overall requirement for the creation of a health information infrastructure (HII) to support the provision of a variety of health telematics and e-health services. In developing a regional or national HII, the components or building blocks that make up the overall information system ought to be defined and an appropriate component architecture specified. This paper discusses current international priorities and trends in developing the HII. It presents technological challenges and alternative approaches towards the creation of an I-EHR, being the aggregation of health data created during all interactions of an individual with the healthcare system. It also presents results from an ongoing Research and Development (R&D) effort towards the implementation of the HII in HYGEIAnet, the regional health information network of Crete, Greece, using a component-based software engineering approach. Critical design decisions and related trade-offs, involved in the process of component specification and development, are also discussed and the current state of development of an I-EHR service is presented. Finally, Human Computer Interaction (HCI) and security issues, which are important for the deployment and use of any I-EHR service, are considered.

  7. EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation

    Directory of Open Access Journals (Sweden)

    Suwicha Jirayucharoensak

    2014-01-01

    Full Text Available Automatic emotion recognition is one of the most challenging tasks. To detect emotion from nonstationary EEG signals, a sophisticated learning algorithm that can represent high-level abstraction is required. This study proposes the utilization of a deep learning network (DLN to discover unknown feature correlation between input signals that is crucial for the learning task. The DLN is implemented with a stacked autoencoder (SAE using a hierarchical feature learning approach. Input features of the network are power spectral densities of 32-channel EEG signals from 32 subjects. To alleviate the overfitting problem, principal component analysis (PCA is applied to extract the most important components of the initial input features. Furthermore, covariate shift adaptation of the principal components is implemented to minimize the nonstationary effect of EEG signals. Experimental results show that the DLN is capable of classifying three different levels of valence and arousal with accuracy of 49.52% and 46.03%, respectively. Principal component based covariate shift adaptation enhances the respective classification accuracy by 5.55% and 6.53%. Moreover, DLN provides better performance compared to SVM and naive Bayes classifiers.
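
    A minimal sketch of the PCA-plus-covariate-shift idea described above, assuming synthetic stand-ins for the PSD features and a simple per-session re-standardization as the adaptation rule (the paper's actual adaptation procedure may differ):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-ins for PSD features (samples x channel-band features);
# the test session is deliberately shifted to mimic nonstationary EEG.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 160))
X_test = rng.normal(loc=0.3, scale=1.2, size=(100, 160))

# Keep the leading principal components to reduce overfitting.
pca = PCA(n_components=50).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# Naive covariate shift adaptation: re-standardize each principal
# component of the test session to the training session's statistics.
mu, sd = Z_train.mean(axis=0), Z_train.std(axis=0)
Z_test_adapted = (Z_test - Z_test.mean(axis=0)) / Z_test.std(axis=0) * sd + mu
```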

  8. A Multi-Model Approach for System Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur

    2007-01-01

    A multi-model approach for system diagnosis is presented in this paper. The relation with fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one fits best. It is based on an active approach, i.e. an auxiliary input to the system is applied. The multi-model approach is applied on a wind turbine system.

  9. A service component-based accounting and charging architecture to support interim mechanisms across multiple domains

    NARCIS (Netherlands)

    Le, M. van; Beijnum, B.J.F. van; Huitema, G.B.

    2004-01-01

    Today, telematics services are often compositions of different chargeable service components offered by different service providers. To enhance component-based accounting and charging, the service composition information is used to match with the corresponding charging structure of a service

  10. Covariance versus component based approaches for Green Supply Chain Management and Performance

    NARCIS (Netherlands)

    De Giovanni, P.; Esposito Vinzi, V.

    2012-01-01

    In this paper, we investigate the relationships between environmental management (EM) and performance to verify: whether the implementation of an effective internal environmental is a firm's precondition to belong to a green supply chain; which type of environmental practices (either internal or

  11. Spirituality and optimism: a holistic approach to component-based, self-management treatment for HIV.

    Science.gov (United States)

    Brown, Jordan; Hanson, Jan E; Schmotzer, Brian; Webel, Allison R

    2014-10-01

    For people living with HIV (PLWH), spirituality and optimism have a positive influence on their health, can slow HIV disease progression, and can improve quality of life. Our aim was to describe longitudinal changes in spirituality and optimism after participation in the SystemCHANGE™-HIV intervention. Upon completion of the intervention, participants experienced an 11.5 point increase in overall spiritual well-being (p = 0.036), a 6.3 point increase in religious well-being (p = 0.030), a 4.8 point increase in existential well-being (p = 0.125), and a 0.8 point increase in total optimism (p = 0.268) relative to controls. Our data suggest a group-based self-management intervention increases spiritual well-being in PLWH.

  12. Development of software for computing forming information using a component based approach

    Directory of Open Access Journals (Sweden)

    Kwang Hee Ko

    2009-12-01

    Full Text Available In the shipbuilding industry, manufacturing technology has advanced at an unprecedented pace over the last decade. As a result, many automatic systems for cutting, welding, etc. have been developed and employed in the manufacturing process, and accordingly productivity has increased drastically. Despite such improvement in manufacturing technology, however, the development of an automatic system for fabricating a curved hull plate remains at an early stage, since the hardware and software for automating the curved hull fabrication process must be developed differently depending on the dimensions of the plates, the forming methods and the manufacturing processes of each shipyard. To deal with this problem, it is necessary to create a "plug-in" framework which can adopt various kinds of hardware and software to construct a fully automatic fabrication system. In this paper, a framework for the automatic fabrication of curved hull plates is proposed, which consists of four components and related software. In particular, the software module for computing fabrication information is developed using the ooCBD development methodology, which can interface with other hardware and software with minimum effort. Examples of the proposed framework applied to medium and large shipyards are presented.

  13. Internal flood analysis at Fermi 2 using a component-based frequency calculation approach

    International Nuclear Information System (INIS)

    Lin, J.C.; Hou, Y.M.; Ramirez, J.V.; Page, E.M.

    2004-01-01

    An analysis to identify potential accident sequences involving internal floods at Fermi Unit 2 was completed to fulfill the individual plant examination requirements. Floods can be significant core damage scenarios if they cause an initiating event and a common mode failure of critical systems. (author)

  14. Leveraging Existing Mission Tools in a Re-Usable, Component-Based Software Environment

    Science.gov (United States)

    Greene, Kevin; Grenander, Sven; Kurien, James; O'Reilly, Taifun

    2006-01-01

    Emerging methods in component-based software development offer significant advantages but may seem incompatible with existing mission operations applications. In this paper we relate our positive experiences integrating existing mission applications into component-based tools we are delivering to three missions. In most operations environments, a number of software applications have been integrated together to form the mission operations software. In contrast, with component-based software development, chunks of related functionality and data structures, referred to as components, can be individually delivered, integrated and re-used. With the advent of powerful tools for managing component-based development, complex software systems can potentially see significant benefits in ease of integration, testability and reusability from these techniques. These benefits motivate us to ask how component-based development techniques can be relevant in a mission operations environment, where there is significant investment in software tools that are not component-based and may not be written in languages for which component-based tools even exist. Trusted and complex software tools for sequencing, validation, navigation, and other vital functions cannot simply be re-written or abandoned in order to gain the advantages offered by emerging component-based software techniques. Thus some middle ground must be found. We have faced exactly this issue, and have found several solutions. Ensemble is an open platform for development, integration, and deployment of mission operations software that we are developing. Ensemble itself is an extension of an open source, component-based software development platform called Eclipse. Due to the advantages of component-based development, we have been able to very rapidly develop mission operations tools for three surface missions by mixing and matching from a common set of mission operation components. We have also had to determine how to

  15. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    textabstractThis thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic

  16. Time-invariant component-based normalization for a simultaneous PET-MR scanner.

    Science.gov (United States)

    Belzunce, M A; Reader, A J

    2016-05-07

    Component-based normalization is a method used to compensate for the sensitivity of each of the lines of response acquired in positron emission tomography. This method consists of modelling the sensitivity of each line of response as a product of multiple factors, which can be classified as time-invariant, time-variant and acquisition-dependent components. Typical time-variant factors are the intrinsic crystal efficiencies, which are needed to be updated by a regular normalization scan. Failure to do so would in principle generate artifacts in the reconstructed images due to the use of out of date time-variant factors. For this reason, an assessment of the variability and the impact of the crystal efficiencies in the reconstructed images is important to determine the frequency needed for the normalization scans, as well as to estimate the error obtained when an inappropriate normalization is used. Furthermore, if the fluctuations of these components are low enough, they could be neglected and nearly artifact-free reconstructions become achievable without performing a regular normalization scan. In this work, we analyse the impact of the time-variant factors in the component-based normalization used in the Biograph mMR scanner, but the work is applicable to other PET scanners. These factors are the intrinsic crystal efficiencies and the axial factors. For the latter, we propose a new method to obtain fixed axial factors that was validated with simulated data. Regarding the crystal efficiencies, we assessed their fluctuations during a period of 230 d and we found that they had good stability and low dispersion. We studied the impact of not including the intrinsic crystal efficiencies in the normalization when reconstructing simulated and real data. Based on this assessment and using the fixed axial factors, we propose the use of a time-invariant normalization that is able to achieve comparable results to the standard, daily updated, normalization factors used in this
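
    The product-of-factors structure described above can be sketched as follows; the component set is reduced to crystal efficiencies and a geometric factor, and all values are synthetic stand-ins rather than Biograph mMR data:

```python
import numpy as np

# Synthetic stand-ins: 64 crystals, near-unity efficiencies with the low
# dispersion the paper reports, and a flat geometric component.
n = 64
rng = np.random.default_rng(1)
crystal_eff = rng.normal(1.0, 0.02, n)   # time-variant component
geom = np.ones((n, n))                   # time-invariant component

# Sensitivity of each line of response (i, j) as a product of factors.
norm = crystal_eff[:, None] * crystal_eff[None, :] * geom

# A time-invariant normalization simply freezes the efficiencies at 1;
# the maximum relative error bounds the artifact this introduces.
norm_fixed = geom
print(np.max(np.abs(norm - norm_fixed) / norm))
```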

  17. A Discrete Monetary Economic Growth Model with the MIU Approach

    Directory of Open Access Journals (Sweden)

    Wei-Bin Zhang

    2008-01-01

    Full Text Available This paper proposes an alternative approach to economic growth with money. The production side is the same as the Solow model, the Ramsey model, and the Tobin model. But we deal with behavior of consumers differently from the traditional approaches. The model is influenced by the money-in-the-utility (MIU approach in monetary economics. It provides a mechanism of endogenous saving which the Solow model lacks and avoids the assumption of adding up utility over a period of time upon which the Ramsey approach is based.
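
    For orientation, the generic building blocks of an MIU specification are sketched below; this is an illustrative form only, since the paper's discrete single-period treatment of consumer behavior deliberately avoids the Ramsey-style summation of utility over time:

```latex
% Illustrative MIU ingredients (generic, not the paper's exact equations):
\begin{align*}
  U_t     &= u(c_t, m_t), \qquad u_c > 0,\; u_m > 0,\\
  k_{t+1} &= (1 - \delta)\, k_t + f(k_t) - c_t,
\end{align*}
% with consumption c_t, real money balances m_t, capital k_t and
% depreciation rate \delta; because m_t enters utility directly, the
% saving decision is endogenous, unlike in the Solow model.
```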

  18. Mathematical Modelling Approach in Mathematics Education

    Science.gov (United States)

    Arseven, Ayla

    2015-01-01

    The topic of models and modeling has come to be important for science and mathematics education in recent years. The topic of "Modeling" is especially important for examinations such as PISA, which is conducted at an international level and measures a student's success in mathematics. Mathematical modeling can be defined as using…

  19. A Multivariate Approach to Functional Neuro Modeling

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.

    1998-01-01

    Briefly summarized, the major topics discussed in the thesis include: an introduction of the representation of functional datasets by pairs of neuronal activity patterns, which provides the basis for a generalization-theoretical framework relating model performance to model complexity and dataset size; the application of linear and more flexible, nonlinear microscopic regression models to a real-world dataset, where the dependency of model performance, as quantified by generalization error, on model flexibility and training set size is demonstrated, leading to the important realization that no uniformly optimal model exists; and model visualization and interpretation techniques, where the simplicity of this task for linear models contrasts with the difficulties involved when dealing with nonlinear models, and a visualization technique for nonlinear models is proposed. A single observation emerges from the thesis...

  20. Rival approaches to mathematical modelling in immunology

    Science.gov (United States)

    Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.

    2007-08-01

    In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.

  1. A hybrid agent-based approach for modeling microbiological systems.

    Science.gov (United States)

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the Multi-Agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
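
    A minimal sketch of the hybrid representation: cells as discrete agents, molecules as a continuous field updated by finite differences. Grid size, diffusion constant, drift gain and noise level are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
field = np.zeros((100, 100))               # chemoattractant concentration
cells = rng.uniform(0, 99, size=(50, 2))   # agent positions (rows, cols)

def diffuse(f, d=0.2):
    """One explicit diffusion step on a periodic grid."""
    return f + d * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                    + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

for _ in range(200):
    field[50, 50] += 1.0                   # replenish the point source
    field = diffuse(field)
    g0, g1 = np.gradient(field)            # gradient along each grid axis
    i, j = cells[:, 0].astype(int), cells[:, 1].astype(int)
    drift = np.stack([g0[i, j], g1[i, j]], axis=1)
    cells = np.clip(cells + 50.0 * drift + rng.normal(0, 0.1, cells.shape),
                    0, 99)
print(cells[:5])                           # agents drift toward the source
```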

  2. Numerical modelling approach for mine backfill

    Indian Academy of Sciences (India)

    Muhammad Zaka Emad

    2017-07-24

    This paper discusses a numerical modelling strategy for modelling mine backfill material.

  3. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows researchers to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: modeling establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  4. OILMAP: A global approach to spill modeling

    International Nuclear Information System (INIS)

    Spaulding, M.L.; Howlett, E.; Anderson, E.; Jayko, K.

    1992-01-01

    OILMAP is an oil spill model system suitable for use in both rapid response mode and long-range contingency planning. It was developed for a personal computer and employs full-color graphics to enter data, set up spill scenarios, and view model predictions. The major components of OILMAP include environmental data entry and viewing capabilities, the oil spill models, and model prediction display capabilities. Graphic routines are provided for entering wind data, currents, and any type of geographically referenced data. Several modes of the spill model are available. The surface trajectory mode is intended for quick spill response. The weathering model includes the spreading, evaporation, entrainment, emulsification, and shoreline interaction of oil. The stochastic and receptor models simulate a large number of trajectories from a single site for generating probability statistics. Each model and the algorithms they use are described. Several additional capabilities are planned for OILMAP, including simulation of tactical spill response and subsurface oil transport. 8 refs

  5. Relaxed memory models: an operational approach

    OpenAIRE

    Boudol , Gérard; Petri , Gustavo

    2009-01-01

    Memory models define an interface between programs written in some language and their implementation, determining which behaviour the memory (and thus a program) is allowed to have in a given model. A minimal guarantee memory models should provide to the programmer is that well-synchronized, that is, data-race free code has a standard semantics. Traditionally, memory models are defined axiomatically, setting constraints on the order in which memory operations are allow...

  6. Modeling composting kinetics: A review of approaches

    NARCIS (Netherlands)

    Hamelers, H.V.M.

    2004-01-01

    Composting kinetics modeling is necessary to design and operate composting facilities that comply with strict market demands and tight environmental legislation. Current composting kinetics modeling can be characterized as inductive, i.e. the data are the starting point of the modeling process and

  7. Conformally invariant models: A new approach

    International Nuclear Information System (INIS)

    Fradkin, E.S.; Palchik, M.Ya.; Zaikin, V.N.

    1996-02-01

    A pair of mathematical models of quantum field theory in D dimensions is analyzed, particularly, a model of a charged scalar field defined by two generations of secondary fields in the space of even dimensions D>=4 and a model of a neutral scalar field defined by two generations of secondary fields in two-dimensional space. 6 refs

  8. A systemic approach to modelling of radiobiological effects

    International Nuclear Information System (INIS)

    Obaturov, G.M.

    1988-01-01

    Basic principles of a systemic approach to modelling radiobiological effects at different levels of cell organization have been formulated. A methodology is proposed for theoretical modelling of the effects at these levels.

  9. Serpentinization reaction pathways: implications for modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Janecky, D.R.

    1986-01-01

    Experimental seawater-peridotite reaction pathways to form serpentinites at 300 °C, 500 bars, can be accurately modeled using the EQ3/6 codes in conjunction with thermodynamic and kinetic data from the literature and unpublished compilations. These models provide both confirmation of experimental interpretations and more detailed insight into hydrothermal reaction processes within the oceanic crust. The accuracy of these models depends on careful evaluation of the aqueous speciation model, use of mineral compositions that closely reproduce compositions in the experiments, and definition of realistic reactive components in terms of composition, thermodynamic data, and reaction rates.

  10. Consumer preference models: fuzzy theory approach

    Science.gov (United States)

    Turksen, I. B.; Wilson, I. A.

    1993-12-01

    Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
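
    One common ingredient of such fuzzy preference models is a linguistic term represented by a triangular membership function; the breakpoints below are hypothetical, not taken from the article:

```python
def triangular(x, a, b, c):
    """Triangular membership function for a linguistic term such as
    'moderately preferred'; breakpoints a <= b <= c are assumptions."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which an attribute rated 6.5 on a 0-10 scale is 'preferred'
print(triangular(6.5, a=4.0, b=7.0, c=10.0))
```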

  11. A visual approach for modeling spatiotemporal relations

    NARCIS (Netherlands)

    R.L. Guimarães (Rodrigo); C.S.S. Neto; L.F.G. Soares

    2008-01-01

    Textual programming languages have proven to be difficult to learn and to use effectively for many people. For this sake, visual tools can be useful to abstract the complexity of such textual languages, minimizing the specification efforts. In this paper we present a visual approach for

  12. PRODUCT TRIAL PROCESSING (PTP): A MODEL APPROACH ...

    African Journals Online (AJOL)

    Admin

    This study is a theoretical approach to consumer's processing of product trail, and equally explored ... consumer's first usage experience with a company's brand or product that is most important in determining ... product, what it is really marketing is the expected ..... confidence, thus there is a positive relationship between ...

  13. Nonlinear Modeling of the PEMFC Based On NNARX Approach

    OpenAIRE

    Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo

    2015-01-01

    Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system. Traditional linear modeling approaches are hard pressed to estimate the structure of the PEMFC system correctly. For this reason, this paper presents nonlinear modeling of the PEMFC using the Neural Network Auto-regressive model with eXogenous inputs (NNARX) approach. The multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accurac...
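
    A minimal NNARX-style sketch using an MLP regressor over lagged inputs and outputs; the toy data generator stands in for PEMFC measurements and the lag orders are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def narx_features(u, y, nu=2, ny=2):
    """Build NNARX regressor vectors [y(t-1..t-ny), u(t-1..t-nu)] -> y(t)."""
    rows, targets = [], []
    for t in range(max(nu, ny), len(y)):
        rows.append(np.r_[y[t - ny:t][::-1], u[t - nu:t][::-1]])
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Toy second-order surrogate standing in for PEMFC current/voltage data.
rng = np.random.default_rng(0)
u = rng.uniform(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.1 * y[t - 2] + 0.4 * u[t - 1] + 0.01 * rng.normal()

X, target = narx_features(u, y)
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
mlp.fit(X, target)
print(mlp.score(X, target))   # in-sample fit of the one-step predictor
```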

  14. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA [DRAFT], paper DETC2015-46982. The aim is to obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account ...

  15. Comparison of two novel approaches to model fibre reinforced concrete

    NARCIS (Netherlands)

    Radtke, F.K.F.; Simone, A.; Sluys, L.J.

    2009-01-01

    We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity

  16. Modeling thrombin generation: plasma composition based approach.

    Science.gov (United States)

    Brummel-Ziedins, Kathleen E; Everse, Stephen J; Mann, Kenneth G; Orfeo, Thomas

    2014-01-01

    Thrombin has multiple functions in blood coagulation and its regulation is central to maintaining the balance between hemorrhage and thrombosis. Empirical and computational methods that capture thrombin generation can provide advancements to current clinical screening of the hemostatic balance at the level of the individual. In any individual, procoagulant and anticoagulant factor levels together act to generate a unique coagulation phenotype (net balance) that is reflective of the sum of its developmental, environmental, genetic, nutritional and pharmacological influences. Defining such thrombin phenotypes may provide a means to track disease progression pre-crisis. In this review we briefly describe thrombin function, methods for assessing thrombin dynamics as a phenotypic marker, computationally derived thrombin phenotypes versus determined clinical phenotypes, the boundaries of normal range thrombin generation using plasma composition based approaches and the feasibility of these approaches for predicting risk.

  17. A simple approach to modeling ductile failure.

    Energy Technology Data Exchange (ETDEWEB)

    Wellman, Gerald William

    2012-06-01

    Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.

  18. A new approach for modeling composite materials

    Science.gov (United States)

    Alcaraz de la Osa, R.; Moreno, F.; Saiz, J. M.

    2013-03-01

    The increasing use of composite materials is due to their ability to tailor materials for special purposes, with applications evolving day by day. This is why predicting the properties of these systems from their constituents, or phases, has become so important. However, assigning macroscopical optical properties for these materials from the bulk properties of their constituents is not a straightforward task. In this research, we present a spectral analysis of three-dimensional random composite typical nanostructures using an Extension of the Discrete Dipole Approximation (E-DDA code), comparing different approaches and emphasizing the influences of optical properties of constituents and their concentration. In particular, we propose a new approach that preserves the individual nature of the constituents, introducing at the same time a variation in the optical properties of each discrete element that is driven by the surrounding medium. The results obtained with this new approach compare more favorably with experiment than previous ones. We have also applied it to a non-conventional material composed of a metamaterial embedded in a dielectric matrix. Our version of the Discrete Dipole Approximation code, the EDDA code, has been formulated specifically to tackle this kind of problem, including materials with either magnetic or tensor properties.

  19. An Integrated Approach to Modeling Evacuation Behavior

    Science.gov (United States)

    2011-02-01

    A spate of recent hurricanes and other natural disasters have drawn a lot of attention to the evacuation decision of individuals. Here we focus on evacuation models that incorporate two economic phenomena that seem to be increasingly important in exp...

  20. Infectious disease modeling a hybrid system approach

    CERN Document Server

    Liu, Xinzhi

    2017-01-01

    This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.

  1. On Combining Language Models: Oracle Approach

    National Research Council Canada - National Science Library

    Hacioglu, Kadri; Ward, Wayne

    2001-01-01

    In this paper, we address the problem of combining several language models (LMs). We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle...
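
    The two interpolation schemes mentioned in the abstract have simple closed forms; the sketch below shows both, with illustrative probabilities (note the log-linear combination needs renormalization over the vocabulary to be a proper distribution):

```python
import math

def linear_interp(probs, lambdas):
    """Linear interpolation: P(w|h) = sum_i lambda_i * P_i(w|h),
    with the lambdas summing to one."""
    return sum(l * p for l, p in zip(lambdas, probs))

def log_linear_interp(probs, lambdas):
    """Log-linear interpolation; the result is an unnormalized score and
    must be renormalized over the vocabulary to form a distribution."""
    return math.exp(sum(l * math.log(p) for l, p in zip(lambdas, probs)))

# Two LMs assign 0.02 and 0.05 to the same word in the same history.
print(linear_interp([0.02, 0.05], [0.3, 0.7]))      # 0.041
print(log_linear_interp([0.02, 0.05], [0.3, 0.7]))  # ~0.038
```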

  2. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  3. Approaches to modelling hydrology and ecosystem interactions

    Science.gov (United States)

    Silberstein, Richard P.

    2014-05-01

    As the pressures of industry, agriculture and mining on groundwater resources increase, there is a burgeoning unmet need to be able to capture these multiple, direct and indirect stresses in a formal framework that will enable better assessment of impact scenarios. While there are many catchment hydrological models and there are some models that represent ecological states and change (e.g. FLAMES, Liedloff and Cook, 2007), these have not been linked in any deterministic or substantive way. Without such coupled eco-hydrological models, quantitative assessments of impacts from water use intensification on water-dependent ecosystems under changing climate are difficult, if not impossible. The concept would include facility for direct and indirect water-related stresses that may develop around mining and well operations, climate stresses, such as rainfall and temperature, biological stresses, such as diseases and invasive species, and competition such as encroachment from other competing land uses. Indirect water impacts could be, for example, a change in groundwater conditions that has an impact on stream flow regime, and hence aquatic ecosystems. This paper reviews previous work examining models combining ecology and hydrology with a view to developing a conceptual framework linking a biophysically defensible model that combines ecosystem function with hydrology. The objective is to develop a model capable of representing the cumulative impact of multiple stresses on water resources and associated ecosystem function.

  4. Constructing a justice model based on Sen's capability approach

    OpenAIRE

    Yüksel, Sevgi

    2008-01-01

    The thesis provides a possible justice model based on Sen's capability approach. For this goal, we first analyze the general structure of a theory of justice, identifying the main variables and issues. Furthermore, based on Sen (2006) and Kolm (1998), we look at 'transcendental' and 'comparative' approaches to justice and concentrate on the sufficiency condition for the comparative approach. Then, taking Rawls' theory of justice as a starting point, we present how Sen's capability approach em...

  5. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  6. An ontology-based approach for modelling architectural styles

    OpenAIRE

    Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm

    2007-01-01

    The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument. Architect...

  7. Mathematical modelling a case studies approach

    CERN Document Server

    Illner, Reinhard; McCollum, Samantha; Roode, Thea van

    2004-01-01

    Mathematical modelling is a subject without boundaries. It is the means by which mathematics becomes useful to virtually any subject. Moreover, modelling has been and continues to be a driving force for the development of mathematics itself. This book explains the process of modelling real situations to obtain mathematical problems that can be analyzed, thus solving the original problem. The presentation is in the form of case studies, which are developed much as they would be in true applications. In many cases, an initial model is created, then modified along the way. Some cases are familiar, such as the evaluation of an annuity. Others are unique, such as the fascinating situation in which an engineer, armed only with a slide rule, had 24 hours to compute whether a valve would hold when a temporary rock plug was removed from a water tunnel. Each chapter ends with a set of exercises and some suggestions for class projects. Some projects are extensive, as with the explorations of the predator-prey model; oth...

  8. The simplified models approach to constraining supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Genessis [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie (KIT), Wolfgang-Gaede-Str. 1, 76131 Karlsruhe (Germany); Kulkarni, Suchita [Laboratoire de Physique Subatomique et de Cosmologie, Universite Grenoble Alpes, CNRS IN2P3, 53 Avenue des Martyrs, 38026 Grenoble (France)

    2015-07-01

    The interpretation of the experimental results at the LHC is model dependent, which implies that the searches provide limited constraints on scenarios such as supersymmetry (SUSY). The Simplified Models Spectra (SMS) framework used by the ATLAS and CMS collaborations is useful to overcome this limitation. The SMS framework involves a small number of parameters (all the properties are reduced to the mass spectrum, the production cross section and the branching ratio) and hence is more generic than presenting results in terms of soft parameters. In our work, the SMS framework was used to test the Natural SUSY (NSUSY) scenario. To accomplish this task, two automated tools (SModelS and Fastlim) were used to decompose the NSUSY parameter space in terms of simplified models and confront the theoretical predictions against the experimental results. The achievements of both tools, as well as their strengths and limitations, are presented for the NSUSY scenario.

  9. Lightweight approach to model traceability in a CASE tool

    Science.gov (United States)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    A term "model-driven" is not at all a new buzzword within the ranks of system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers forward to research and develop new and more effective ways to system development. With the increasing complexity, model traceability, and model management as a whole, becomes indispensable activities of model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.

  10. New approaches for modeling type Ia supernovae

    International Nuclear Information System (INIS)

    Zingale, Michael; Almgren, Ann S.; Bell, John B.; Day, Marcus S.; Rendleman, Charles A.; Woosley, Stan

    2007-01-01

    Type Ia supernovae (SNe Ia) are the largest thermonuclear explosions in the Universe. Their light output can be seen across great distances and has led to the discovery that the expansion rate of the Universe is accelerating. Despite the significance of SNe Ia, there are still a large number of uncertainties in current theoretical models. Computational modeling offers the promise to help answer the outstanding questions. However, even with today's supercomputers, such calculations are extremely challenging because of the wide range of length and timescales. In this paper, we discuss several new algorithms for simulations of SNe Ia and demonstrate some of their successes

  11. Chancroid transmission dynamics: a mathematical modeling approach.

    Science.gov (United States)

    Bhunu, C P; Mushayabasa, S

    2011-12-01

    Mathematical models have long been used to better understand disease transmission dynamics and how to effectively control them. Here, a chancroid infection model is presented and analyzed. The disease-free equilibrium is shown to be globally asymptotically stable when the reproduction number is less than unity. High levels of treatment are shown to reduce the reproduction number suggesting that treatment has the potential to control chancroid infections in any given community. This result is also supported by numerical simulations which show a decline in chancroid cases whenever the reproduction number is less than unity.
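
    The threshold behaviour referenced in the abstract can be written generically as follows; the expression for R_0 is a standard SIS-type form with an added treatment rate, not the paper's specific derivation:

```latex
% Generic SIS-type threshold with treatment (illustrative form only; the
% paper derives its own R_0 for the chancroid model):
\[
  R_0 \;=\; \frac{\beta}{\gamma + \mu + \tau},
\]
% where \beta is the transmission rate, \gamma natural recovery, \mu exit
% from the sexually active population and \tau the treatment rate. The
% disease-free equilibrium is globally asymptotically stable for R_0 < 1,
% so raising \tau can drive R_0 below unity, as the abstract reports.
```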

  12. A kinetic approach to magnetospheric modeling

    International Nuclear Information System (INIS)

    Whipple, E.C. Jr.

    1979-01-01

    The earth's magnetosphere is caused by the interaction between the flowing solar wind and the earth's magnetic dipole, with the distorted magnetic field in the outer parts of the magnetosphere due to the current systems resulting from this interaction. It is surprising that even the conceptually simple problem of the collisionless interaction of a flowing plasma with a dipole magnetic field has not been solved. A kinetic approach is essential if one is to take into account the dispersion of particles with different energies and pitch angles and the fact that particles on different trajectories have different histories and may come from different sources. Solving the interaction problem involves finding the various types of possible trajectories, populating them with particles appropriately, and then treating the electric and magnetic fields self-consistently with the resulting particle densities and currents. This approach is illustrated by formulating a procedure for solving the collisionless interaction problem on open field lines in the case of a slowly flowing magnetized plasma interacting with a magnetic dipole

  13. A kinetic approach to magnetospheric modeling

    Science.gov (United States)

    Whipple, E. C., Jr.

    1979-01-01

    The earth's magnetosphere is caused by the interaction between the flowing solar wind and the earth's magnetic dipole, with the distorted magnetic field in the outer parts of the magnetosphere due to the current systems resulting from this interaction. It is surprising that even the conceptually simple problem of the collisionless interaction of a flowing plasma with a dipole magnetic field has not been solved. A kinetic approach is essential if one is to take into account the dispersion of particles with different energies and pitch angles and the fact that particles on different trajectories have different histories and may come from different sources. Solving the interaction problem involves finding the various types of possible trajectories, populating them with particles appropriately, and then treating the electric and magnetic fields self-consistently with the resulting particle densities and currents. This approach is illustrated by formulating a procedure for solving the collisionless interaction problem on open field lines in the case of a slowly flowing magnetized plasma interacting with a magnetic dipole.

  14. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. Common to all of these is the assumption that a tree can be regarded as a fractal object, which is a collection of self-similar objects and combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between the mathematical models of crown growth and light propagation through the canopy. The computer approach gives the possibility to visualize crown development and to calibrate the model on experimental data. In the paper, different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  15. Prototypic implementations of the building block for component based open Hypermedia systems (BB/CB-OHSs)

    DEFF Research Database (Denmark)

    Mohamed, Omer I. Eldai

    2005-01-01

    In this paper we describe the prototypic implementations of the BuildingBlock (BB/CB-OHSs) that was proposed to address some of the Component-based Open Hypermedia Systems (CB-OHSs) issues, including distribution and interoperability [4, 11, 12]. Four service implementations are described below. The ...

  16. A Service Component-based Accounting and Charging Architecture to Support Interim Mechanisms across Multiple Domains

    NARCIS (Netherlands)

    Le, V.M.; van Beijnum, Bernhard J.F.; Huitema, G.B.

    Today, telematics services are often compositions of different chargeable service components offered by different service providers. To enhance component-based accounting and charging, the service composition information is used to match with the corresponding charging structure of a service

  17. A novel approach to modeling atmospheric convection

    Science.gov (United States)

    Goodman, A.

    2016-12-01

    The inadequate representation of clouds continues to be a large source of uncertainty in the projections from global climate models (GCMs). With continuous advances in computational power, however, the ability for GCMs to explicitly resolve cumulus convection will soon be realized. For this purpose, Jung and Arakawa (2008) proposed the Vector Vorticity Model (VVM), in which vorticity is the predicted variable instead of momentum. This has the advantage of eliminating the pressure gradient force within the framework of an anelastic system. However, the VVM was designed for use on a planar quadrilateral grid, making it unsuitable for implementation in global models discretized on the sphere. Here we have proposed a modification to the VVM where instead the curl of the horizontal vorticity is the primary predicted variable. This allows us to maintain the benefits of the original VVM while working within the constraints of a non-quadrilateral mesh. We found that our proposed model produced results from a warm bubble simulation that were consistent with the VVM. Further improvements that can be made to the VVM are also discussed.

  18. INDIVIDUAL BASED MODELLING APPROACH TO THERMAL ...

    Science.gov (United States)

    Diadromous fish populations in the Pacific Northwest face challenges along their migratory routes from declining habitat quality, harvest, and barriers to longitudinal connectivity. Changes in river temperature regimes are producing an additional challenge for upstream migrating adult salmon and steelhead, species that are sensitive to absolute and cumulative thermal exposure. Adult salmon populations have been shown to utilize cold water patches along migration routes when mainstem river temperatures exceed thermal optimums. We are employing an individual based model (IBM) to explore the costs and benefits of spatially-distributed cold water refugia for adult migrating salmon. Our model, developed in the HexSim platform, is built around a mechanistic behavioral decision tree that drives individual interactions with their spatially explicit simulated environment. Population-scale responses to dynamic thermal regimes, coupled with other stressors such as disease and harvest, become emergent properties of the spatial IBM. Other model outputs include arrival times, species-specific survival rates, body energetic content, and reproductive fitness levels. Here, we discuss the challenges associated with parameterizing an individual based model of salmon and steelhead in a section of the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec

  19. A new approach to model mixed hydrates

    Czech Academy of Sciences Publication Activity Database

    Hielscher, S.; Vinš, Václav; Jäger, A.; Hrubý, Jan; Breitkopf, C.; Span, R.

    2018-01-01

    Roč. 459, March (2018), s. 170-185 ISSN 0378-3812 R&D Projects: GA ČR(CZ) GA17-08218S Institutional support: RVO:61388998 Keywords : gas hydrate * mixture * modeling Subject RIV: BJ - Thermodynamics Impact factor: 2.473, year: 2016 https://www.sciencedirect.com/science/article/pii/S0378381217304983

  20. Energy and development : A modelling approach

    NARCIS (Netherlands)

    van Ruijven, B.J.|info:eu-repo/dai/nl/304834521

    2008-01-01

    Rapid economic growth of developing countries like India and China implies that these countries become important actors in the global energy system. Examples of this impact are the present day oil shortages and rapidly increasing emissions of greenhouse gases. Global energy models are used to explore

  1. Modeling Approaches for Describing Microbial Population Heterogeneity

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita

    ... environmental conditions. Three cases are presented and discussed in this thesis. Common to all is the use of S. cerevisiae as the model organism, and the use of cell size and cell cycle position as single-cell descriptors. The first case focuses on the experimental and mathematical description of a yeast ...

  2. Energy and Development. A Modelling Approach

    International Nuclear Information System (INIS)

    Van Ruijven, B.J.

    2008-01-01

    Rapid economic growth of developing countries like India and China implies that these countries become important actors in the global energy system. Examples of this impact are the present day oil shortages and rapidly increasing emissions of greenhouse gases. Global energy models are used to explore possible future developments of the global energy system and identify policies to prevent potential problems. Such estimations of future energy use in developing countries are very uncertain. Crucial factors in the future energy use of these regions are electrification, urbanisation and income distribution, issues that are generally not included in present day global energy models. Model simulations in this thesis show that current insights into developments in low-income regions lead to a wide range of expected energy use in 2030 for the residential and transport sectors. This is mainly caused by the many different model calibration options that result from the limited data availability for model development and calibration. We developed a method to identify the impact of model calibration uncertainty on future projections. We developed a new model for residential energy use in India, in collaboration with the Indian Institute of Science. Experiments with this model show that the impact of electrification and income distribution is less univocal than often assumed. The use of fuelwood, with related health risks, can decrease rapidly if the income of poor groups increases. However, there is a trade-off in terms of CO2 emissions because these groups gain access to electricity and the ownership of appliances increases. Another issue is the potential role of new technologies in developing countries: will they use the opportunities of leapfrogging? We explored the potential role of hydrogen, an energy carrier that might play a central role in a sustainable energy system. We found that hydrogen only plays a role before 2050 under very optimistic assumptions. Regional energy

  3. Integration models: multicultural and liberal approaches confronted

    Science.gov (United States)

    Janicki, Wojciech

    2012-01-01

    European societies have been shaped by their Christian past, upsurge of international migration, democratic rule and liberal tradition rooted in religious tolerance. Boosting globalization processes impose new challenges on European societies, striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity politics-driven group rights seems to be the most common approach, resulting in creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory to both individual rights growing out from liberal tradition, and to reinforced concept of integration of immigrants into host societies. This paper discusses the identity politics upturn in the context of both individual rights and the integration of European societies.

  4. Modelling thermal plume impacts - Kalpakkam approach

    International Nuclear Information System (INIS)

    Rao, T.S.; Anup Kumar, B.; Narasimhan, S.V.

    2002-01-01

    A good understanding of temperature patterns in the receiving waters is essential to know the heat dissipation from thermal plumes originating from coastal power plants. The seasonal temperature profiles of the Kalpakkam coast near the Madras Atomic Power Station (MAPS) thermal outfall site are determined and analysed. It is observed that the seasonal current reversal in the nearshore zone is one of the major mechanisms for the transport of effluents away from the point of mixing. To further refine our understanding of the mixing and dilution processes, it is necessary to numerically simulate the coastal ocean processes by parameterising the key factors concerned. In this paper, we outline the experimental approach to achieve this objective. (author)

  5. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  6. Nuclear security assessment with Markov model approach

    International Nuclear Information System (INIS)

    Suzuki, Mitsutoshi; Terao, Norichika

    2013-01-01

    Nuclear security risk assessment with a Markov model based on random events is performed to explore an evaluation methodology for physical protection in nuclear facilities. Because security incidents are initiated by malicious and intentional acts, expert judgment and Bayesian updating are used to estimate scenario and initiation likelihood, and it is assumed that the Markov model derived from a stochastic process can be applied to the incident sequence. Both an unauthorized intrusion as a Design Based Threat (DBT) and a stand-off attack as beyond-DBT are assumed for hypothetical facilities, and the performance of physical protection and the mitigation and minimization of consequences are investigated to develop the assessment methodology in a semi-quantitative manner. It is shown that cooperation between the facility operator and the security authority is important to respond to beyond-DBT incidents. (author)
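
    A minimal sketch of an incident sequence as a discrete-time Markov chain; the four states and all transition probabilities are invented for illustration and are not the paper's values:

```python
import numpy as np

# States: 0 = normal operation, 1 = intrusion in progress,
#         2 = incident interrupted, 3 = consequence (absorbing).
P = np.array([
    [0.990, 0.010, 0.000, 0.000],
    [0.000, 0.200, 0.700, 0.100],   # detection mostly interrupts the attack
    [1.000, 0.000, 0.000, 0.000],   # interrupted incidents reset to normal
    [0.000, 0.000, 0.000, 1.000],   # consequence is absorbing
])

state = np.array([1.0, 0.0, 0.0, 0.0])  # start in normal operation
for _ in range(1000):
    state = state @ P
print("P(consequence within 1000 steps):", state[3])
```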

  7. An Approach for Modeling Supplier Resilience

    Science.gov (United States)

    2016-04-30

    ... be availability, or the extent to which the products produced by the supply chain are available for use (measured as a ratio of uptime to total time of use of the product). Available systems are important in many industries, particularly in the Department of Defense, where weapons systems ...

  8. Tumour resistance to cisplatin: a modelling approach

    International Nuclear Information System (INIS)

    Marcu, L; Bezak, E; Olver, I; Doorn, T van

    2005-01-01

    Although chemotherapy has revolutionized the treatment of haematological tumours, in many common solid tumours the success has been limited. Some of the reasons for the limitations are: the timing of drug delivery, resistance to the drug, repopulation between cycles of chemotherapy and the lack of complete understanding of the pharmacokinetics and pharmacodynamics of a specific agent. Cisplatin is among the most effective cytotoxic agents used in head and neck cancer treatments. When modelling cisplatin as a single agent, the properties of cisplatin only have to be taken into account, reducing the number of assumptions that are considered in the generalized chemotherapy models. The aim of the present paper is to model the biological effect of cisplatin and to simulate the consequence of cisplatin resistance on tumour control. The 'treated' tumour is a squamous cell carcinoma of the head and neck, previously grown by computer-based Monte Carlo techniques. The model maintained the biological constitution of a tumour through the generation of stem cells, proliferating cells and non-proliferating cells. Cell kinetic parameters (mean cell cycle time, cell loss factor, thymidine labelling index) were also consistent with the literature. A sensitivity study on the contribution of various mechanisms leading to drug resistance is undertaken. To quantify the extent of drug resistance, the cisplatin resistance factor (CRF) is defined as the ratio between the number of surviving cells of the resistant population and the number of surviving cells of the sensitive population, determined after the same treatment time. It is shown that there is a supra-linear dependence of CRF on the percentage of cisplatin-DNA adducts formed, and a sigmoid-like dependence between CRF and the percentage of cells killed in resistant tumours. Drug resistance is shown to be a cumulative process which eventually can overcome tumour regression leading to treatment failure
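
    The resistance factor defined in the abstract, written out (both counts are taken after the same treatment time):

```latex
\[
  \mathrm{CRF}
  = \frac{N^{\mathrm{resistant}}_{\mathrm{surviving}}}
         {N^{\mathrm{sensitive}}_{\mathrm{surviving}}}
\]
```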

  9. Tumour resistance to cisplatin: a modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    Marcu, L [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia); Bezak, E [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia); Olver, I [Faculty of Medicine, University of Adelaide, North Terrace, SA 5000 (Australia); Doorn, T van [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia)

    2005-01-07

    Although chemotherapy has revolutionized the treatment of haematological tumours, in many common solid tumours the success has been limited. Some of the reasons for the limitations are: the timing of drug delivery, resistance to the drug, repopulation between cycles of chemotherapy and the lack of complete understanding of the pharmacokinetics and pharmacodynamics of a specific agent. Cisplatin is among the most effective cytotoxic agents used in head and neck cancer treatments. When modelling cisplatin as a single agent, the properties of cisplatin only have to be taken into account, reducing the number of assumptions that are considered in the generalized chemotherapy models. The aim of the present paper is to model the biological effect of cisplatin and to simulate the consequence of cisplatin resistance on tumour control. The 'treated' tumour is a squamous cell carcinoma of the head and neck, previously grown by computer-based Monte Carlo techniques. The model maintained the biological constitution of a tumour through the generation of stem cells, proliferating cells and non-proliferating cells. Cell kinetic parameters (mean cell cycle time, cell loss factor, thymidine labelling index) were also consistent with the literature. A sensitivity study on the contribution of various mechanisms leading to drug resistance is undertaken. To quantify the extent of drug resistance, the cisplatin resistance factor (CRF) is defined as the ratio between the number of surviving cells of the resistant population and the number of surviving cells of the sensitive population, determined after the same treatment time. It is shown that there is a supra-linear dependence of CRF on the percentage of cisplatin-DNA adducts formed, and a sigmoid-like dependence between CRF and the percentage of cells killed in resistant tumours. Drug resistance is shown to be a cumulative process which eventually can overcome tumour regression leading to treatment failure.

  10. ISM Approach to Model Offshore Outsourcing Risks

    Directory of Open Access Journals (Sweden)

    Sunand Kumar

    2014-07-01

    Full Text Available In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But, as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks which affect the performance of offshore outsourcing. To this effect, the authors have identified various risks through an extant review of the literature. From this information, an integrated model of the risks affecting offshore outsourcing is developed using interpretive structural modelling (ISM), and the structural relationships between these risks are modeled. Further, MICMAC analysis is done to analyze the driving power and dependence of the risks, which shall be helpful to managers to identify and classify important criteria and to reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.
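
    MICMAC classifies each risk by its driving power (row sum of the final reachability matrix) and its dependence (column sum). A sketch on a hypothetical reachability matrix; the risk labels follow the abstract, but the matrix entries are invented:

```python
import numpy as np

# Hypothetical final reachability matrix for five outsourcing risks
# (entry [i, j] = 1 if risk i influences risk j, transitive links included).
risks = ["political", "cultural", "compliance", "opportunistic", "structural"]
R = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 1, 1, 1],
])

driving = R.sum(axis=1)     # driving power: how many risks each risk reaches
dependence = R.sum(axis=0)  # dependence: how many risks reach each risk
half = len(risks) / 2       # simple quadrant threshold for the MICMAC plot

for name, d, dep in zip(risks, driving, dependence):
    if d > half and dep <= half:
        kind = "driver"      # high driving power, low dependence
    elif d > half:
        kind = "linkage"     # high driving power and high dependence
    elif dep > half:
        kind = "dependent"
    else:
        kind = "autonomous"
    print(f"{name:13s} driving={d} dependence={dep} -> {kind}")
```

    On this invented matrix the political and cultural risks land in the driver quadrant, mirroring the abstract's finding that they act as strong drivers.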

  11. Remote sensing approach to structural modelling

    International Nuclear Information System (INIS)

    El Ghawaby, M.A.

    1989-01-01

    Remote sensing techniques are quite dependable tools in investigating geologic problems, especially those related to structural aspects. Landsat imagery provides discrimination between rock units, detection of large-scale structures such as folds and faults, as well as small-scale fabric elements such as foliation and banding. In order to fulfill the aim of geologic application of remote sensing, some essential survey maps need to be prepared from the images prior to the structural interpretation: land-use, land-form, drainage pattern, lithological unit and structural lineament maps. Afterwards, field verification should lead to the interpretation of a comprehensive structural model of the study area to apply to the target problem. To deduce such a model, there are two ways of analysis the interpreter may go through: the direct and the indirect methods. The direct method is needed in cases where the resources or targets are controlled by an obvious or exposed structural element or pattern. The indirect way is necessary for areas where the target is governed by a complicated structural pattern. Some case histories of structural modelling methods applied successfully to exploration for radioactive minerals, iron deposits and groundwater aquifers in Egypt are presented. The progress in imagery enhancement and the integration of remote sensing data with other geophysical and geochemical data allow a geologic interpretation to be carried out which is better than that achieved with either of the individual data sets. 9 refs

  12. A moving approach for the Vector Hysteron Model

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)

    2016-04-01

    A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. By using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, allowing a real improvement with respect to the previous approach.
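
    The hysteron underlying Preisach-type models such as the VHM is a rectangular relay operator, and the model output is a weighted superposition of many relays. A minimal scalar sketch with a toy Gaussian weight distribution (static, i.e. without the paper's moving-distribution or accommodation machinery):

```python
import numpy as np

class Hysteron:
    """Rectangular relay with switching thresholds alpha > beta."""
    def __init__(self, alpha, beta, state=-1):
        self.alpha, self.beta, self.state = alpha, beta, state

    def update(self, h):
        if h >= self.alpha:
            self.state = +1
        elif h <= self.beta:
            self.state = -1
        return self.state          # unchanged in between: this is the memory

# Toy Preisach plane: relays on a grid of (alpha, beta) pairs, Gaussian weights.
grid = np.linspace(-1.0, 1.0, 40)
hysterons, weights = [], []
for a in grid:
    for b in grid:
        if a > b:
            hysterons.append(Hysteron(a, b))
            weights.append(np.exp(-(a**2 + b**2)))
weights = np.array(weights) / np.sum(weights)

def magnetization(h):
    return sum(w * hy.update(h) for w, hy in zip(weights, hysterons))

# Drive the field up and back down: M(h) traces different branches (hysteresis).
for h in np.concatenate([np.linspace(0, 1, 5), np.linspace(1, -1, 9)]):
    print(f"h = {h:+.2f}  M = {magnetization(h):+.3f}")
```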

  13. Agribusiness model approach to territorial food development

    Directory of Open Access Journals (Sweden)

    Murcia Hector Horacio

    2011-04-01

    Full Text Available Several research efforts have been coordinated by the academic program of Agricultural Business Management at the University De La Salle (Bogota D.C.) towards the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered as a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and productivity of agricultural items. It takes into account the guidelines of the United Nations "Millennium Development Goals" and considers the concept of sustainable food and agriculture development, including food security and nutrition in an integrated interdisciplinary context, with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the "new rurality".

  14. Engineering approach to modeling of piled systems

    International Nuclear Information System (INIS)

    Coombs, R.F.; Silva, M.A.G. da

    1980-01-01

    Available methods of analysis of piled systems subjected to dynamic excitation invade areas of mathematics usually beyond the reach of a practising engineer. A simple technique that avoids this conflict is proposed, at least for preliminary studies, and its application, compared with other methods, is shown to be satisfactory. A corrective factor for the parameters currently used to represent transmitting boundaries is derived for a finite strip that models an infinite layer. The influence of internal damping on the dynamic stiffness of the layer and on radiation damping is analysed. (Author)

  15. Jackiw-Pi model: A superfield approach

    Science.gov (United States)

    Gupta, Saurabh

    2014-12-01

    We derive the off-shell nilpotent and absolutely anticommuting Becchi-Rouet-Stora-Tyutin (BRST) as well as anti-BRST transformations s_{(a)b} corresponding to the Yang-Mills gauge transformations of the 3D Jackiw-Pi model by exploiting the "augmented" superfield formalism. We also show that the Curci-Ferrari restriction, which is a hallmark of any non-Abelian 1-form gauge theory, emerges naturally within this formalism and plays an instrumental role in providing the proof of the absolute anticommutativity of s_{(a)b}.

  16. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a

  17. Implicit moral evaluations: A multinomial modeling approach.

    Science.gov (United States)

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Modeling Saturn's Inner Plasmasphere: Cassini's Closest Approach

    Science.gov (United States)

    Moore, L.; Mendillo, M.

    2005-05-01

    Ion densities from the three-dimensional Saturn-Thermosphere-Ionosphere-Model (STIM, Moore et al., 2004) are extended above the plasma exobase using the formalism of Pierrard and Lemaire (1996, 1998), which evaluates the balance of gravitational, centrifugal and electric forces on the plasma. The parameter space of low-energy ionospheric contributions to Saturn's plasmasphere is explored by comparing results that span the observed extremes of plasma temperature, 650 K to 1700 K, and a range of velocity distributions, Lorentzian (or Kappa) to Maxwellian. Calculations are made for plasma densities along the path of the Cassini spacecraft's orbital insertion on 1 July 2004. These calculations neglect any ring or satellite sources of plasma, which are most likely minor contributors at 1.3 Saturn radii. Modeled densities will be compared with Cassini measurements as they become available. Moore, L.E., M. Mendillo, I.C.F. Mueller-Wodarg, and D.L. Murr, Icarus, 172, 503-520, 2004. Pierrard, V. and J. Lemaire, J. Geophys. Res., 101, 7923-7934, 1996. Pierrard, V. and J. Lemaire, J. Geophys. Res., 103, 4117, 1998.

  19. Keyring models: An approach to steerability

    Science.gov (United States)

    Miller, Carl A.; Colbeck, Roger; Shi, Yaoyun

    2018-02-01

    If a measurement is made on one half of a bipartite system, then, conditioned on the outcome, the other half has a new reduced state. If these reduced states defy classical explanation—that is, if shared randomness cannot produce these reduced states for all possible measurements—the bipartite state is said to be steerable. Determining which states are steerable is a challenging problem even for low dimensions. In the case of two-qubit systems, a criterion is known for T-states (that is, those with maximally mixed marginals) under projective measurements. In the current work, we introduce the concept of keyring models—a special class of local hidden state models. When the measurements made correspond to real projectors, these allow us to study steerability beyond T-states. Using keyring models, we completely solve the steering problem for real projective measurements when the state arises from mixing a pure two-qubit state with uniform noise. We also give a partial solution in the case when the uniform noise is replaced by independent depolarizing channels.

  20. Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches

    Science.gov (United States)

    Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem

    2014-01-01

    Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…

  1. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    NARCIS (Netherlands)

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both

  2. A modular approach to numerical human body modeling

    NARCIS (Netherlands)

    Forbes, P.A.; Griotto, G.; Rooij, L. van

    2007-01-01

    The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body

  3. A Bayesian approach for quantification of model uncertainty

    International Nuclear Information System (INIS)

    Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.

    2010-01-01

    In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into the prediction of a system response. A nonlinear vibration system is used to demonstrate the processes for implementing the adjustment factor approach. Finally, the methodology is applied to the engineering benefits of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of model prediction.
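
    One common additive form of the adjustment factor approach shifts the best model's prediction by the probability-weighted deviations of the competing models; a sketch with invented predictions and posterior model probabilities (the exact formulation in the paper may differ):

```python
import numpy as np

# Hypothetical predictions of one response from three competing models, with
# posterior model probabilities (e.g. from comparison against experiments).
y = np.array([102.0, 95.0, 110.0])   # model predictions (invented)
p = np.array([0.5, 0.3, 0.2])        # posterior model probabilities (invented)

y_best = y[np.argmax(p)]             # prediction of the most probable model

# Additive adjustment factor: probability-weighted deviations from the best
# model give the mean and spread of the model-uncertainty adjustment.
mean_adj = np.sum(p * (y - y_best))
var_adj = np.sum(p * (y - y_best) ** 2) - mean_adj**2

print(f"adjusted prediction = {y_best + mean_adj:.2f} +/- {np.sqrt(var_adj):.2f}")
```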

  4. A Networks Approach to Modeling Enzymatic Reactions.

    Science.gov (United States)

    Imhof, P

    2016-01-01

    Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved and the complex chemical and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps, a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as of a number of alternative routes. © 2016 Elsevier Inc. All rights reserved.
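
    Since a transition network is just a weighted graph over intermediate states, favorable pathways can be found with standard shortest-path searches. A toy sketch with Dijkstra's algorithm (summing edge barriers is a crude stand-in; one might instead minimize the maximum barrier along a path; all states and values are invented):

```python
import heapq

# Hypothetical transition network: nodes are intermediate states, edge
# weights are activation barriers (kcal/mol, invented values).
network = {
    "reactant":      [("intermediate1", 12.0), ("intermediate2", 15.0)],
    "intermediate1": [("intermediate3", 8.0)],
    "intermediate2": [("intermediate3", 4.0), ("product", 18.0)],
    "intermediate3": [("product", 6.0)],
    "product":       [],
}

def lowest_barrier_path(graph, start, goal):
    """Dijkstra search: returns the path minimizing the summed barriers."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, barrier in graph[node]:
            heapq.heappush(queue, (cost + barrier, nxt, path + [nxt]))
    return float("inf"), []

cost, path = lowest_barrier_path(network, "reactant", "product")
print(" -> ".join(path), f"(total barrier {cost:.1f})")
```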

  5. Carbonate rock depositional models: A microfacies approach

    Energy Technology Data Exchange (ETDEWEB)

    Carozzi, A.V.

    1988-01-01

    Carbonate rocks contain more than 50% by weight carbonate minerals such as calcite, dolomite, and siderite. Understanding how these rocks form can lead to more efficient methods of petroleum exploration. Microfacies analysis techniques can be used as a method of predicting models of sedimentation for carbonate rocks. Microfacies in carbonate rocks can be seen clearly only in thin sections under a microscope. Thin section analysis of carbonate rocks is a tool that can be used to understand depositional environments, the diagenetic evolution of carbonate rocks, and the formation of porosity and permeability in carbonate rocks. Microfacies analysis techniques are applied to understanding the origin and formation of carbonate ramps, carbonate platforms, and carbonate slopes and basins. This book will be of interest to students and professionals concerned with the disciplines of sedimentary petrology, sedimentology, petroleum geology, and paleontology.

  6. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approaches to, and the development and validation process of, risk prediction models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.
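
    A minimal side-by-side of the two families of approach compared in the review, fitted to synthetic data with scikit-learn (the library choice, the data and the scores are illustrative, not drawn from the nineteen reviewed studies):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic risk data: two predictors, binary outcome, nonlinear boundary.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = ((X[:, 0] ** 2 + X[:, 1]) > 1.0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stat_model = LogisticRegression().fit(X_tr, y_tr)          # statistical approach
ann_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X_tr, y_tr)  # ANN approach

for name, m in [("logistic regression", stat_model), ("neural network", ann_model)]:
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(f"{name:20s} validation AUC = {auc:.3f}")
```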

  7. A dual model approach to ground water recovery trench design

    International Nuclear Information System (INIS)

    Clodfelter, C.L.; Crouch, M.S.

    1992-01-01

    The design of trenches for contaminated ground water recovery must consider several variables. This paper presents a dual-model approach for effectively recovering contaminated ground water migrating toward a trench by advection. The approach involves an analytical model to determine the vertical influence of the trench and a numerical flow model to determine the capture zone within the trench and the surrounding aquifer. The analytical model is utilized by varying trench dimensions and head values to design a trench which meets the remediation criteria. The numerical flow model is utilized to select the type of backfill and location of sumps within the trench. The dual-model approach can be used to design a recovery trench which effectively captures advective migration of contaminants in the vertical and horizontal planes

  8. Virtuous organization: A structural equation modeling approach

    Directory of Open Access Journals (Sweden)

    Majid Zamahani

    2013-02-01

    Full Text Available For years, the idea of virtue was unfavorable among researchers, and virtues were traditionally considered culture-specific, relativistic and supposedly associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have recently been taken seriously among organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resources, structure and processes, care for community and the virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on the other components. The data used in this study consist of questionnaire responses from employees of Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and a total of 211 valid responses were received. Our results have revealed that all five variables have positive and significant impacts on the virtuous organization. Among the five variables, organizational culture has the strongest direct impact (0.80) and human resources have the strongest total impact (0.844) on the virtuous organization.

  9. A systemic approach for modeling soil functions

    Science.gov (United States)

    Vogel, Hans-Jörg; Bartke, Stephan; Daedlow, Katrin; Helming, Katharina; Kögel-Knabner, Ingrid; Lang, Birgit; Rabot, Eva; Russell, David; Stößel, Bastian; Weller, Ulrich; Wiesmeier, Martin; Wollschläger, Ute

    2018-03-01

    The central importance of soil for the functioning of terrestrial systems is increasingly recognized. Critically relevant for water quality, climate control, nutrient cycling and biodiversity, soil provides more functions than just the basis for agricultural production. Nowadays, soil is increasingly under pressure as a limited resource for the production of food, energy and raw materials. This has led to an increasing demand for concepts assessing soil functions so that they can be adequately considered in decision-making aimed at sustainable soil management. The various soil science disciplines have progressively developed highly sophisticated methods to explore the multitude of physical, chemical and biological processes in soil. It is not obvious, however, how the steadily improving insight into soil processes may contribute to the evaluation of soil functions. Here, we present a new systemic modeling framework that allows for a consistent coupling between reductionist yet observable indicators for soil functions and detailed process understanding. It is based on the mechanistic relationships between soil functional attributes, each explained by a network of interacting processes as derived from scientific evidence. The non-linear character of these interactions produces stability and resilience of soil with respect to functional characteristics. We anticipate that this new conceptual framework will integrate the various soil science disciplines and help identify important future research questions at the interface between disciplines. It allows the overwhelming complexity of soil systems to be adequately coped with and paves the way for steadily improving our capability to assess soil functions based on scientific understanding.

  10. Modeling of phase equilibria with CPA using the homomorph approach

    DEFF Research Database (Denmark)

    Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios

    2011-01-01

    For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...

  11. Modular Modelling and Simulation Approach - Applied to Refrigeration Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær; Stoustrup, Jakob

    2008-01-01

    This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...

  12. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  13. The Intersystem Model of Psychotherapy: An Integrated Systems Treatment Approach

    Science.gov (United States)

    Weeks, Gerald R.; Cross, Chad L.

    2004-01-01

    This article introduces the intersystem model of psychotherapy and discusses its utility as a truly integrative and comprehensive approach. The foundation of this conceptually complex approach comes from dialectic metatheory; hence, its derivation requires an understanding of both foundational and integrational constructs. The article provides a…

  14. Bystander Approaches: Empowering Students to Model Ethical Sexual Behavior

    Science.gov (United States)

    Lynch, Annette; Fleming, Wm. Michael

    2005-01-01

    Sexual violence on college campuses is well documented. Prevention education has emerged as an alternative to victim-- and perpetrator--oriented approaches used in the past. One sexual violence prevention education approach focuses on educating and empowering the bystander to become a point of ethical intervention. In this model, bystanders to…

  15. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
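
    The "local level with seasonal" specification selected in the paper maps directly onto standard structural time series tooling. A sketch with statsmodels on synthetic monthly counts (the library choice and the data are assumptions for illustration, not the authors' code):

```python
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

# Synthetic monthly accident counts with a slowly drifting level and a
# 12-month seasonal pattern (invented; stands in for the 2001-2012 data).
rng = np.random.default_rng(1)
months = 144
level = np.cumsum(rng.normal(0, 2, months)) + 500
season = 30 * np.sin(2 * np.pi * np.arange(months) / 12)
y = level + season + rng.normal(0, 10, months)

# Local level with a seasonal component, as in the best-fitted model.
model = UnobservedComponents(y, level="local level", seasonal=12)
result = model.fit(disp=False)

print(f"AIC = {result.aic:.1f}")   # criterion used for model selection
print(result.forecast(steps=12))   # validation-style 12-month prediction
```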

  16. Numerical approaches to expansion process modeling

    Directory of Open Access Journals (Sweden)

    G. V. Alekseev

    2017-01-01

    Full Text Available Forage production is currently undergoing a period of intensive renovation and introduction of the most advanced technologies and equipment. Methods such as barley toasting, grain extrusion, steaming and flattening of grain, fluidized-bed explosion, infrared treatment of cereals and legumes followed by flattening, and one-time or two-time granulation of purified whole grain without humidification in matrix presses followed by grinding of the granules are used more and more often. These methods require special apparatuses, machines and auxiliary equipment, created on the basis of different methods of compiled mathematical models. In roasting, simulating the heat fields arising in the working chamber provides conditions for the decomposition of a portion of the starch to monosaccharides, which makes the grain sweetish, although due to protein denaturation the digestibility of the protein and the availability of amino acids decrease somewhat. Grain is roasted mainly for young animals in order to teach them to eat feed at an early age, to stimulate the secretory activity of digestion, and to better develop the masticatory muscles. In addition, the high temperature is detrimental to bacterial contamination and various types of fungi, which largely avoids possible diseases of the gastrointestinal tract. This method has found wide application directly on farms. Legumes such as peas, soy, lupine and lentils are also used in feeding animals. These feeds are preliminarily ground and then cooked for 1 hour or steamed for 30-40 minutes in the feed mill. Such processing allows inactivating the anti-nutrients they contain, which otherwise reduce the effectiveness of their use. After processing, legumes are used as protein supplements in an amount of 25-30% of the total nutritional value of the diet. But it is recommended to cook and steam only grain of good quality. A poor-quality grain that has been stored for a long time and damaged by pathogenic microflora is subject to

  17. Modelling and Generating Ajax Applications : A Model-Driven Approach

    NARCIS (Netherlands)

    Gharavi, V.; Mesbah, A.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008. AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction

  18. Understanding Gulf War Illness: An Integrative Modeling Approach

    Science.gov (United States)

    2017-10-01

    using a novel mathematical model. The computational biology approach will enable the consortium to quickly identify targets of dysfunction and find... computer/mathematical paradigms for evaluation of treatment strategies... develop pilot clinical trials on the basis of animal studies... the goal of testing chemical treatments. The immune and autonomic biomarkers will be tested using a computational modeling approach allowing for a

  19. Complementary Constrains on Component based Multiphase Flow Problems, Should It Be Implemented Locally or Globally?

    Science.gov (United States)

    Shao, H.; Huang, Y.; Kolditz, O.

    2015-12-01

    Multiphase flow problems are numerically difficult to solve, as they often contain nonlinear phase transition phenomena. A conventional technique is to introduce complementarity constraints whereby fluid properties such as liquid saturations are confined within a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities, which can then be handled numerically by optimization algorithms. In this work, two different approaches utilizing complementarity constraints based on the persistent primary variables formulation [4] are implemented and investigated. The first approach, proposed by Marchand et al. [1], uses "local complementarity constraints", i.e. the constraints are coupled with the local constitutive equations. The second approach [2],[3], namely the "global complementarity constraints", applies the constraints globally together with the mass conservation equation. We will discuss how these two approaches are applied to solve the non-isothermal compositional multiphase flow problem with phase change phenomena. Several benchmarks will be presented for investigating the overall numerical performance of the different approaches, and their advantages and disadvantages will be concluded. References [1] E. Marchand, T. Mueller and P. Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model, Computational Geosciences 17(2): 431-442, (2013). [2] A. Lauser, C. Hager, R. Helmig, B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Water Resour., 34, (2011), 957-966. [3] J. Jaffré and A. Sboui. Henry's Law and Gas Phase Disappearance. Transp. Porous Media. 82, (2010), 521-526. [4] A. Bourgeat, M. Jurak and F. Smaï. Two-phase partially miscible flow and transport modeling in
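
    Complementarity constraints of this kind (e.g. "either the gas phase is present or the dissolved concentration sits at its solubility limit") are often rewritten with a nonsmooth NCP function such as min(a, b) = 0, so that a Newton-type solver can treat them as ordinary equations. A toy scalar phase-appearance sketch with the physics stripped away and all values invented:

```python
from scipy.optimize import fsolve

# Toy phase-appearance problem with gas saturation s_g and dissolved
# concentration c. Complementarity: s_g >= 0, (c_sat - c) >= 0 and
# s_g * (c_sat - c) = 0, rewritten as min(s_g, c_sat - c) = 0.
c_sat = 1.0      # solubility limit (invented)
c_total = 1.4    # total component mass per unit volume (invented)
rho_gas = 2.5    # toy gas-phase density of the component

def residual(x):
    s_g, c = x
    mass_balance = (1 - s_g) * c + s_g * rho_gas - c_total
    complementarity = min(s_g, c_sat - c)    # NCP reformulation
    return [mass_balance, complementarity]

s_g, c = fsolve(residual, x0=[0.1, 0.9])
print(f"gas saturation = {s_g:.3f}, dissolved concentration = {c:.3f}")
```

    If c_total were below c_sat, the same residual would return s_g = 0 with c = c_total, i.e. the gas phase disappears without any change to the solver.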

  20. A Structural Modeling Approach to a Multilevel Random Coefficients Model.

    Science.gov (United States)

    Rovine, Michael J.; Molenaar, Peter C. M.

    2000-01-01

    Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)

  1. Data Analysis A Model Comparison Approach, Second Edition

    CERN Document Server

    Judd, Charles M; Ryan, Carey S

    2008-01-01

    This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T

  2. A Component-based Software Development and Execution Framework for CAx Applications

    Directory of Open Access Journals (Sweden)

    N. Matsuki

    2004-01-01

    Full Text Available Digitalization of the manufacturing process and technologies is regarded as the key to increased competitive ability. The MZ-Platform infrastructure is a component-based software development framework, designed for supporting enterprises to enhance digitalized technologies using software tools and CAx components in a self-innovative way. In the paper we show the algorithm, system architecture, and a CAx application example on MZ-Platform. We also propose a new parametric data structure based on MZ-Platform.

  3. A novel approach to modeling and diagnosing the cardiovascular system

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)

    1995-07-01

    A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  4. Age estimation in forensic anthropology: quantification of observer error in phase versus component-based methods.

    Science.gov (United States)

    Shirley, Natalie R; Ramirez Montes, Paula Andrea

    2015-01-01

    The purpose of this study was to assess observer error in phase versus component-based scoring systems used to develop age estimation methods in forensic anthropology. A method preferred by forensic anthropologists in the AAFS was selected for this evaluation (the Suchey-Brooks method for the pubic symphysis). The Suchey-Brooks descriptions were used to develop a corresponding component-based scoring system for comparison. Several commonly used reliability statistics (kappa, weighted kappa, and the intraclass correlation coefficient) were calculated to assess observer agreement between two observers and to evaluate the efficacy of each of these statistics for this study. The linear weighted kappa was determined to be the most suitable measure of observer agreement. The results show that a component-based system offers the possibility for more objective scoring than a phase system as long as the coding possibilities for each trait do not exceed three states of expression, each with as little overlap as possible. © 2014 American Academy of Forensic Sciences.
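
    The linear weighted kappa the study settled on penalizes disagreements in proportion to their distance on the ordinal phase scale. A self-contained sketch on invented two-observer phase scores (six Suchey-Brooks-style categories; the data are not from the study):

```python
import numpy as np

def linear_weighted_kappa(rater1, rater2, n_categories):
    """Cohen's kappa with linear agreement weights for ordinal scores."""
    O = np.zeros((n_categories, n_categories))
    for a, b in zip(rater1, rater2):
        O[a, b] += 1                       # observed cross-classification
    O /= O.sum()
    idx = np.arange(n_categories)
    W = 1 - np.abs(idx[:, None] - idx[None, :]) / (n_categories - 1)
    E = np.outer(O.sum(axis=1), O.sum(axis=0))   # chance-expected matrix
    return (np.sum(W * O) - np.sum(W * E)) / (1 - np.sum(W * E))

obs1 = [0, 1, 2, 2, 3, 4, 5, 3, 2, 1]      # invented phase scores, observer 1
obs2 = [0, 1, 2, 3, 3, 4, 4, 3, 2, 2]      # invented phase scores, observer 2
print(f"linear weighted kappa = {linear_weighted_kappa(obs1, obs2, 6):.3f}")
```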

  5. Synthesis of industrial applications of local approach to fracture models

    International Nuclear Information System (INIS)

    Eripret, C.

    1993-03-01

    This report gathers different applications of local approach to fracture models to various industrial configurations, such as nuclear pressure vessel steel, cast duplex stainless steels, or primary circuit welds such as bimetallic welds. Because the models are developed on the basis of microstructural observations, damage mechanism analyses and the fracture process, the local approach to fracture proves able to solve problems where classical fracture mechanics concepts fail. The local approach therefore appears to be a powerful tool which complements the standard fracture criteria used in the nuclear industry by exhibiting where and why those classical concepts become invalid. (author). 1 tab., 18 figs., 25 refs

  6. Mathematical models for therapeutic approaches to control HIV disease transmission

    CERN Document Server

    Roy, Priti Kumar

    2015-01-01

    The book discusses different therapeutic approaches based on different mathematical models to control the HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate the deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence and also tries to seek viewpoints of the same issue from different angles with various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models on HIV/AIDS. It also guides research scientists, working in the periphery of mathematical modeling, and helps them to explore a hypothetical method by examining its consequences in the form of a mathematical modelling and making some scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...

  7. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  8. A Model-Driven Approach for Telecommunications Network Services Definition

    Science.gov (United States)

    Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.

    The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool, with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure support for collaborative work and for checking properties on models.

  9. An approach for activity-based DEVS model specification

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2016-01-01

    Creation of DEVS models has been advanced through Model Driven Architecture and its frameworks. The overarching role of the frameworks has been to help develop model specifications in a disciplined fashion. Frameworks can provide intermediary layers between the higher level mathematical models and their corresponding software specifications from both structural and behavioral aspects. Unlike structural modeling, developing models to specify the behavior of systems is known to be harder and more complex, particularly when operations with non-trivial control schemes are required. In this paper, we propose specifying activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental
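
    For orientation, a parallel DEVS atomic model is characterized by its state set together with the time-advance, internal-transition, external-transition and output functions. A minimal Python rendering of that interface with a toy processor (this follows the generic DEVS formalism, not the paper's activity-based framework):

```python
class AtomicDEVS:
    """Skeleton of a DEVS atomic model: subclasses override four functions."""
    def time_advance(self, s):               # ta(s): lifetime of state s
        raise NotImplementedError
    def internal_transition(self, s):        # delta_int(s)
        raise NotImplementedError
    def external_transition(self, s, e, x):  # delta_ext(s, elapsed, input)
        raise NotImplementedError
    def output(self, s):                     # lambda(s), fired before delta_int
        raise NotImplementedError

class Processor(AtomicDEVS):
    """Toy processor: idle until a job arrives, then busy for a fixed time."""
    SERVICE_TIME = 5.0

    def time_advance(self, s):
        return self.SERVICE_TIME if s["busy"] else float("inf")

    def internal_transition(self, s):
        return {"busy": False, "job": None}

    def external_transition(self, s, e, x):
        return s if s["busy"] else {"busy": True, "job": x}

    def output(self, s):
        return s["job"]

p = Processor()
s = p.external_transition({"busy": False, "job": None}, 0.0, "job-1")
print(p.time_advance(s), p.output(s))   # 5.0 job-1
```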

  10. Modelling diversity in building occupant behaviour: a novel statistical approach

    DEFF Research Database (Denmark)

    Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm

    2016-01-01

    We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...

  11. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
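
    A compact illustration of the two families described in the review, on a toy two-parameter model: normalized finite-difference local sensitivities at a nominal point, and a crude Monte Carlo variance decomposition standing in for global (Sobol-type) first-order indices. The model and all values are invented:

```python
import numpy as np

def model(k1, k2):
    """Toy steady-state response of a two-parameter system."""
    return k1 / (k1 + k2) * np.exp(-0.1 * k2)

k0 = np.array([2.0, 1.0])          # nominal parameter values (invented)

# Local sensitivity: normalized derivatives at the nominal point.
eps = 1e-6
y0 = model(*k0)
for i, name in enumerate(["k1", "k2"]):
    kp = k0.copy()
    kp[i] += eps
    s_local = (model(*kp) - y0) / eps * k0[i] / y0
    print(f"local sensitivity of {name}: {s_local:+.3f}")

# Global sensitivity: first-order variance-based measure over wide ranges.
rng = np.random.default_rng(0)
K = rng.uniform([0.5, 0.25], [8.0, 4.0], size=(20000, 2))
Y = model(K[:, 0], K[:, 1])
for i, name in enumerate(["k1", "k2"]):
    # bin on parameter i; the variance of the conditional means approximates
    # the first-order contribution of that parameter to the output variance
    edges = np.quantile(K[:, i], np.linspace(0, 1, 21)[1:-1])
    bins = np.digitize(K[:, i], edges)
    cond_means = np.array([Y[bins == b].mean() for b in range(20)])
    print(f"global first-order index of {name}: {cond_means.var() / Y.var():.3f}")
```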

  12. A qualitative evaluation approach for energy system modelling frameworks

    DEFF Research Database (Denmark)

    Wiese, Frauke; Hilpert, Simon; Kaldemeyer, Cord

    2018-01-01

    properties define how useful it is in regard to the existing challenges. For energy system models, evaluation methods exist, but we argue that many decisions upon properties are rather made on the model generator or framework level. Thus, this paper presents a qualitative approach to evaluate frameworks...

  13. Modeling Alaska boreal forests with a controlled trend surface approach

    Science.gov (United States)

    Mo Zhou; Jingjing Liang

    2012-01-01

    An approach of Controlled Trend Surface was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...

  14. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  15. Towards modeling future energy infrastructures - the ELECTRA system engineering approach

    DEFF Research Database (Denmark)

    Uslar, Mathias; Heussen, Kai

    2016-01-01

    of the IEC 62559 use case template as well as needed changes to cope particularly with the aspects of controller conflicts and Greenfield technology modeling. From the original envisioned use of the standards, we show a possible transfer on how to properly deal with a Greenfield approach when modeling....

  16. A Model-Driven Approach to e-Course Management

    Science.gov (United States)

    Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana

    2018-01-01

    This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…

  17. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art review of multidimensional modeling design.

  18. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new promising methodology for giving an on-line state description of sewer systems...... of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control and implemented...

  19. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo

  20. Learning the Task Management Space of an Aircraft Approach Model

    Science.gov (United States)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.

  1. QAM: PROPOSED MODEL FOR QUALITY ASSURANCE IN CBSS

    Directory of Open Access Journals (Sweden)

    Latika Kharb

    2015-08-01

    Full Text Available Component-based software engineering (CBSE) / Component-Based Development (CBD) lays emphasis on decomposition of the engineered systems into functional or logical components with well-defined interfaces used for communication across the components. The component-based software development approach is based on the idea of developing software systems by selecting appropriate off-the-shelf components and then assembling them with a well-defined software architecture. Because the new software development paradigm is much different from the traditional approach, quality assurance for component-based software development is a new topic in the software engineering research community. Because component-based software systems are developed on an underlying process different from that of traditional software, their quality assurance model should address both the process of the components and the process of the overall system. Quality assurance for component-based software systems during the life cycle is used to analyze the components for the achievement of high quality component-based software systems. Although some quality assurance techniques and component-based approaches to software engineering have been studied, there is still no clear and well-defined standard or guidelines for component-based software systems. Therefore, identification of the quality assurance characteristics, quality assurance models, quality assurance tools and quality assurance metrics is urgently needed. As a major contribution of this paper, I have proposed QAM: a Quality Assurance Model for component-based software development, which covers component requirement analysis, component development, component certification, component architecture design, integration, testing, and maintenance.

  2. A Model for the Measurement of the Runtime Testability of Component-based Systems

    NARCIS (Netherlands)

    González, A.; Piel, E.; Gross, H.G.

    2009-01-01

    Version note: Paper submitted for review at the 5th AMOST Workshop. Runtime testing is emerging as the solution for the integration and validation of software systems where traditional development-time integration testing cannot be performed, such as Systems of Systems or Service Oriented

  3. A novel approach of modeling continuous dark hydrogen fermentation.

    Science.gov (United States)

    Alexandropoulou, Maria; Antonopoulou, Georgia; Lyberatos, Gerasimos

    2018-02-01

    In this study a novel modeling approach for describing fermentative hydrogen production in a continuous stirred tank reactor (CSTR) was developed, using the Aquasim modeling platform. This model accounts for the key metabolic reactions taking place in a fermentative hydrogen producing reactor, using fixed stoichiometry but different reaction rates. Biomass yields are determined based on bioenergetics. The model is capable of describing very well the variation in the distribution of metabolic products for a wide range of hydraulic retention times (HRT). The modeling approach is demonstrated using the experimental data obtained from a CSTR, fed with food industry waste (FIW), operating at different HRTs. The kinetic parameters were estimated through fitting to the experimental results. Hydrogen and total biogas production rates were predicted very well by the model, validating the basic assumptions regarding the implicated stoichiometric biochemical reactions and their kinetic rates. Copyright © 2017 Elsevier Ltd. All rights reserved.
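
    A stripped-down version of the kind of mass balance such a CSTR model is built from: one lumped substrate, one biomass, Monod kinetics, and a fixed hydrogen yield (all parameters invented; the actual Aquasim model tracks several metabolic reactions with bioenergetics-based biomass yields):

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Yxs = 0.3, 2.0, 0.1   # 1/h, g/L, g biomass per g substrate
Y_h2 = 0.15                        # L H2 per g substrate consumed (toy value)
HRT = 12.0                         # hydraulic retention time, h
S_in = 20.0                        # feed substrate concentration, g/L

def cstr(t, y):
    S, X = y
    mu = mu_max * S / (Ks + S)                 # Monod growth kinetics
    dS = (S_in - S) / HRT - mu * X / Yxs       # substrate balance
    dX = -X / HRT + mu * X                     # biomass balance (no recycle)
    return [dS, dX]

sol = solve_ivp(cstr, [0, 500], [S_in, 0.1], rtol=1e-8)
S, X = sol.y[:, -1]
mu = mu_max * S / (Ks + S)
h2_rate = Y_h2 * mu * X / Yxs                  # L H2 per litre reactor per h
print(f"steady state: S = {S:.2f} g/L, X = {X:.2f} g/L, "
      f"H2 rate = {h2_rate:.3f} L/(L*h)")
```

    Lowering the HRT below 1/mu_max (about 3.3 h with these invented values) washes the biomass out, which is the qualitative HRT dependence such a model is meant to capture.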

  4. An integrated modeling approach to age invariant face recognition

    Science.gov (United States)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This research study proposes a novel method for face recognition based on anthropometric features that makes use of an integrated approach comprising global and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers the individual aging patterns while a global model captures the general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k nearest neighbor approach for building the personalized and global models, and regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-Net database for checking the results of our technique and achieved a 65 percent rank-1 identification rate.

  5. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results confirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  6. On a model-based approach to radiation protection

    International Nuclear Information System (INIS)

    Waligorski, M.P.R.

    2002-01-01

    There is a preoccupation with linearity and absorbed dose as the basic quantifiers of radiation hazard. An alternative is the fluence approach, whereby radiation hazard may be evaluated, at least in principle, via an appropriate action cross section. In order to compare these approaches, it may be useful to discuss them as quantitative descriptors of survival and transformation-like endpoints in cell cultures in vitro - a system thought to be relevant to modelling radiation hazard. If absorbed dose is used to quantify these biological endpoints, then non-linear dose-effect relations have to be described, and, e.g. after doses of densely ionising radiation, dose-correction factors as high as 20 are required. In the fluence approach only exponential effect-fluence relationships can be readily described. Neither approach alone exhausts the scope of experimentally observed dependencies of effect on dose or fluence. Two-component models, incorporating a suitable mixture of the two approaches, are required. An example of such a model is the cellular track structure theory developed by Katz over thirty years ago. The practical consequences of modelling radiation hazard using this mixed two-component approach are discussed. (author)
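
    One compact way to write such a mixed two-component description is the Katz-style split into an "ion-kill" term, exponential in the fluence, and a multi-target "gamma-kill" term in the dose (a sketch of the general form only, not necessarily the exact expression used in the paper):

        S = e^{-\sigma \Phi} \left[ 1 - \left( 1 - e^{-D_{\gamma}/D_{0}} \right)^{m} \right]

    where \sigma is the action cross section, \Phi the particle fluence, D_{\gamma} the dose assigned to the gamma-kill mode, D_{0} the characteristic dose and m the target number.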

  7. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also
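
    As a sketch of the estimation idea only (the extended quasi-likelihood machinery in the paper is richer), assume an identity link and a quadratic variance function V(mu) = mu^2 + c; the regression parameters then solve quasi-score equations, which iteratively reweighted least squares approximates:

      import numpy as np

      def fit_quasi(y, x, c=1.0, n_iter=20):
          """Quasi-likelihood-style fit of y = a + b*x under an assumed quadratic
          variance V(mu) = mu**2 + c, via iteratively reweighted least squares."""
          a, b = y.mean(), 0.0
          for _ in range(n_iter):
              mu = a + b * x
              w = 1.0 / (mu**2 + c)                  # inverse-variance weights
              X = np.column_stack([np.ones_like(x), x])
              WX = X * w[:, None]
              a, b = np.linalg.solve(X.T @ WX, WX.T @ y)
          return a, b

      rng = np.random.default_rng(0)
      x = np.linspace(1.0, 10.0, 200)
      mu = 2.0 + 0.5 * x
      y = mu + rng.normal(scale=np.sqrt(mu**2 + 1.0))    # noise follows the variance model
      print(fit_quasi(y, x))                             # roughly (2.0, 0.5)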

  8. A review of function modeling: Approaches and applications

    OpenAIRE

    Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.

    2008-01-01

    This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research fields of artificial intelligence, design theory, and maintenance are discussed. In this discussion the goals are to highlight the features of various classical approaches in relation to FM, to delin...

  9. Top-down approach to unified supergravity models

    International Nuclear Information System (INIS)

    Hempfling, R.

    1994-03-01

    We introduce a new approach for studying unified supergravity models. In this approach all the parameters of the grand unified theory (GUT) are fixed by imposing the corresponding number of low energy observables. This determines the remaining particle spectrum whose dependence on the low energy observables can now be investigated. We also include some SUSY threshold corrections that have previously been neglected. In particular the SUSY threshold corrections to the fermion masses can have a significant impact on the Yukawa coupling unification. (orig.)

  10. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricanes Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematics-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  11. An object-oriented approach to energy-economic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wise, M.A.; Fox, J.A.; Sands, R.D.

    1993-12-01

    In this paper, the authors discuss their experience in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to programming it in C++, a standard object-oriented programming language, is then detailed. The authors then discuss the main differences between writing the object-oriented program versus a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach based on their experience in building energy-economic models with procedure-oriented approaches and languages.
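
    A minimal sketch, in Python rather than the authors' C++, of the kind of class hierarchy described: a base producer built around a standard production function (Cobb-Douglas assumed here as the stand-in), with sector-specific behaviour added by subclassing. All names and numbers are illustrative.

      from dataclasses import dataclass

      @dataclass
      class Producer:
          alpha: float                  # output elasticity of capital
          scale: float = 1.0
          def output(self, capital: float, labor: float) -> float:
              # Cobb-Douglas production function as the shared base behaviour
              return self.scale * capital**self.alpha * labor**(1.0 - self.alpha)

      class EnergyProducer(Producer):
          def output(self, capital: float, labor: float) -> float:
              # a sector-specific override, e.g. applying conversion losses
              return 0.9 * super().output(capital, labor)

      print(EnergyProducer(alpha=0.3).output(capital=100.0, labor=50.0))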

  12. Multi-model approach to characterize human handwriting motion.

    Science.gov (United States)

    Chihi, I; Abdelkrim, A; Benrejeb, M

    2016-02-01

    This paper deals with characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography signals (EMG). In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and EMG signals during the handwriting act. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
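
    The record names the algorithm but not its details; below is a standard Recursive Least Squares sketch, with toy data standing in for the EMG-derived regressors, of the kind that could estimate one sub-model's parameters.

      import numpy as np

      def rls(Phi, y, lam=0.99, delta=1e3):
          """Recursive Least Squares for y_k = phi_k . theta + e_k;
          lam is the forgetting factor, delta initializes the covariance."""
          n = Phi.shape[1]
          theta = np.zeros(n)
          P = delta * np.eye(n)
          for phi, yk in zip(Phi, y):
              k = P @ phi / (lam + phi @ P @ phi)      # gain vector
              theta = theta + k * (yk - phi @ theta)   # parameter update
              P = (P - np.outer(k, phi @ P)) / lam     # covariance update
          return theta

      rng = np.random.default_rng(1)
      u = rng.normal(size=500)                         # stand-in input signal
      y = 0.8 * u + 0.3 * np.roll(u, 1)                # 'true' 2-tap sub-model
      Phi = np.column_stack([u, np.roll(u, 1)])
      print(rls(Phi, y))                               # approaches [0.8, 0.3]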

  13. Wave Resource Characterization Using an Unstructured Grid Modeling Approach

    Directory of Open Access Journals (Sweden)

    Wei-Cheng Wu

    2018-03-01

    Full Text Available This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization, using the unstructured grid Simulating WAve Nearshore (SWAN) model coupled with a nested grid WAVEWATCH III® (WWIII) model. The flexibility of models with various spatial resolutions and the effects of open boundary conditions simulated by a nested grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured grid-modeling approach for flexible model resolution and good model skill in simulating the six wave resource parameters recommended by the International Electrotechnical Commission, in comparison to the observed data in year 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the ST2 physics package's ability to predict wave power density for large waves, which is important for wave resource assessment, load calculation of devices, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than with the ST2 physics package. This study demonstrated that the unstructured grid wave modeling approach, driven by regional nested grid WWIII outputs along with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, i.e. O(10²) km.

  14. A comprehensive dynamic modeling approach for giant magnetostrictive material actuators

    International Nuclear Information System (INIS)

    Gu, Guo-Ying; Zhu, Li-Min; Li, Zhi; Su, Chun-Yi

    2013-01-01

    In this paper, a comprehensive modeling approach for a giant magnetostrictive material actuator (GMMA) is proposed based on the description of nonlinear electromagnetic behavior, the magnetostrictive effect and frequency response of the mechanical dynamics. It maps the relationships between current and magnetic flux at the electromagnetic part to force and displacement at the mechanical part in a lumped parameter form. Towards this modeling approach, the nonlinear hysteresis effect of the GMMA appearing only in the electrical part is separated from the linear dynamic plant in the mechanical part. Thus, a two-module dynamic model is developed to completely characterize the hysteresis nonlinearity and the dynamic behaviors of the GMMA. The first module is a static hysteresis model to describe the hysteresis nonlinearity, and the cascaded second module is a linear dynamic plant to represent the dynamic behavior. To validate the proposed dynamic model, an experimental platform is established. Then, the linear dynamic part and the nonlinear hysteresis part of the proposed model are identified in sequence. For the linear part, an approach based on axiomatic design theory is adopted. For the nonlinear part, a Prandtl–Ishlinskii model is introduced to describe the hysteresis nonlinearity and a constrained quadratic optimization method is utilized to identify its coefficients. Finally, experimental tests are conducted to demonstrate the effectiveness of the proposed dynamic model and the corresponding identification method. (paper)
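
    The Prandtl–Ishlinskii operator used for the hysteresis module has a compact standard form: a weighted superposition of play (backlash) operators. Below is a minimal sketch with illustrative thresholds and weights; in the paper the weights are identified by constrained quadratic optimization.

      import numpy as np

      def play(u, r, w0=0.0):
          """Play (backlash) operator with threshold r, evaluated over a signal u."""
          w = np.empty_like(u)
          prev = w0
          for i, ui in enumerate(u):
              prev = min(max(prev, ui - r), ui + r)    # clamp state into [u-r, u+r]
              w[i] = prev
          return w

      def prandtl_ishlinskii(u, radii, weights):
          """Weighted superposition of play operators."""
          return sum(wt * play(u, r) for wt, r in zip(weights, radii))

      t = np.linspace(0.0, 4.0 * np.pi, 1000)
      u = np.sin(t) * np.exp(-0.05 * t)                # illustrative input signal
      y = prandtl_ishlinskii(u, radii=[0.0, 0.2, 0.4], weights=[0.5, 0.3, 0.2])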

  15. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    Full Text Available Given the growth of e-commerce, websites play an essential role in business success. Therefore, many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of evaluation models make it difficult to integrate them into a single comprehensive model. In this paper a quantitative method has been used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment plays no role in the integration of models, and the new model takes its validity from the 93 previous models and the systematic quantitative approach.

  16. Smeared crack modelling approach for corrosion-induced concrete damage

    DEFF Research Database (Denmark)

    Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik

    2017-01-01

    In this paper a smeared crack modelling approach is used to simulate corrosion-induced damage in reinforced concrete. The presented modelling approach utilizes a thermal analogy to mimic the expansive nature of solid corrosion products, while taking into account the penetration of corrosion products into the surrounding concrete, non-uniform precipitation of corrosion products, and creep. To demonstrate the applicability of the presented modelling approach, numerical predictions in terms of corrosion-induced deformations as well as formation and propagation of micro- and macrocracks were compared with experimental observations of corrosion-induced damage phenomena in reinforced concrete. Moreover, good agreements were also found between experimental and numerical data for corrosion-induced deformations along the circumference of the reinforcement.

  17. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    ... Their development, however, has largely relied on experiment-based trial and error approaches, and while these do not require validation, they can be time consuming and resource intensive. Also, one may ask: can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply a model-based synthesis method to systematically generate and evaluate alternatives in the first stage, and an experiment- and model-based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on "known" unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based ...

  18. METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Gorbenkova Elena Vladimirovna

    2017-10-01

    Full Text Available Subject: the paper describes the research results on validation of a rural settlement development model. The basic methods and approaches for solving the problem of assessment of the urban and rural settlement development efficiency are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements, as well as the authors' method for assessing the level of agro-town development, the systems/factors that are necessary for a rural settlement's sustainable development are identified. Results: we created the rural development model which consists of five major systems that include critical factors essential for achieving a sustainable development of a settlement system: ecological system, economic system, administrative system, anthropogenic (physical) system and social system (supra-structure). The methodological approaches for creating an evaluation model of rural settlements development were revealed; the basic motivating factors that provide interrelations of systems were determined; the critical factors for each subsystem were identified and substantiated. Such an approach was justified by the composition of tasks for territorial planning at the local and state administration levels. The feasibility of applying the basic Pentagon-model, which was successfully used for solving analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for rural sustainable development and also become the basis of

  19. An algebraic approach to modeling in software engineering

    International Nuclear Information System (INIS)

    Loegel, C.J.; Ravishankar, C.V.

    1993-09-01

    Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.

  20. Towards a 3d Spatial Urban Energy Modelling Approach

    Science.gov (United States)

    Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.

    2013-09-01

    Today's needs to reduce the environmental impact of energy use impose dramatic changes for energy infrastructure and existing demand patterns (e.g. buildings) corresponding to their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and equally a high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments thus require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess solar potential and heat energy demand of residential buildings which enable cities to target the building refurbishment potentials. Distributed energy systems require innovative modelling techniques where individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated, where heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs holds different advantages for both approaches. On one hand, energy system models can be enhanced with high resolution data from 3D city models and their semantic relations. Furthermore, they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones) to provide an integrated approach. On the other hand, 3D models can benefit from more detailed system description of energy infrastructure, representing dynamic phenomena and high resolution models for energy use at component level. The proposed modelling strategies

  1. Modelling of ductile and cleavage fracture by local approach

    International Nuclear Information System (INIS)

    Samal, M.K.; Dutta, B.K.; Kushwaha, H.S.

    2000-08-01

    This report describes the modelling of ductile and cleavage fracture processes by the local approach. It is now well known that the conventional fracture mechanics method based on single-parameter criteria is not adequate to model fracture processes, because of the effects of flaw size and geometry, loading type and loading rate on the fracture resistance behaviour of a structure. Hence, it is questionable to use the same fracture resistance curves as determined from standard tests in the analysis of real-life components. There is therefore a need for a method in which the parameters used for the analysis are true material properties, i.e. independent of geometry and size. One of the solutions to the above problem is the use of local approaches. These approaches have been extensively studied and applied to different materials (including SA333 Gr.6) in this report. Each method has been studied and reported in a separate section. This report has been divided into five sections. Section-I gives a brief review of the fundamentals of the fracture process. Section-II deals with modelling of ductile fracture by locally uncoupled models. In this section, the critical cavity growth parameters of the different models have been determined for the primary heat transport (PHT) piping material of the Indian pressurised heavy water reactor (PHWR). A comparative study has been done among the different models, and the dependency of the critical parameters on the stress triaxiality factor has also been studied. It is observed that Rice and Tracey's model is the most suitable one, but its parameters are not fully independent of the triaxiality factor. For this purpose, a modification to Rice and Tracey's model is suggested in Section-III. Section-IV deals with modelling of the ductile fracture process by locally coupled models. Section-V deals with modelling of the cleavage fracture process by Beremin's model, which is based on Weibull statistics.

  2. Atomistic approach for modeling metal-semiconductor interfaces

    DEFF Research Database (Denmark)

    Stradi, Daniele; Martinez, Umberto; Blom, Anders

    2016-01-01

    We present a general framework for simulating interfaces using an atomistic approach based on density functional theory and non-equilibrium Green's functions. The method includes all the relevant ingredients, such as doping and an accurate value of the semiconductor band gap, required to model realistic metal-semiconductor interfaces, and allows for a direct comparison between theory and experiments via the I–V curve. In particular, it will be demonstrated how doping and bias modify the Schottky barrier, and how finite size models (the slab approach) are unable to describe these interfaces.

  3. Systems and context modeling approach to requirements analysis

    Science.gov (United States)

    Ahuja, Amrit; Muralikrishna, G.; Patwari, Puneet; Subhrojyoti, C.; Swaminathan, N.; Vin, Harrick

    2014-08-01

    Ensuring completeness and correctness of the requirements for a complex system such as the SKA is challenging. Current system engineering practice includes developing a stakeholder needs definition, a concept of operations, and defining system requirements in terms of use cases and requirements statements. We present a method that enhances this current practice into a collection of system models with mutual consistency relationships. These include stakeholder goals, needs definition and system-of-interest models, together with a context model that participates in the consistency relationships among these models. We illustrate this approach by using it to analyze the SKA system requirements.

  4. An approach to multiscale modelling with graph grammars.

    Science.gov (United States)

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.

  5. A robust quantitative near infrared modeling approach for blend monitoring.

    Science.gov (United States)

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing near-infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch size blend run. Both models demonstrated similar performance. The small-scale method significantly reduces the total resources expended to develop near-infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
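
    The record does not state which regression method underlies the calibration models; partial least squares is the usual choice for NIR blend monitoring, so the sketch below uses it as a stand-in, on simulated spectra.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      n_samples, n_wavelengths = 60, 200
      api_fraction = rng.uniform(0.05, 0.30, n_samples)     # reference blend potency
      signature = rng.normal(size=n_wavelengths)            # assumed pure-component signal
      spectra = np.outer(api_fraction, signature)
      spectra += 0.01 * rng.normal(size=spectra.shape)      # instrument noise

      model = PLSRegression(n_components=3).fit(spectra, api_fraction)
      print("calibration R^2:", model.score(spectra, api_fraction))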

  6. Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

    OpenAIRE

    Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.

    2016-01-01

    The "interpretation through synthesis" approach to analyzing face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have the ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly dependent on the training sets and inherently on the genera...

  7. Technical note: Comparison of methane ebullition modelling approaches used in terrestrial wetland models

    Science.gov (United States)

    Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo

    2018-02-01

    Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging and in consequence these processes are relatively unknown and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches were based on some kind of threshold: either a CH4 pore water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) threshold. The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes. Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied by an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed best with the measured fluxes (R² = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and model results (single horizontally homogeneous peat column). The approach should be favoured over the two other more widely used ebullition modelling approaches and researchers are encouraged to implement it into their CH4 emission models.
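
    A minimal sketch of the favoured volume-threshold (EBG) idea for a single peat layer, with illustrative numbers rather than the paper's parameterization: gas accumulates until its volume exceeds a threshold, and the excess is released as bubbles.

      def ebullition_step(gas_volume, production, v_threshold=0.10):
          """One time step for one layer; volumes as pore-space fractions."""
          gas_volume += production                     # CH4 added to the free gas phase
          released = max(0.0, gas_volume - v_threshold)
          return gas_volume - released, released

      volume, flux = 0.0, []
      for _ in range(100):
          volume, out = ebullition_step(volume, production=0.004)
          flux.append(out)
      print("total ebullition flux:", sum(flux))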

  8. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. ... a grey-box model for the specific dynamics is identified. Similarly, an on-line software sensor for detecting the occurrence of backwater phenomena can be developed by comparing the dynamics of a flow measurement with a nearby level measurement. For treatment plants it is found that grey-box models applied to on-line measurements ... With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box model ...
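
    One concrete reading of the filtering feature: with a grey-box model reduced to a random-walk state plus measurement noise, the filter is a scalar Kalman filter. A sketch with assumed noise levels, not the identified models from the paper:

      import numpy as np

      def kalman_filter(z, q=1e-3, r=0.05):
          """Scalar Kalman filter: q is the process-noise variance (stochastic term),
          r the measurement-noise variance from the sensing equipment."""
          x, p, out = z[0], 1.0, []
          for zk in z:
              p = p + q                    # predict step (random-walk dynamics)
              k = p / (p + r)              # Kalman gain
              x = x + k * (zk - x)         # correct with the on-line measurement
              p = (1.0 - k) * p
              out.append(x)
          return np.array(out)

      rng = np.random.default_rng(2)
      level = 1.0 + np.cumsum(rng.normal(0.0, 0.02, 300))    # 'true' water level
      filtered = kalman_filter(level + rng.normal(0.0, 0.2, 300))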

  9. Bianchi VI0 and III models: self-similar approach

    International Nuclear Information System (INIS)

    Belinchon, Jose Antonio

    2009-01-01

    We study several cosmological models with Bianchi VI0 and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive as regards the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other studied models, we find that the behaviours of G and Λ are related. If G behaves as a growing time function then Λ is a positive decreasing time function, but if G is decreasing then Λ is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no SS solution for a string model with time-varying constants.

  10. Environmental Radiation Effects on Mammals A Dynamical Modeling Approach

    CERN Document Server

    Smirnova, Olga A

    2010-01-01

    This text is devoted to the theoretical studies of radiation effects on mammals. It uses the framework of developed deterministic mathematical models to investigate the effects of both acute and chronic irradiation in a wide range of doses and dose rates on vital body systems including hematopoiesis, small intestine and humoral immunity, as well as on the development of autoimmune diseases. Thus, these models can contribute to the development of the system and quantitative approaches in radiation biology and ecology. This text is also of practical use. Its modeling studies of the dynamics of granulocytopoiesis and thrombocytopoiesis in humans testify to the efficiency of employment of the developed models in the investigation and prediction of radiation effects on these hematopoietic lines. These models, as well as the properly identified models of other vital body systems, could provide a better understanding of the radiation risks to health. The modeling predictions will enable the implementation of more ef...

  11. A new approach to Naturalness in SUSY models

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    We review recent results that provide a new approach to the old problem of naturalness in supersymmetric models, without relying on subjective definitions for the fine-tuning associated with fixing the EW scale (to its measured value) in the presence of quantum corrections. The approach can address in a model-independent way many questions related to this problem. The results show that naturalness and its measure (fine-tuning) are an intrinsic part of the likelihood to fit the data that includes the EW scale. One important consequence is that the additional constraint of fixing the EW scale, usually not imposed in the data fits of the models, impacts on their overall likelihood to fit the data (or χ²/ndf, ndf: number of degrees of freedom). This has negative implications for the viability of currently popular supersymmetric extensions of the Standard Model.

  12. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
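
    For reference, the criterion at the heart of the book in its standard form (not quoted from the text):

        \mathrm{AIC} = -2 \ln \mathcal{L}(\hat{\theta} \mid y) + 2K

    where \mathcal{L}(\hat{\theta} \mid y) is the maximized likelihood and K the number of estimated parameters; the candidate model with the smallest AIC is the preferred approximating model.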

  13. Merits of a Scenario Approach in Dredge Plume Modelling

    DEFF Research Database (Denmark)

    Pedersen, Claus; Chu, Amy Ling Chu; Hjelmager Jensen, Jacob

    2011-01-01

    Dredge plume modelling is a key tool for quantification of potential impacts to inform the EIA process. There are, however, significant uncertainties associated with the modelling at the EIA stage when both dredging methodology and schedule are likely to be a guess at best as the dredging...... contractor would rarely have been appointed. Simulation of a few variations of an assumed full dredge period programme will generally not provide a good representation of the overall environmental risks associated with the programme. An alternative dredge plume modelling strategy that attempts to encapsulate...... uncertainties associated with preliminary dredging programmes by using a scenario-based modelling approach is presented. The approach establishes a set of representative and conservative scenarios for key factors controlling the spill and plume dispersion and simulates all combinations of e.g. dredge, climatic...

  14. A Natural Component-Based Oxygen Indicator with In-Pack Activation for Intelligent Food Packaging.

    Science.gov (United States)

    Won, Keehoon; Jang, Nan Young; Jeon, Junsu

    2016-12-28

    Intelligent food packaging can provide consumers with reliable and correct information on the quality and safety of packaged foods. One of the key constituents in intelligent packaging is a colorimetric oxygen indicator, which is widely used to detect oxygen gas involved in food spoilage by means of a color change. Traditional oxygen indicators consisting of redox dyes and strong reducing agents have two major problems: they must be manufactured and stored under anaerobic conditions because air depletes the reductant, and their components are synthetic and toxic. To address both of these serious problems, we have developed a natural component-based oxygen indicator characterized by in-pack activation. The conventional oxygen indicator composed of synthetic and artificial components was redesigned using naturally occurring compounds (laccase, guaiacol, and cysteine). These natural components were physically separated into two compartments by a fragile barrier. Only when the barrier was broken were all of the components mixed and the function as an oxygen indicator was begun (i.e., in-pack activation). Depending on the component concentrations, the natural component-based oxygen indicator exhibited different response times and color differences. The rate of the color change was proportional to the oxygen concentration. This novel colorimetric oxygen indicator will contribute greatly to intelligent packaging for healthier and safer foods.

  15. Regularization of quantum gravity in the matrix model approach

    International Nuclear Information System (INIS)

    Ueda, Haruhiko

    1991-02-01

    We study the divergence problem of the partition function in the matrix model approach to two-dimensional quantum gravity. We propose a new model V(φ) = 1/2 Tr φ² + (g₄/N) Tr φ⁴ + (g′/N⁴)(Tr φ⁴)² and show that in the sphere case it has no divergence problem and the critical exponent is that of pure gravity. (author)

  16. PASSENGER TRAFFIC MOVEMENT MODELLING BY THE CELLULAR-AUTOMATON APPROACH

    Directory of Open Access Journals (Sweden)

    T. Mikhaylovskaya

    2009-01-01

    Full Text Available The mathematical model of passenger traffic movement developed on the basis of the cellular-automaton approach is considered. A software implementation of the cellular-automaton model of pedestrian stream movement in pedestrian subways, in the presence of obstacles and at subway structure narrowings, is presented. The optimum distances between the obstacles and the angle of subway structure narrowing that provide safe pedestrian stream movement and avoid the occurrence of traffic congestion are determined.
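
    A minimal sketch of a cellular-automaton update of this kind, on a corridor grid with an obstacle; the geometry and update rule are illustrative, not the paper's exact model.

      import numpy as np

      def step(grid, obstacles):
          """One synchronous update: walkers (1s) try to move one cell toward the
          exit (increasing column); they are blocked by obstacles and other walkers."""
          rows, cols = grid.shape
          new = np.zeros_like(grid)
          for r in range(rows):
              for c in range(cols):
                  if grid[r, c] != 1:
                      continue
                  if c + 1 == cols:
                      continue                             # walker exits the corridor
                  if grid[r, c + 1] == 0 and new[r, c + 1] == 0 and not obstacles[r, c + 1]:
                      new[r, c + 1] = 1                    # advance toward the exit
                  else:
                      new[r, c] = 1                        # blocked: wait in place
          return new

      grid = np.zeros((5, 20), dtype=int)
      grid[:, 0] = 1                                       # pedestrians enter at the left
      obstacles = np.zeros_like(grid, dtype=bool)
      obstacles[2, 10] = True                              # a single obstacle mid-corridor
      for _ in range(25):
          grid = step(grid, obstacles)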

  17. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    International Nuclear Information System (INIS)

    Klos, Richard

    2008-03-01

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment

  18. Reduced modeling of signal transduction – a modular approach

    Directory of Open Access Journals (Sweden)

    Ederer Michael

    2007-09-01

    Full Text Available Abstract Background Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results We introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows one to dissect the model into smaller modules, called layers, that can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible. Conclusion The new layer based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable. Additionally, the method provides very good

  19. A nonlinear complementarity approach for the national energy modeling system

    International Nuclear Information System (INIS)

    Gabriel, S.A.; Kydes, A.S.

    1995-01-01

    The National Energy Modeling System (NEMS) is a large-scale mathematical model that computes equilibrium fuel prices and quantities in the U.S. energy sector. At present, to generate these equilibrium values, NEMS sequentially solves a collection of linear programs and nonlinear equations. The NEMS solution procedure then incorporates the solutions of these linear programs and nonlinear equations in a nonlinear Gauss-Seidel approach. The authors describe how the current version of NEMS can be formulated as a particular nonlinear complementarity problem (NCP), thereby possibly avoiding current convergence problems. In addition, they show that the NCP format is equally valid for a more general form of NEMS. They also describe several promising approaches for solving the NCP form of NEMS based on recent Newton type methods for general NCPs. These approaches share the feature of needing to solve their direction-finding subproblems only approximately. Hence, they can effectively exploit the sparsity inherent in the NEMS NCP
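
    For reference, the standard statement of the problem class (the NEMS-specific map F collects the fuel-market equilibrium conditions): find x \in \mathbb{R}^n such that

        x \ge 0, \qquad F(x) \ge 0, \qquad x^{\mathsf{T}} F(x) = 0

    so that, componentwise, each variable is either at its bound or its residual vanishes, which is exactly the complementarity between prices and excess supplies in the equilibrium reading.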

  1. Non-frontal Model Based Approach to Forensic Face Recognition

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance

  2. A Behavioral Decision Making Modeling Approach Towards Hedging Services

    NARCIS (Netherlands)

    Pennings, J.M.E.; Candel, M.J.J.M.; Egelkraut, T.M.

    2003-01-01

    This paper takes a behavioral approach toward the market for hedging services. A behavioral decision-making model is developed that provides insight into how and why owner-managers decide the way they do regarding hedging services. Insight into those choice processes reveals information needed by

  3. Export of microplastics from land to sea. A modelling approach

    NARCIS (Netherlands)

    Siegfried, Max; Koelmans, A.A.; Besseling, E.; Kroeze, C.

    2017-01-01

    Quantifying the transport of plastic debris from river to sea is crucial for assessing the risks of plastic debris to human health and the environment. We present a global modelling approach to analyse the composition and quantity of point-source microplastic fluxes from European rivers to the sea.

  4. The Bipolar Approach: A Model for Interdisciplinary Art History Courses.

    Science.gov (United States)

    Calabrese, John A.

    1993-01-01

    Describes a college level art history course based on the opposing concepts of Classicism and Romanticism. Contends that all creative work, such as film or architecture, can be categorized according to this bipolar model. Includes suggestions for objects to study and recommends this approach for art education at all education levels. (CFR)

  5. Teaching Modeling with Partial Differential Equations: Several Successful Approaches

    Science.gov (United States)

    Myers, Joseph; Trubatch, David; Winkel, Brian

    2008-01-01

    We discuss the introduction and teaching of partial differential equations (heat and wave equations) via modeling physical phenomena, using a new approach that encompasses constructing difference equations and implementing these in a spreadsheet, numerically solving the partial differential equations using the numerical differential equation…

  6. A review of function modeling : Approaches and applications

    NARCIS (Netherlands)

    Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.

    2008-01-01

    This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research

  7. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model (see e.g. Lipton et al. [Quant.

  8. Model-independent approach for dark matter phenomenology

    Indian Academy of Sciences (India)

    We have studied the phenomenology of dark matter at the ILC and cosmic positron experiments based on model-independent approach. We have found a strong correlation between dark matter signatures at the ILC and those in the indirect detection experiments of dark matter. Once the dark matter is discovered in the ...

  9. The variational approach to the Glashow-Weinberg-Salam model

    International Nuclear Information System (INIS)

    Manka, R.; Sladkowski, J.

    1987-01-01

    The variational approach to the Glashow-Weinberg-Salam model, based on canonical quantization, is presented. It is shown that taking into consideration the Becchi-Rouet-Stora symmetry leads to the correct, temperature-dependent, effective potential. This generalization of the Weinberg-Coleman potential leads to a phase transition of the first kind

  10. Methodological Approach for Modeling of Multienzyme in-pot Processes

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia; Roman Martinez, Alicia; Sin, Gürkan

    2011-01-01

    This paper presents a methodological approach for modeling multi-enzyme in-pot processes. The methodology is exemplified stepwise through the bi-enzymatic production of N-acetyl-D-neuraminic acid (Neu5Ac) from N-acetyl-D-glucosamine (GlcNAc). In this case study, sensitivity analysis is also used ...

  11. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  12. EXTENDED MODEL OF COMPETITIVENESS THROUGH APPLICATION OF NEW APPROACH DIRECTIVES

    Directory of Open Access Journals (Sweden)

    Slavko Arsovski

    2009-03-01

    Full Text Available The basic subject of this work is a model of the impact of the New Approach directives on product quality and safety and on the competitiveness of our companies. The work rests on a working hypothesis built from experts' experience, since the relevant infrastructure for applying the New Approach directives has not been examined until now: it is not known which products or industries in Serbia fall under the New Approach directives and the CE mark, nor what the effects of using the CE mark are. This work should indicate the existing reserves in quality and product safety, the level of possible improvement in competitiveness, and the potential for increasing profit by meeting the requirements of the New Approach directives.

  13. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    Directory of Open Access Journals (Sweden)

    Wu Chi-Yeh

    2010-01-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel-based classifiers such as the support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieve. On the other hand, logic-based classifiers such as decision trees, whose constructed models are interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel-based classifier named the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel-based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel-based and two logic-based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to that delivered by the prevailing kernel-based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel-based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G2DE.
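
    A minimal Python sketch of the density-based classification idea behind such predictors (not the authors' G2DE itself; here each class is modelled by a small Gaussian mixture, and the feature matrix X and labels y are hypothetical):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_density_classifier(X, y, n_kernels=3):
            """Fit one small mixture per class; X: features, y: class labels."""
            return {label: GaussianMixture(n_components=n_kernels, random_state=0)
                           .fit(X[y == label])
                    for label in np.unique(y)}

        def predict(models, X):
            """Assign each sample to the class with the higher estimated density."""
            labels = sorted(models)
            scores = np.column_stack([models[c].score_samples(X) for c in labels])
            return np.asarray(labels)[np.argmax(scores, axis=1)]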

  15. Setting conservation management thresholds using a novel participatory modeling approach.

    Science.gov (United States)

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future.
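
    The weighted additive aggregation step can be sketched in a few lines of Python (objective weights, alternatives and scores are invented for illustration):

        # Decision score of each management alternative under one ecological scenario
        weights = {"ecological": 0.5, "social": 0.3, "economic": 0.2}  # sum to 1
        consequences = {
            "no_action":    {"ecological": 0.2, "social": 0.9, "economic": 1.0},
            "close_access": {"ecological": 0.9, "social": 0.3, "economic": 0.6},
        }

        def decision_score(alternative):
            return sum(w * consequences[alternative][obj] for obj, w in weights.items())

        best = max(consequences, key=decision_score)
        print(best, decision_score(best))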

  16. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  17. An Alternative Approach to the Extended Drude Model

    Science.gov (United States)

    Gantzler, N. J.; Dordevic, S. V.

    2018-05-01

    The original Drude model, proposed over a hundred years ago, is still used today for the analysis of optical properties of solids. Within this model, both the plasma frequency and quasiparticle scattering rate are constant, which makes the model rather inflexible. In order to circumvent this problem, the so-called extended Drude model was proposed, which allowed for the frequency dependence of both the quasiparticle scattering rate and the effective mass. In this work we will explore an alternative approach to the extended Drude model. Here, one also assumes that the quasiparticle scattering rate is frequency dependent; however, instead of the effective mass, the plasma frequency becomes frequency-dependent. This alternative model is applied to the high Tc superconductor Bi2Sr2CaCu2O8+δ (Bi2212) with Tc = 92 K, and the results are compared and contrasted with the ones obtained from the conventional extended Drude model. The results point to several advantages of this alternative approach to the extended Drude model.
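
    A sketch of the alternative extraction, following my own algebra from the Drude form sigma(w) = [wp(w)^2/4pi] / [1/tau(w) - i*w] (CGS units; this is an illustration, not code from the paper):

        import numpy as np

        def alternative_extended_drude(omega, sigma):
            """omega: angular frequencies; sigma: measured complex conductivity.
            Returns the frequency-dependent scattering rate 1/tau(w) and the
            frequency-dependent plasma frequency wp(w), keeping the mass fixed."""
            gamma = omega * sigma.real / sigma.imag            # 1/tau(w)
            a = sigma.imag * (gamma**2 + omega**2) / omega     # wp(w)^2 / (4*pi)
            return gamma, np.sqrt(4.0 * np.pi * a)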

  18. Multiphysics modeling using COMSOL a first principles approach

    CERN Document Server

    Pryor, Roger W

    2011-01-01

    Multiphysics Modeling Using COMSOL rapidly introduces the senior level undergraduate, graduate or professional scientist or engineer to the art and science of computerized modeling for physical systems and devices. It offers a step-by-step modeling methodology through examples that are linked to the Fundamental Laws of Physics through a First Principles Analysis approach. The text explores a breadth of multiphysics models in coordinate systems that range from 1D to 3D and introduces the readers to the numerical analysis modeling techniques employed in the COMSOL Multiphysics software. After readers have built and run the examples, they will have a much firmer understanding of the concepts, skills, and benefits acquired from the use of computerized modeling techniques to solve their current technological problems and to explore new areas of application for their particular technological areas of interest.

  19. Evaluation of Workflow Management Systems - A Meta Model Approach

    Directory of Open Access Journals (Sweden)

    Michael Rosemann

    1998-11-01

    Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction on the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users in specifying their requirements for a workflow management system.

  20. Generalised additive modelling approach to the fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chun-Bo; Li, Yun; Pan, Feng; Shi, Zhong-Ping

    2011-03-01

    In this work, generalised additive models (GAMs) were used for the first time to model the fermentation of glutamate (Glu). It was found that three fermentation parameters, fermentation time (T), dissolved oxygen (DO) and oxygen uptake rate (OUR), could capture 97% of the variance in Glu production during the fermentation process through a GAM model calibrated using online data from 15 fermentation experiments. This model was applied to investigate the individual and combined effects of T, DO and OUR on the production of Glu. Conditions to optimize the fermentation process were proposed based on a simulation study with this model. Results suggested that the production of Glu can reach a high level by controlling DO and OUR at the proposed optimal levels during the fermentation process. The GAM approach therefore provides an alternative way to model and optimize the fermentation process of Glu. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
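
    A minimal sketch of such a fit, assuming the pyGAM package and stand-in arrays for T, DO, OUR and the Glu concentration:

        import numpy as np
        from pygam import LinearGAM, s

        rng = np.random.default_rng(0)                   # stand-in "online" data
        T = rng.uniform(0, 40, 200)
        DO, OUR = rng.uniform(5, 50, 200), rng.uniform(0, 60, 200)
        glu = 2.0*T + 0.5*DO + 0.3*OUR + rng.normal(0, 5, 200)

        X = np.column_stack([T, DO, OUR])
        gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, glu)  # one smooth term per parameter
        gam.summary()                                    # effective dof, pseudo R^2, ...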

  1. Validation of Slosh Modeling Approach Using STAR-CCM+

    Science.gov (United States)

    Benson, David J.; Ng, Wanyi

    2018-01-01

    Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft, or there may be an unexpected loss of science observation time due to higher slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh. STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimentally, empirically, and analytically derived results. The geometries examined are a bare right cylinder tank and a right cylinder with a single ring baffle.

  2. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback structure based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, full Markov chains are used to achieve optimal solutions. Secondly, the myopic method, together with a particle filter (PF) and the challenge match algorithm, is used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  3. Polynomial Chaos Expansion Approach to Interest Rate Models

    Directory of Open Access Journals (Sweden)

    Luca Di Persio

    2015-01-01

    Full Text Available The Polynomial Chaos Expansion (PCE) technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamics of certain engineering models and their related simulations. In the present paper, we use the PCE approach in order to analyze some equity and interest rate models. In particular, we take into consideration those models which are based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
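
    A compact illustration of the PCE idea for a Geometric Brownian Motion terminal value: project onto probabilists' Hermite polynomials by Gauss-Hermite quadrature and compare the recovered moments with plain Monte Carlo (parameter values are invented):

        import numpy as np
        from math import factorial, sqrt, pi
        from numpy.polynomial import hermite_e as He

        S0, r, sigma, T = 100.0, 0.03, 0.2, 1.0
        f = lambda xi: S0 * np.exp((r - 0.5*sigma**2)*T + sigma*np.sqrt(T)*xi)

        # PCE coefficients c_k = E[f(xi) He_k(xi)] / k! via quadrature
        x, w = He.hermegauss(40)
        c = [np.sum(w * f(x) * He.hermeval(x, [0.0]*k + [1.0]))
             / (sqrt(2*pi) * factorial(k)) for k in range(6)]

        mean_pce = c[0]
        var_pce = sum(factorial(k) * c[k]**2 for k in range(1, 6))

        xi = np.random.default_rng(0).standard_normal(200_000)   # Monte Carlo check
        print(mean_pce, f(xi).mean())    # both close to S0*exp(r*T)
        print(var_pce, f(xi).var())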

  4. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
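
    At the core of such HMM-based models is the forward recursion for the sequence likelihood; a toy version (transition and emission matrices invented, not the PHMM itself):

        import numpy as np

        A  = np.array([[0.8, 0.2], [0.3, 0.7]])            # state transitions
        B  = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])  # P(observation | state)
        pi = np.array([0.5, 0.5])                          # initial distribution

        def log_likelihood(obs):
            """log P(observation sequence) via the scaled forward recursion."""
            alpha = pi * B[:, obs[0]]
            ll = np.log(alpha.sum()); alpha /= alpha.sum()
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                ll += np.log(alpha.sum()); alpha /= alpha.sum()
            return ll

        print(log_likelihood([0, 1, 2, 2, 1]))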

  5. Common modelling approaches for training simulators for nuclear power plants

    International Nuclear Information System (INIS)

    1990-02-01

    Training simulators for nuclear power plant operating staff have gained increasing importance over the last twenty years. One of the recommendations of the 1983 IAEA Specialists' Meeting on Nuclear Power Plant Training Simulators in Helsinki was to organize a Co-ordinated Research Programme (CRP) on some aspects of training simulators. The goal statement was: ''To establish and maintain a common approach to modelling for nuclear training simulators based on defined training requirements''. Before adopting this goal statement, the participants considered many alternatives for defining the common aspects of training simulator models, such as the programming language used, the nature of the simulator computer system, the size of the simulation computers, the scope of simulation. The participants agreed that it was the training requirements that defined the need for a simulator, the scope of models and hence the type of computer complex that was required, the criteria for fidelity and verification, and was therefore the most appropriate basis for the commonality of modelling approaches. It should be noted that the Co-ordinated Research Programme was restricted, for a variety of reasons, to consider only a few aspects of training simulators. This report reflects these limitations, and covers only the topics considered within the scope of the programme. The information in this document is intended as an aid for operating organizations to identify possible modelling approaches for training simulators for nuclear power plants. 33 refs

  6. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    Science.gov (United States)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite element modeling practice.
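
    A first-order illustration of the reliability index and its sensitivities for a strength-stress limit state (the paper's method adds perturbation analysis and an Edgeworth series for arbitrary distributions; the moments below are hypothetical):

        import numpy as np
        from scipy.stats import norm

        mu_R, sd_R = 620.0, 45.0    # strength moments, MPa (hypothetical)
        mu_S, sd_S = 480.0, 60.0    # stress moments, MPa (hypothetical)

        beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)   # reliability index
        pf = norm.sf(beta)                            # normal-approx. failure probability

        # Sensitivities of beta w.r.t. the means, used to rank design variables
        d_beta = {"mu_R": 1.0 / np.hypot(sd_R, sd_S),
                  "mu_S": -1.0 / np.hypot(sd_R, sd_S)}
        print(beta, pf, d_beta)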

  7. An approach to ductile fracture resistance modelling in pipeline steels

    Energy Technology Data Exchange (ETDEWEB)

    Pussegoda, L.N.; Fredj, A. [BMT Fleet Technology Ltd., Kanata (Canada)

    2009-07-01

    Ductile fracture resistance studies of high grade steels in the pipeline industry often include analyses of the crack tip opening angle (CTOA) parameter using 3-point bend steel specimens. The CTOA is a function of specimen ligament size in high grade materials. Other resistance measurements may include steady state fracture propagation energy, critical fracture strain, and the adoption of damage mechanisms. Modelling approaches for crack propagation were discussed. Tension tests were used to calibrate damage model parameters. Results from the tests were then applied to crack propagation in a 3-point bend specimen using 1980s vintage steels. Limitations, and approaches to overcome the difficulties associated with crack propagation modelling, were discussed.

  8. High dimensions - a new approach to fermionic lattice models

    International Nuclear Information System (INIS)

    Vollhardt, D.

    1991-01-01

    The limit of high spatial dimensions d, which is well-established in the theory of classical and localized spin models, is shown to be a fruitful approach also to itinerant fermion systems, such as the Hubbard model and the periodic Anderson model. Many investigations which are prohibitively difficult in finite dimensions become tractable in d=∞. At the same time, essential features of systems in d=3 and even lower dimensions are very well described by the results obtained in d=∞. A wide range of applications of this new concept (e.g., in perturbation theory, Fermi liquid theory, variational approaches, exact results, etc.) is discussed and the state-of-the-art is reviewed. (orig.)

  9. A fuzzy approach for modelling radionuclide in lake system

    International Nuclear Information System (INIS)

    Desai, H.K.; Christian, R.A.; Banerjee, J.; Patra, A.K.

    2013-01-01

    Radioactive liquid waste is generated during the operation and maintenance of Pressurised Heavy Water Reactors (PHWRs). Generally, low level liquid waste is diluted and then discharged into the nearby water-body through the blowdown water discharge line, as per standard waste management practice. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance with discharge criteria. An attempt was made to predict the concentration of ³H released from Kakrapar Atomic Power Station at Ratania Regulator, about 2.5 km away from the discharge point, where human exposure is expected. Scarcity of data and the complex geometry of the lake prompted the use of a heuristic approach. Under these conditions, a fuzzy rule based approach was adopted to develop a model which could predict the ³H concentration at Ratania Regulator. Three hundred data points were generated for developing the fuzzy rules, in which the input parameters were the water flow from the lake and the ³H concentration at the discharge point; the output was the ³H concentration at Ratania Regulator. These data points were generated by multiple regression analysis of the original data. Using the same methodology, a further hundred data points were generated for validation of the model and compared against the predicted output generated by the fuzzy rule based approach. The root mean square error of the model came out to be 1.95, which shows good agreement of the fuzzy model with the natural ecosystem. -- Highlights: • Uncommon approach (fuzzy rule base) to modelling radionuclide dispersion in a lake. • Predicts ³H released from Kakrapar Atomic Power Station at a point of human exposure. • RMSE of the fuzzy model is 1.95, which means it imitates the natural ecosystem well
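
    A hand-rolled, Sugeno-style sketch of the fuzzy-rule idea (membership breakpoints and rule outputs are invented; the paper's actual rule base was built from regression-generated data):

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function on [a, c] peaking at b."""
            return np.maximum(np.minimum((x - a)/(b - a), (c - x)/(c - b)), 0.0)

        def predict_conc(flow, c_discharge):
            flow_low, flow_high = tri(flow, 0, 5, 20), tri(flow, 5, 20, 40)
            c_low, c_high = tri(c_discharge, 0, 10, 50), tri(c_discharge, 10, 50, 100)
            rules = [                             # IF flow ... AND conc ... THEN conc_out
                (min(flow_high, c_low), 2.0),     # strong dilution
                (min(flow_high, c_high), 10.0),
                (min(flow_low, c_low), 8.0),
                (min(flow_low, c_high), 30.0),    # weak dilution
            ]
            w = sum(m for m, _ in rules)
            return sum(m * out for m, out in rules) / w if w else float("nan")

        print(predict_conc(flow=12.0, c_discharge=35.0))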

  10. Modeling energy fluxes in heterogeneous landscapes employing a mosaic approach

    Science.gov (United States)

    Klein, Christian; Thieme, Christoph; Priesack, Eckart

    2015-04-01

    Recent studies show that uncertainties in regional and global climate and weather simulations are partly due to inadequate descriptions of the energy flux exchanges between the land surface and the atmosphere. One major shortcoming is the limitation of the grid-cell resolution, which is recommended to be at least about 3×3 km² in most models due to limitations in the model physics. To represent each individual grid cell, most models select one dominant soil type and one dominant land use type. This resolution, however, is often too coarse in regions where the spatial diversity of soil and land use types is high, e.g. in Central Europe. An elegant method to avoid the shortcoming of grid cell resolution is the so-called mosaic approach. This approach is part of the recently developed ecosystem model framework Expert-N 5.0. The aim of this study was to analyze the impact of the characteristics of two managed fields, planted with winter wheat and potato, on the near-surface soil moistures and on the near-surface energy flux exchanges of the soil-plant-atmosphere interface. The simulated energy fluxes were compared with eddy flux tower measurements between the respective fields at the research farm Scheyern, north-west of Munich, Germany. To perform these simulations, we coupled the ecosystem model Expert-N 5.0 to an analytical footprint model. The coupled model system has the ability to calculate the mixing ratio of the surface energy fluxes at a given point within one grid cell (in this case at the flux tower between the two fields). This approach accounts for the differences between the two soil types, the land use managements, and the canopy properties due to footprint size dynamics. Our preliminary simulation results show that a mosaic approach can improve the modeling and analysis of energy fluxes when the land surface is heterogeneous. In this case our applied method is a promising approach for extending weather and climate models on the regional and global scales.
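
    The essence of the mosaic/footprint coupling is a weighted mix of the per-field fluxes at the tower location; a toy version with a hypothetical footprint weight:

        def mixed_flux(flux_wheat, flux_potato, w_wheat):
            """Flux seen by the tower; w_wheat comes from the footprint model."""
            return w_wheat * flux_wheat + (1.0 - w_wheat) * flux_potato

        print(mixed_flux(flux_wheat=180.0, flux_potato=240.0, w_wheat=0.65))  # W/m^2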

  11. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  12. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
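
    A minimal sketch of variance-based global sensitivity analysis of a 'black-box' model, assuming the SALib package (the analytic test function stands in for an expensive morphogenesis simulation; parameter names and bounds are invented):

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {"num_vars": 3,
                   "names": ["adhesion", "chemotaxis", "elasticity"],
                   "bounds": [[0, 1], [0, 1], [0, 1]]}

        X = saltelli.sample(problem, 1024)        # N*(2D+2) parameter sets
        Y = (X[:, 0] + 2.0*X[:, 1]*X[:, 2]
             + 0.1*np.random.default_rng(0).normal(size=len(X)))

        Si = sobol.analyze(problem, Y)            # first-order, total, second-order
        print(Si["S1"], Si["ST"])                 # single-parameter vs. interaction impact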

  13. Towards a CPN-Based Modelling Approach for Reconciling Verification and Implementation of Protocol Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2013-01-01

    Formal modelling of protocols is often aimed at one specific purpose such as verification or automatically generating an implementation. This leads to models that are useful for one purpose, but not for others. Being able to derive models for verification and implementation from a single model...... is beneficial both in terms of reduced total modelling effort and confidence that the verification results are valid also for the implementation model. In this paper we introduce the concept of a descriptive specification model and an approach based on refining a descriptive model to target both verification...... how this model can be refined to target both verification and implementation....

  14. Unraveling the Mechanisms of Manual Therapy: Modeling an Approach.

    Science.gov (United States)

    Bialosky, Joel E; Beneciuk, Jason M; Bishop, Mark D; Coronado, Rogelio A; Penza, Charles W; Simon, Corey B; George, Steven Z

    2018-01-01

    Synopsis Manual therapy interventions are popular among individual health care providers and their patients; however, systematic reviews do not strongly support their effectiveness. Small treatment effect sizes of manual therapy interventions may result from a "one-size-fits-all" approach to treatment. Mechanistic-based treatment approaches to manual therapy offer an intriguing alternative for identifying patients likely to respond to manual therapy. However, the current lack of knowledge of the mechanisms through which manual therapy interventions inhibit pain limits such an approach. The nature of manual therapy interventions further confounds such an approach, as the related mechanisms are likely a complex interaction of factors related to the patient, the provider, and the environment in which the intervention occurs. Therefore, a model to guide both study design and the interpretation of findings is necessary. We have previously proposed a model suggesting that the mechanical force from a manual therapy intervention results in systemic neurophysiological responses leading to pain inhibition. In this clinical commentary, we provide a narrative appraisal of the model and recommendations to advance the study of manual therapy mechanisms. J Orthop Sports Phys Ther 2018;48(1):8-18. doi:10.2519/jospt.2018.7476.

  15. Implementation of the Receptive Aesthetic Approach Model in Learning to Read Literature.

    Directory of Open Access Journals (Sweden)

    Titin Nurhayatin

    2017-03-01

    Full Text Available Research on the implementation of the receptive aesthetic approach model in learning to read literature is motivated by the low quality of the results and process of Indonesian language learning, especially the study of literature. Students, as prospective teachers of Indonesian, are expected to develop language skills, literary competence, and the ability to teach both in a balanced manner, in accordance with curriculum demands. This study examines the effectiveness, quality, acceptability, and sustainability of the receptive aesthetic approach in improving students' literary skills. Based on these problems, the study is expected to produce a learning model that contributes substantially to improving the quality of both the results and the process of learning literature. The research was conducted on students of the Indonesian Language and Literature Education Program of FKIP, Pasundan University. The research method used was an experiment with a randomized pretest-posttest control group design. In the experimental class the average preliminary test score was 55.86 and the average final test score was 76.75; in the control class the averages were 55.07 and 68.76 respectively. These data show a greater increase in scores in the experimental class, which used the receptive aesthetic approach, than in the control class, which used a conventional approach. The results show that the receptive aesthetic approach is more effective than the conventional approach for learning to read literature. Based on observations of acceptability and sustainability, the receptive aesthetic approach is expected to be an alternative and a solution for overcoming the problems of literary learning and improving the quality of Indonesian learning outcomes and processes.

  16. Color Independent Components Based SIFT Descriptors for Object/Scene Classification

    Science.gov (United States)

    Ai, Dan-Ni; Han, Xian-Hua; Ruan, Xiang; Chen, Yen-Wei

    In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.
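
    A rough sketch of the pipeline, assuming scikit-learn and an OpenCV build that exposes SIFT (cv2.SIFT_create; the image path is hypothetical):

        import cv2
        import numpy as np
        from sklearn.decomposition import FastICA

        img = cv2.imread("example.jpg")                        # hypothetical image
        pixels = img.reshape(-1, 3).astype(np.float64)
        comps = (FastICA(n_components=3, random_state=0)
                 .fit_transform(pixels).reshape(img.shape))    # 3 color ICs

        sift = cv2.SIFT_create()
        descriptors = []
        for k in range(3):                                     # SIFT per component
            chan = cv2.normalize(comps[..., k], None, 0, 255,
                                 cv2.NORM_MINMAX).astype(np.uint8)
            kps, des = sift.detectAndCompute(chan, None)
            descriptors.append(des)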

  17. Polynomial fuzzy model-based approach for underactuated surface vessels

    DEFF Research Database (Denmark)

    Khooban, Mohammad Hassan; Vafamand, Navid; Dragicevic, Tomislav

    2018-01-01

    The main goal of this study is to introduce a new polynomial fuzzy model-based structure for a class of marine systems with non-linear and polynomial dynamics. The suggested technique relies on a polynomial Takagi–Sugeno (T–S) fuzzy modelling, a polynomial dynamic parallel distributed compensation...... surface vessel (USV). Additionally, in order to overcome the USV control challenges, including the USV un-modelled dynamics, complex nonlinear dynamics, external disturbances and parameter uncertainties, the polynomial fuzzy model representation is adopted. Moreover, the USV-based control structure...... and a sum-of-squares (SOS) decomposition. The new proposed approach is a generalisation of the standard T–S fuzzy models and linear matrix inequality which indicated its effectiveness in decreasing the tracking time and increasing the efficiency of the robust tracking control problem for an underactuated...

  18. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, would cause misleading statistical inferences and analysis. Therefore, our goal is to examine the relationship of the outcome variable and the unobserved exposure variable given the observed mismeasured surrogate by applying the Bayesian formulation to the EIV model. We shall extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model which is the Poisson regression model. We shall then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.

  19. A hidden Markov model approach to neuron firing patterns.

    Science.gov (United States)

    Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G

    1996-11-01

    Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing.
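
    A short sketch of fitting such a model, assuming the hmmlearn package and a hypothetical file of interspike intervals:

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        isi = np.loadtxt("interspike_intervals.txt")   # hypothetical recording
        X = np.log(isi).reshape(-1, 1)                 # one observation per interval

        model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200)
        model.fit(X)                                   # maximum-likelihood fit
        states = model.predict(X)                      # most likely state sequence
        print(model.transmat_, model.means_.ravel())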

  20. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    Science.gov (United States)

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
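
    The code-matching step can be illustrated in a few lines (component labels and the two-entry code table are invented):

        CODE_TABLE = {                       # sign word -> 5-component code
            "thanks":  ("flat", "x", "palm-down", "none", "arc"),
            "morning": ("fist", "y", "palm-in", "cw", "line"),
        }

        def match(components):
            """components: (hand shape, axis, orientation, rotation, trajectory)."""
            score = lambda code: sum(a == b for a, b in zip(code, components))
            return max(CODE_TABLE, key=lambda word: score(CODE_TABLE[word]))

        print(match(("flat", "x", "palm-down", "cw", "arc")))   # -> "thanks"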

  1. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework

    Directory of Open Access Journals (Sweden)

    Shengjing Wei

    2016-04-01

    Full Text Available Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  2. Practical modeling approaches for geological storage of carbon dioxide.

    Science.gov (United States)

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

    The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to model geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.
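
    In the same simplifying spirit (though not the authors' analytical solutions), a back-of-envelope plume size follows from a sharp-interface volume balance:

        import numpy as np

        def plume_radius(V_injected, porosity, thickness, sat_co2):
            """Cylinder approximation: V = pi * r^2 * H * phi * S_co2."""
            return np.sqrt(V_injected / (np.pi * porosity * thickness * sat_co2))

        # e.g. 1.5e7 m^3 of reservoir-condition CO2, phi=0.2, H=50 m, S=0.6
        print(plume_radius(1.5e7, 0.2, 50.0, 0.6) / 1000.0, "km")   # ~0.9 km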

  3. A fuzzy approach for modelling radionuclide in lake system.

    Science.gov (United States)

    Desai, H K; Christian, R A; Banerjee, J; Patra, A K

    2013-10-01

    Radioactive liquid waste is generated during the operation and maintenance of Pressurised Heavy Water Reactors (PHWRs). Generally, low level liquid waste is diluted and then discharged into the nearby water-body through the blowdown water discharge line, as per standard waste management practice. The effluents from nuclear installations are treated adequately and then released in a controlled manner under strict compliance with discharge criteria. An attempt was made to predict the concentration of (3)H released from Kakrapar Atomic Power Station at Ratania Regulator, about 2.5 km away from the discharge point, where human exposure is expected. Scarcity of data and the complex geometry of the lake prompted the use of a heuristic approach. Under these conditions, a fuzzy rule based approach was adopted to develop a model which could predict the (3)H concentration at Ratania Regulator. Three hundred data points were generated for developing the fuzzy rules, in which the input parameters were the water flow from the lake and the (3)H concentration at the discharge point; the output was the (3)H concentration at Ratania Regulator. These data points were generated by multiple regression analysis of the original data. Using the same methodology, a further hundred data points were generated for validation of the model and compared against the predicted output generated by the fuzzy rule based approach. The root mean square error of the model came out to be 1.95, which shows good agreement of the fuzzy model with the natural ecosystem. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Full Text Available Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  5. A modal approach to modeling spatially distributed vibration energy dissipation.

    Energy Technology Data Exchange (ETDEWEB)

    Segalman, Daniel Joseph

    2010-08-01

    The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.
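
    A sketch of the postulated structure: linear modes are retained and each modal coordinate evolves with its own nonlinear dissipation (power-law damping with invented coefficients standing in for the joint constitutive form):

        import numpy as np
        from scipy.integrate import solve_ivp

        omega = np.array([2*np.pi*40.0, 2*np.pi*95.0])   # modal frequencies, rad/s
        c, p = np.array([0.8, 0.5]), 1.3                 # dissipation coeffs, exponent

        def rhs(t, z):
            q, qd = z[:2], z[2:]
            qdd = -omega**2 * q - c * np.abs(qd)**(p - 1) * qd  # amplitude-dependent damping
            return np.concatenate([qd, qdd])

        sol = solve_ivp(rhs, (0.0, 1.0), [1e-3, 5e-4, 0.0, 0.0], max_step=1e-4)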

  6. A Bayesian Approach for Structural Learning with Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cen Li

    2002-01-01

    Full Text Available Hidden Markov Models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, when one considers other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism is available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.

  7. A Novel Approach to Implement Takagi-Sugeno Fuzzy Models.

    Science.gov (United States)

    Chang, Chia-Wen; Tao, Chin-Wang

    2017-09-01

    This paper proposes new algorithms based on the fuzzy c-regression model algorithm for Takagi-Sugeno (T-S) fuzzy modeling of complex nonlinear systems. The fuzzy c-regression state model (FCRSM) algorithm yields a T-S fuzzy model in which a functional antecedent and a state-space-model-type consequent are considered with the available input-output data. The antecedent and consequent forms of the proposed FCRSM offer two main advantages: one is that the FCRSM has a low computation load, because only one input variable is considered in the antecedent part; the other is that the unknown system can be modeled not only in polynomial form but also in state-space form. Moreover, the FCRSM can be extended to the FCRSM-ND and FCRSM-Free algorithms. The FCRSM-ND algorithm is presented to find the T-S fuzzy state-space model of a nonlinear system when the input-output data cannot be precollected and an assumed effective controller is available. In practical applications, the mathematical model of the controller may be hard to obtain. In this case, an online tuning algorithm, FCRSM-Free, is designed such that the parameters of a T-S fuzzy controller and the T-S fuzzy state model of an unknown system can be tuned online simultaneously. Four numerical simulations are given to demonstrate the effectiveness of the proposed approach.
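
    A compact sketch of the fuzzy c-regression iteration underlying such algorithms (alternating membership updates with membership-weighted least squares; affine consequents only, details simplified):

        import numpy as np

        def fcrm(x, y, c=2, m=2.0, iters=50, seed=0):
            """Fit c local affine models with fuzzy memberships U (n x c)."""
            Xb = np.column_stack([x, np.ones(len(x))])
            U = np.random.default_rng(seed).dirichlet(np.ones(c), size=len(x))
            for _ in range(iters):
                thetas = [np.linalg.solve((Xb * (U[:, k]**m)[:, None]).T @ Xb,
                                          (Xb * (U[:, k]**m)[:, None]).T @ y)
                          for k in range(c)]                 # weighted least squares
                E = np.column_stack([(y - Xb @ th)**2 for th in thetas]) + 1e-12
                U = 1.0 / ((E[:, :, None] / E[:, None, :])**(1/(m-1))).sum(axis=2)
            return thetas, U

        rng = np.random.default_rng(1)
        x = rng.uniform(-1, 1, 200)
        y = np.where(x < 0, 2*x + 1, -3*x + 0.5) + rng.normal(0, 0.05, 200)
        print(fcrm(x, y)[0])   # should roughly recover the two local lines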

  8. Approach to Organizational Structure Modelling in Construction Companies

    Directory of Open Access Journals (Sweden)

    Ilin Igor V.

    2016-01-01

    Full Text Available An effective management system is one of the key factors of business success nowadays. Construction companies usually have a portfolio of independent projects running at the same time, so it is reasonable to take the project orientation of this kind of business into account when designing a construction company's management system, whose main components are the business process system and the organizational structure. The paper describes a management structure design approach based on the project-oriented nature of construction projects, and proposes a model of the organizational structure for a construction company. Application of the proposed approach will make it possible to assign responsibilities within the organizational structure of construction projects effectively, and thus to shorten the time needed for project allocation and to ensure smoother project running. A practical case of using the approach is also provided in the paper.

  9. A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents a decision support system for the assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertainties...

  10. Thermal Analysis of the Driving Component Based on the Thermal Network Method in a Lunar Drilling System and Experimental Verification

    Directory of Open Access Journals (Sweden)

    Dewei Tang

    2017-03-01

    Full Text Available The main task of the third Chinese lunar exploration project is to obtain soil samples that are greater than two meters in length and to acquire bedding information from the surface of the moon. The driving component is the power output unit of the drilling system in the lander; it provides drilling power for core drilling tools. High temperatures can cause the sensors, permanent magnet, gears, and bearings to suffer irreversible damage. In this paper, a thermal analysis model for this driving component, based on the thermal network method (TNM), was established and the model was solved using the quasi-Newton method. A vacuum test platform was built and an experimental verification method (EVM) was applied to measure the surface temperature of the driving component. Then, the TNM was optimized, based on the principle of heat distribution. Through comparative analyses, the reasonableness of the TNM is validated. Finally, the static temperature field of the driving component was predicted and the “safe working times” of every mode are given.
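
    The thermal network method reduces to nodal heat-balance equations; a linear steady-state toy version (the paper's transient model is solved with a quasi-Newton method; conductances and loads below are invented):

        import numpy as np

        G = np.array([[0.0, 0.5, 0.2],    # pairwise conductances, W/K (symmetric)
                      [0.5, 0.0, 0.8],
                      [0.2, 0.8, 0.0]])
        Q = np.array([3.0, 0.0])          # heat generation at free nodes 0, 1 (W)
        T_amb = 20.0                      # node 2 held at ambient temperature

        # For each free node i: sum_j G_ij * (T_j - T_i) + Q_i = 0
        free = [0, 1]
        A = np.diag(G[free].sum(axis=1)) - G[np.ix_(free, free)]
        b = Q + G[free, 2] * T_amb
        print(np.linalg.solve(A, b))      # steady temperatures of nodes 0 and 1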

  11. The place of quantitative energy models in a prospective approach

    International Nuclear Information System (INIS)

    Taverdet-Popiolek, N.

    2009-01-01

    Futurology above all depends on having the right mind set. Gaston Berger summarized the prospective approach in five main thrusts: prepare for the distant future, be open-minded (take a systems and multidisciplinary approach), carry out in-depth analyses (draw out the actors which are really determinant for the future, as well as the established trends), take risks (imagine risky but flexible projects) and finally think about humanity, futurology being a technique at the service of man to help him build a desirable future. On the other hand, forecasting is based on quantified models so as to deduce 'conclusions' about the future. In the field of energy, models are used to draw up scenarios which allow, for instance, measuring the medium or long term effects of energy policies on greenhouse gas emissions or global welfare. Scenarios are shaped by the model's inputs (parameters, sets of assumptions) and outputs. Resorting to a model or projecting by scenario is useful in a prospective approach as it ensures coherence for most of the variables that have been identified through systems analysis and that the mind on its own has difficulty grasping. Interpretation of each scenario must be carried out in the light of the underlying framework of assumptions (the backdrop), developed during the prospective stage. When the horizon is far away (very long-term), the worlds imagined by the futurologist contain breaks (technological, behavioural and organizational) which are hard to integrate into the models. It is here that the main limit to the use of models in futurology lies. (author)

  12. Application of declarative modeling approaches for external events

    International Nuclear Information System (INIS)

    Anoba, R.C.

    2005-01-01

    Probabilistic Safety Assessments (PSAs) are increasingly being used as a tool for supporting the acceptability of design, procurement, construction, operation, and maintenance activities at nuclear power plants. Since the issuance of Generic Letter 88-20 and the subsequent IPE/IPEEE assessments, the NRC has issued several Regulatory Guides, such as RG 1.174, to describe the use of PSA in risk-informed regulation activities. Most PSAs have the capability to address internal events, including internal floods. As more demands are placed on using the PSA to support risk-informed applications, there has been a growing need to integrate other external events (seismic, fire, etc.) into the logic models. Most external events involve spatial dependencies and usually impact the logic models at the component level. Therefore, manual insertion of external event impacts into a complex integrated fault tree model may be too cumbersome for routine uses of the PSA. Within the past year, a declarative modeling approach has been developed to automate the injection of external events into the PSA. The intent of this paper is to introduce the concept of declarative modeling in the context of external event applications. A declarative modeling approach involves the definition of rules for the injection of external event impacts into the fault tree logic. A software tool such as EPRI's XInit program can be used to interpret the pre-defined rules and automatically inject external event elements into the PSA. The injection process can easily be repeated, as required, to address plant changes, sensitivity issues, changes in boundary conditions, etc. External event elements may include fire initiating events, seismic initiating events, seismic fragilities, fire-induced hot short events, special human failure events, etc. This approach has been applied at a number of US nuclear power plants, as well as a nuclear power plant in Romania. (authors)
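
    A toy illustration of the declarative idea: a rule states which basic events a hazard impacts, and matching events receive an injected external-event input (fault tree encoded as a plain dict; names are invented and not XInit syntax):

        fault_tree = {
            "PUMP_A_FAILS": {"type": "OR", "inputs": ["PUMP_A_MECH", "PUMP_A_POWER"]},
            "PUMP_B_FAILS": {"type": "OR", "inputs": ["PUMP_B_MECH", "PUMP_B_POWER"]},
        }
        # Rule: components of pump A sit in fire zone FZ-12
        rules = [{"hazard": "FIRE_FZ12",
                  "applies_to": lambda ev: ev.startswith("PUMP_A")}]

        for rule in rules:
            for gate in fault_tree.values():
                hits = [ev for ev in gate["inputs"] if rule["applies_to"](ev)]
                gate["inputs"].extend(ev + "@" + rule["hazard"] for ev in hits)

        print(fault_tree["PUMP_A_FAILS"]["inputs"])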

  13. Understanding complex urban systems multidisciplinary approaches to modeling

    CERN Document Server

    Gurr, Jens; Schmidt, J

    2014-01-01

    Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, or in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volum...

  14. A Variational Approach to the Modeling of MIMO Systems

    Directory of Open Access Journals (Sweden)

    Jraifi A

    2007-01-01

    Full Text Available Motivated by the study of the optimization of the quality of service for multiple input multiple output (MIMO) systems in 3G (third generation), we develop a method for modeling the MIMO channel. This method, which uses a statistical approach, is based on a variational form of the usual channel equation, reduced to a scalar variable. The minimum distance between received vectors is used as the random variable to model the MIMO channel. This variable is of crucial importance for the performance of the transmission system, as it captures the degree of interference between neighbouring vectors. We then use this approach to compute numerically the total probability of error with respect to the signal-to-noise ratio (SNR) and to predict the number of antennas. By fixing the SNR to a specific value, we extract information on the optimal number of MIMO antennas.

  15. On quantum approach to modeling of plasmon photovoltaic effect

    DEFF Research Database (Denmark)

    Kluczyk, Katarzyna; David, Christin; Jacak, Witold Aleksander

    2017-01-01

    Surface plasmons in metallic nanostructures including metallically nanomodified solar cells are conventionally studied and modeled by application of the Mie approach to plasmons or by the finite element solution of differential Maxwell equations with imposed boundary and material constraints (e...... to the semiconductor solar cell mediated by surface plasmons in metallic nanoparticles deposited on the top of the battery. In addition, short-ranged electron-electron interaction in metals is discussed in the framework of the semiclassical hydrodynamic model. The significance of the related quantum corrections......-aided photovoltaic phenomena. Quantum corrections considerably improve both the Mie and COMSOL approaches in this case. We present the semiclassical random phase approximation description of plasmons in metallic nanoparticles and apply the quantum Fermi golden rule scheme to assess the sunlight energy transfer

  16. Innovation Networks New Approaches in Modelling and Analyzing

    CERN Document Server

    Pyka, Andreas

    2009-01-01

    The science of graphs and networks has become by now a well-established tool for modelling and analyzing a variety of systems with a large number of interacting components. Starting from the physical sciences, applications have spread rapidly to the natural and social sciences, as well as to economics, and are now further extended, in this volume, to the concept of innovations, viewed broadly. In an abstract, systems-theoretical approach, innovation can be understood as a critical event which destabilizes the current state of the system, and results in a new process of self-organization leading to a new stable state. The contributions to this anthology address different aspects of the relationship between innovation and networks. The various chapters incorporate approaches in evolutionary economics, agent-based modeling, social network analysis and econophysics and explore the epistemic tension between insights into economics and society-related processes, and the insights into new forms of complex dynamics.

  17. Hypercompetitive Environments: An Agent-based model approach

    Science.gov (United States)

    Dias, Manuel; Araújo, Tanya

    Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both from individual firms and management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows for modeling business strategies from a bottom-up perspective — understood as resulting from repeated and local interaction of economic agents — without disregarding the consequences of the business strategies themselves for the individual behavior of enterprises and the emergence of interaction patterns between firms and management environments. Agent-based models are the leading approach in this attempt.

  18. Modeling fabrication of nuclear components: An integrative approach

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.

    1996-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
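
    The first stage described above can be illustrated with a minimal sketch of a quadratic assignment formulation and a toy evolutionary-style search; the flow and distance matrices and the search loop are illustrative assumptions, not the dissertation's actual data or heuristic.

```python
import random

# Sketch of the facility-layout stage as a quadratic assignment problem (QAP):
# assign workstations to locations so that (material flow) x (distance-weighted
# exposure) is minimized. Matrices are illustrative, not plant data.

flow = [[0, 3, 1],   # flow[i][j]: material movements between stations i and j
        [3, 0, 2],
        [1, 2, 0]]
dist = [[0, 1, 2],   # dist[a][b]: distance (exposure weight) between locations a, b
        [1, 0, 1],
        [2, 1, 0]]

def cost(perm):
    # perm[i] = location assigned to workstation i
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

# Toy evolutionary heuristic: mutate the best-known assignment by swapping
# two stations, keeping the mutant whenever it improves the objective.
best = list(range(3))
for _ in range(1000):
    cand = best[:]
    i, j = random.sample(range(3), 2)
    cand[i], cand[j] = cand[j], cand[i]
    if cost(cand) < cost(best):
        best = cand
print(best, cost(best))
```

    In the two-stage approach the best layouts found this way would then be handed to the discrete-event simulation for assessment of operational performance.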

  19. Injury prevention risk communication: A mental models approach

    DEFF Research Database (Denmark)

    Austin, Laurel Cecelia; Fischhoff, Baruch

    2012-01-01

    fail to see risks, do not make use of available protective interventions or misjudge the effectiveness of protective measures. If these misunderstandings can be reduced through context-appropriate risk communications, then their improved mental models may help people to engage more effectively...... and create an expert model of the risk situation, interviewing lay people to elicit their comparable mental models, and developing and evaluating communication interventions designed to close the gaps between lay people and experts. This paper reviews the theory and method behind this research stream...... interventions on the most critical opportunities to reduce risks. That research often seeks to identify the ‘mental models’ that underlie individuals' interpretations of their circumstances and the outcomes of possible actions. In the context of injury prevention, a mental models approach would ask why people...

  20. Variational approach to thermal masses in compactified models

    Energy Technology Data Exchange (ETDEWEB)

    Dominici, Daniele [Dipartimento di Fisica e Astronomia Università di Firenze and INFN - Sezione di Firenze,Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Roditi, Itzhak [Centro Brasileiro de Pesquisas Físicas - CBPF/MCT,Rua Dr. Xavier Sigaud 150, 22290-180, Rio de Janeiro, RJ (Brazil)

    2015-08-20

    We investigate by means of a variational approach the effective potential of a 5D U(1) scalar model at finite temperature, compactified on S^1 and S^1/Z_2, as well as the corresponding 4D model obtained through a trivial dimensional reduction. We are particularly interested in the behavior of the thermal masses of the scalar field with respect to the Wilson line phase, and the results obtained are compared with those coming from a one-loop effective potential calculation. We also explore the nature of the phase transition.

  1. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
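
    The workflow summarized above (surrogate construction followed by a Bayesian update) can be sketched as follows. The one-parameter toy model, the polynomial surrogate and the Metropolis sampler are illustrative stand-ins for the ocean model, the spectral projection or regression surrogates, and the MCMC machinery mentioned in the abstract.

```python
import numpy as np

# Sketch of surrogate-based parameter inference: fit a cheap polynomial
# surrogate to a few expensive model runs, then run Metropolis MCMC against
# the surrogate instead of the full model. The "ocean model" is a toy stand-in.

rng = np.random.default_rng(0)

def expensive_model(drag):            # stand-in for an expensive ocean-model run
    return np.sin(2.0 * drag) + drag  # scalar response, e.g. a sea-surface metric

# 1. Sample the input space and fit a regression surrogate (cubic polynomial).
train_x = np.linspace(0.0, 2.0, 12)
train_y = np.array([expensive_model(x) for x in train_x])
coeffs = np.polyfit(train_x, train_y, deg=3)
surrogate = lambda x: np.polyval(coeffs, x)

# 2. Bayesian update: Metropolis sampling of p(drag | observation).
obs, sigma = expensive_model(1.3) + 0.05, 0.1   # synthetic noisy observation
def log_post(x):
    if not 0.0 <= x <= 2.0:                     # uniform prior on [0, 2]
        return -np.inf
    return -0.5 * ((obs - surrogate(x)) / sigma) ** 2

chain, x = [], 1.0
for _ in range(5000):
    prop = x + 0.1 * rng.standard_normal()      # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop
    chain.append(x)
print("posterior mean drag:", np.mean(chain[1000:]))
```

    The adjoint-based alternative mentioned in the abstract would replace the sampling loop with a gradient-driven optimization of the same posterior.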

  2. Risk Modeling Approaches in Terms of Volatility Banking Transactions

    Directory of Open Access Journals (Sweden)

    Angelica Cucşa (Stratulat)

    2016-01-01

    Full Text Available The inseparability of risk and banking activity has been demonstrated ever since banking systems emerged; the importance of the topic is present in current life and, equally, in the future development of the banking sector. Banking sector development takes place in the context of constraints imposed by the nature and number of existing risks and of those that may arise, and serves to limit the risk of banking activity. We intend to develop approaches to analysing risk through mathematical models, and also to develop a model for 10 actively traded picks on the Romanian capital market that will test investor reaction under controlled and uncontrolled risk conditions aggregated with harmonised factors.

  3. The experimental and shell model approach to 100Sn

    International Nuclear Information System (INIS)

    Grawe, H.; Maier, K.H.; Fitzgerald, J.B.; Heese, J.; Spohr, K.; Schubart, R.; Gorska, M.; Rejmund, M.

    1995-01-01

    The present status of the experimental approach to 100Sn and its shell-model structure is given. New developments in experimental techniques, such as low-background isomer spectroscopy and charged-particle detection in 4π, are surveyed. Based on recent experimental data, shell-model calculations are used to predict the structure of the single- and two-nucleon neighbours of 100Sn. The results are compared to the systematics of Coulomb energies and spin-orbit splitting and discussed with respect to future experiments. (author). 51 refs, 11 figs, 1 tab

  4. THE SIGNAL APPROACH TO MODELLING THE BALANCE OF PAYMENT CRISIS

    Directory of Open Access Journals (Sweden)

    O. Chernyak

    2016-12-01

    Full Text Available The paper presents a synthesis of theoretical models of balance-of-payments crises and investigates the most effective ways to model such crises in Ukraine. For the mathematical formalization of balance-of-payments crises, a comparative analysis of the effectiveness of different calculation methods for the Exchange Market Pressure Index was performed. A set of indicators that signal a growing likelihood of a balance-of-payments crisis was defined using the signal approach. With the help of a minimization function, threshold values of the indicators were selected, the crossing of which signals an increased probability of a balance-of-payments crisis.
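
    A minimal sketch of the signal approach follows: a precision-weighted Exchange Market Pressure index flags crisis periods, and an indicator threshold is chosen by minimizing the noise-to-signal ratio. All series, weights and the 2-sigma crisis definition are synthetic illustrations, not the paper's data or exact index formula.

```python
import numpy as np

# Sketch of the signal approach: build an Exchange Market Pressure (EMP) index
# from precision-weighted components, then flag crisis signals when an
# indicator crosses a threshold chosen to minimize the noise-to-signal ratio.

rng = np.random.default_rng(1)
T = 120                                  # months of synthetic data
d_fx  = rng.normal(0.0, 1.0, T)          # % change in the exchange rate
d_res = rng.normal(0.0, 2.0, T)          # % change in FX reserves
d_ir  = rng.normal(0.0, 0.5, T)          # change in the interest rate

# Precision weighting: each component scaled by the inverse of its volatility.
emp = d_fx / d_fx.std() - d_res / d_res.std() + d_ir / d_ir.std()
crisis = emp > emp.mean() + 2.0 * emp.std()          # crisis months

indicator = rng.normal(0.0, 1.0, T) + 1.5 * crisis   # a toy warning indicator

def noise_to_signal(threshold):
    signal = indicator > threshold
    good = (signal & crisis).sum() / max(crisis.sum(), 1)      # hit rate
    bad = (signal & ~crisis).sum() / max((~crisis).sum(), 1)   # false-alarm rate
    return np.inf if good == 0 else bad / good

grid = np.linspace(indicator.min(), indicator.max(), 200)
best = min(grid, key=noise_to_signal)
print("optimal threshold:", round(float(best), 2))
```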

  5. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-01

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.

  6. An interdisciplinary approach for earthquake modelling and forecasting

    Science.gov (United States)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious disasters, capable of causing heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, the precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
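
    The conditional intensity described above can be sketched directly. The exponential kernels and all parameter values below are illustrative assumptions; they stand in for whatever self-exciting and external-excitation terms a calibrated model would use.

```python
import numpy as np

# Sketch of the combined self- and mutually exciting conditional intensity:
#   lambda(t) = mu + sum over past earthquakes  of g(t - t_i)
#                  + sum over past non-seismic anomalies of h(t - s_j)
# Exponential kernels and all parameter values are illustrative.

mu = 0.1                                    # background rate (events/day)
quakes    = np.array([2.0, 15.0, 16.5])     # past earthquake times (days)
anomalies = np.array([12.0, 14.0])          # past non-seismic precursor times

def g(dt, alpha=0.5, beta=1.2):             # self-exciting kernel
    return alpha * beta * np.exp(-beta * dt)

def h(dt, gamma=0.3, delta=0.4):            # external (mutually exciting) kernel
    return gamma * delta * np.exp(-delta * dt)

def intensity(t):
    past_q = quakes[quakes < t]             # only events before time t contribute
    past_a = anomalies[anomalies < t]
    return mu + g(t - past_q).sum() + h(t - past_a).sum()

for t in (10.0, 15.5, 20.0):
    print(f"lambda({t}) = {intensity(t):.3f}")
```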

  7. Modeling workforce demand in North Dakota: a System Dynamics approach

    OpenAIRE

    Muminova, Adiba

    2015-01-01

    This study investigates the dynamics behind the workforce demand and attempts to predict the potential effects of future changes in oil prices on workforce demand in North Dakota. The study attempts to join System Dynamics and Input-Output models in order to overcome shortcomings in both of the approaches and gain a more complete understanding of the issue of workforce demand. A system dynamics simulation of workforce demand within different economic sector...

  8. CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach

    International Nuclear Information System (INIS)

    Mimouni, S.; Mechitoua, N.; Foissac, A.; Hassanaly, M.; Ouraou, M.

    2011-01-01

    The present work is focused on the condensation heat transfer that plays a dominant role in many accident scenarios postulated to occur in the containment of nuclear reactors. The study compares a general multiphase approach implemented in NEPTUNE_CFD with a homogeneous model, of widespread use for engineering studies, implemented in Code_Saturne. The model implemented in NEPTUNE_CFD assumes that liquid droplets form along the wall within nucleation sites. Vapor condensation on droplets makes them grow. Once the droplet diameter reaches a critical value, gravitational forces compensate the surface tension force and the droplets slide over the wall and form a liquid film. This approach allows taking into account simultaneously the mechanical drift between the droplets and the gas, the heat and mass transfer on droplets in the core of the flow, and the condensation/evaporation phenomena on the walls. As concerns the homogeneous approach, the motion of the liquid film due to gravitational forces is neglected, as is the volume occupied by the liquid. Both condensation models and compressible procedures are validated and compared to experimental data provided by the TOSQAN ISP47 experiment (IRSN Saclay). Computational results compare favorably with experimental data, particularly for the helium and steam volume fractions.

  9. A multi-model ensemble approach to seabed mapping

    Science.gov (United States)

    Diesing, Markus; Stephens, David

    2015-06-01

    Seabed habitat mapping based on swath acoustic data and ground-truth samples is an emergent and active marine science discipline. Significant progress could be achieved by transferring techniques and approaches that have been successfully developed and employed in such fields as terrestrial land cover mapping. One such promising approach is the multiple classifier system, which aims at improving classification performance by combining the outputs of several classifiers. Here we present results of a multi-model ensemble applied to multibeam acoustic data covering more than 5000 km² of seabed in the North Sea, with the aim of deriving accurate spatial predictions of seabed substrate. A suite of six machine learning classifiers (k-Nearest Neighbour, Support Vector Machine, Classification Tree, Random Forest, Neural Network and Naïve Bayes) was trained with ground-truth sample data classified into seabed substrate classes, and their prediction accuracy was assessed with an independent set of samples. The three and five best performing models were combined into classifier ensembles. Both ensembles led to increased prediction accuracy as compared to the best performing single classifier. The improvements were, however, not statistically significant at the 5% level. Although the three-model ensemble did not perform significantly better than its individual component models, the five-model ensemble did perform significantly better than three of its five component models. A classifier ensemble might therefore be an effective strategy to improve classification performance. Another advantage is that the agreement in predicted substrate class between the individual models of the ensemble can be used as a measure of confidence. We propose a simple and spatially explicit measure of confidence that is based on model agreement and prediction accuracy.
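
    A minimal sketch of the ensemble idea, assuming per-pixel class predictions from several already-trained classifiers: the majority vote gives the ensemble map, and the fraction of agreeing models doubles as the confidence measure proposed in the abstract.

```python
import numpy as np

# Sketch of combining per-pixel substrate predictions from several classifiers
# by majority vote, with the level of agreement reused as a spatial confidence
# measure. Predictions here are random stand-ins for trained classifiers.

rng = np.random.default_rng(2)
n_models, n_pixels, n_classes = 5, 10, 3          # e.g. mud / sand / gravel
preds = rng.integers(0, n_classes, size=(n_models, n_pixels))

def ensemble(preds):
    # votes[c, p]: number of models predicting class c at pixel p
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    majority = votes.argmax(axis=0)                 # winning class per pixel
    agreement = votes.max(axis=0) / preds.shape[0]  # fraction of agreeing models
    return majority, agreement

majority, agreement = ensemble(preds)
print("class:     ", majority)
print("confidence:", agreement)
```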

  10. Modeling AEC—New Approaches to Study Rare Genetic Disorders

    Science.gov (United States)

    Koch, Peter J.; Dinella, Jason; Fete, Mary; Siegfried, Elaine C.; Koster, Maranke I.

    2015-01-01

    Ankyloblepharon-ectodermal defects-cleft lip/palate (AEC) syndrome is a rare monogenetic disorder that is characterized by severe abnormalities in ectoderm-derived tissues, such as skin and its appendages. A major cause of morbidity among affected infants is severe and chronic skin erosions. Currently, supportive care is the only available treatment option for AEC patients. Mutations in TP63, a gene that encodes key regulators of epidermal development, are the genetic cause of AEC. However, it is currently not clear how mutations in TP63 lead to the various defects seen in the patients’ skin. In this review, we will discuss current knowledge of the AEC disease mechanism obtained by studying patient tissue and genetically engineered mouse models designed to mimic aspects of the disorder. We will then focus on new approaches to model AEC, including the use of patient cells and stem cell technology to replicate the disease in a human tissue culture model. The latter approach will advance our understanding of the disease and will allow for the development of new in vitro systems to identify drugs for the treatment of skin erosions in AEC patients. Further, the use of stem cell technology, in particular induced pluripotent stem cells (iPSC), will enable researchers to develop new therapeutic approaches to treat the disease using the patient’s own cells (autologous keratinocyte transplantation) after correction of the disease-causing mutations. PMID:24665072

  11. Policy harmonized approach for the EU agricultural sector modelling

    Directory of Open Access Journals (Sweden)

    G. SALPUTRA

    2008-12-01

    Full Text Available The policy harmonized (PH) approach allows for the quantitative assessment of the impact of various elements of EU CAP direct support schemes, where the production effects of direct payments are accounted for through reaction prices formed by producer prices and policy price add-ons. Using the AGMEMOD model, the impacts of two possible EU agricultural policy scenarios upon beef production have been analysed – full decoupling with a switch from the historical to the regional Single Payment scheme, or alternatively with re-distribution of country direct payment envelopes via the introduction of an EU-wide flat area payment. The PH approach, by systematizing and harmonizing the management and use of policy data, ensures that projected differential policy impacts arising from changes in common EU policies reflect the likely actual differential impact, as opposed to differences in how “common” policies are implemented within analytical models. In the second section of the paper the AGMEMOD model’s structure is explained. The policy harmonized evaluation method is presented in the third section. Results from an application of the PH approach are presented and discussed in the paper’s penultimate section, while section 5 concludes.

  12. Data and Dynamics Driven Approaches for Modelling and Forecasting the Red Sea Chlorophyll

    KAUST Repository

    Dreano, Denis

    2017-01-01

    concentration and have practical applications for fisheries operation and harmful algae blooms monitoring. Modelling approaches can be divided between physics-driven (dynamical) approaches, and data-driven (statistical) approaches. Dynamical models are based

  13. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    Energy Technology Data Exchange (ETDEWEB)

    Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-11

    This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and the implementation of adaptation strategies to extreme weather. In the literature there are mainly two approaches to modeling hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm's key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a purely stochastic approach.

  14. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    Full Text Available The article shows the relevance of object-oriented programming approaches when modeling pneumatic units (PU). Based on the analysis of the calculation schemes of pneumatic system aggregates, two basic objects were highlighted, namely a flow cavity and a material point. Basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy with mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and their interactions. The models and interactions of elements are implemented in object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base classes. These classes implement the models of a flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e. the calculation of one coefficient of the method. The paper presents an integration algorithm for the system of differential equations (sketched below). All objects of the PU design scheme are placed in a unidirectional class list. An iterator loop initiates the integration tact for all the objects in the list. Every fourth iteration makes the transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the proposed method features easy enhancement, code reuse, and high reliability
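
    A minimal sketch of this tact-based scheme, with hypothetical class and method names: each object stores its partial Runge-Kutta coefficients, a single list holds all objects, and every fourth tact completes one integration step.

```python
# Sketch of the integration scheme described above: every element of the
# pneumatic-unit (PU) design scheme lives in one list, each "tact" computes one
# Runge-Kutta coefficient for every object, and every fourth tact completes a
# step. Class names and dynamics are illustrative, not the authors' code.

class FlowCavity:
    """Toy cavity: pressure state driven by a net inflow."""
    def __init__(self, p0, inflow):
        self.p, self.inflow, self.k = p0, inflow, []

    def derivative(self, p):
        return 0.5 * self.inflow - 0.01 * p      # illustrative dynamics

    def tact(self, dt):
        # Classic RK4: evaluate the derivative at the staged state.
        offsets = [0.0, 0.5, 0.5, 1.0]
        stage = len(self.k)
        base = self.p + offsets[stage] * dt * (self.k[-1] if self.k else 0.0)
        self.k.append(self.derivative(base))
        if len(self.k) == 4:                     # fourth tact: advance the step
            k1, k2, k3, k4 = self.k
            self.p += dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
            self.k = []

objects = [FlowCavity(100.0, 5.0), FlowCavity(80.0, 3.0)]
dt, tacts = 0.01, 400                            # 400 tacts = 100 RK4 steps
for _ in range(tacts):
    for obj in objects:                          # one tact for every object
        obj.tact(dt)
print([round(o.p, 2) for o in objects])
```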

  15. Modeling drug- and chemical- induced hepatotoxicity with systems biology approaches

    Directory of Open Access Journals (Sweden)

    Sudin Bhattacharya

    2012-12-01

    Full Text Available We provide an overview of computational systems biology approaches as applied to the study of chemical- and drug-induced toxicity. The concept of ‘toxicity pathways’ is described in the context of the 2007 US National Academies of Science report, Toxicity Testing in the 21st Century: A Vision and A Strategy. Pathway mapping and modeling based on network biology concepts are a key component of the vision laid out in this report for a more biologically based analysis of dose-response behavior and the safety of chemicals and drugs. We focus on toxicity of the liver (hepatotoxicity) – a complex phenotypic response with contributions from a number of different cell types and biological processes. We describe three case studies of complementary multi-scale computational modeling approaches to understand the perturbation of toxicity pathways in the human liver as a result of exposure to environmental contaminants and specific drugs. One approach involves the development of a spatial, multicellular virtual tissue model of the liver lobule that combines molecular circuits in individual hepatocytes with cell-cell interactions and blood-mediated transport of toxicants through hepatic sinusoids, to enable quantitative, mechanistic prediction of hepatic dose-response for activation of the AhR toxicity pathway. Simultaneously, methods are being developed to extract quantitative maps of intracellular signaling and transcriptional regulatory networks perturbed by environmental contaminants, using a combination of gene expression and genome-wide protein-DNA interaction data. A predictive physiological model (DILIsym™) to understand drug-induced liver injury (DILI), the most common adverse event leading to termination of clinical development programs and regulatory actions on drugs, is also described. The model initially focuses on reactive metabolite-induced DILI in response to administration of acetaminophen, and spans multiple biological scales.

  16. Multiscale modeling of alloy solidification using a database approach

    Science.gov (United States)

    Tan, Lijian; Zabaras, Nicholas

    2007-11-01

    A two-scale model based on a database approach is presented to investigate alloy solidification. Appropriate assumptions are introduced to describe the behavior of macroscopic temperature, macroscopic concentration, liquid volume fraction and microstructure features. These assumptions lead to a macroscale model with two unknown functions: liquid volume fraction and microstructure features. These functions are computed using information from microscale solutions of selected problems. This work addresses the selection of sample problems relevant to the problem of interest and the utilization of data from the microscale solution of the selected sample problems. A computationally efficient model, which is different from the microscale and macroscale models, is utilized to find relevant sample problems. In this work, the computationally efficient model is a sharp interface solidification model of a pure material. Similarities between the sample problems and the problem of interest are explored by assuming that the liquid volume fraction and microstructure features are functions of solution features extracted from the solution of the computationally efficient model. The solution features of the computationally efficient model are selected as the interface velocity and thermal gradient in the liquid at the time the sharp solid-liquid interface passes through. An analytical solution of the computationally efficient model is utilized to select sample problems relevant to solution features obtained at any location of the domain of the problem of interest. The microscale solution of selected sample problems is then utilized to evaluate the two unknown functions (liquid volume fraction and microstructure features) in the macroscale model. The temperature solution of the macroscale model is further used to improve the estimation of the liquid volume fraction and microstructure features. Interpolation is utilized in the feature space to greatly reduce the number of required
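
    The feature-space interpolation step can be sketched as follows, using SciPy's LinearNDInterpolator; the feature samples and liquid-fraction values are synthetic placeholders for the microscale solutions of the selected sample problems.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Sketch of the database step: microscale solutions of a few sample problems
# are stored as (interface velocity, thermal gradient) -> liquid volume
# fraction, and the macroscale model interpolates in this feature space.
# All numbers are synthetic placeholders.

# Features from the cheap sharp-interface model for the sample problems.
features = np.array([[0.1, 10.0], [0.1, 50.0], [0.5, 10.0], [0.5, 50.0],
                     [0.3, 30.0]])
# Corresponding microscale results (liquid volume fraction).
f_liquid = np.array([0.82, 0.74, 0.65, 0.55, 0.68])

lookup = LinearNDInterpolator(features, f_liquid)

# Macroscale query: features extracted at some location of the real problem.
velocity, gradient = 0.25, 22.0
print("interpolated liquid fraction:", lookup(velocity, gradient).item())
```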

  17. Overview of the FEP analysis approach to model development

    International Nuclear Information System (INIS)

    Bailey, L.

    1998-01-01

    This report heads a suite of documents describing the Nirex model development programme. The programme is designed to provide a clear audit trail from the identification of significant features, events and processes (FEPs) to the models and modelling processes employed within a detailed safety assessment. A five-stage approach has been adopted, which provides a systematic framework for addressing uncertainty and for the documentation of all modelling decisions and assumptions. The five stages are as follows: Stage 1: FEP Analysis - compilation and structuring of a FEP database; Stage 2: Scenario and Conceptual Model Development; Stage 3: Mathematical Model Development; Stage 4: Software Development; Stage 5: Confidence Building. This report describes the development and structuring of a FEP database as a Master Directed Diagram (MDD) and explains how this may be used to identify different modelling scenarios, based upon the identification of scenario-defining FEPs. The methodology describes how the possible evolution of a repository system can be addressed in terms of a base scenario, a broad and reasonable representation of the 'natural' evolution of the system, and a number of variant scenarios representing the effects of probabilistic events and processes. The MDD has been used to identify conceptual models to represent the base scenario, and the interactions between these conceptual models have been systematically reviewed using a matrix diagram technique. This has led to the identification of modelling requirements for the base scenario, against which existing assessment software capabilities have been reviewed. A mechanism for combining probabilistic scenario-defining FEPs to construct multi-FEP variant scenarios has been proposed and trialled using the concept of a 'timeline', a defined sequence of events from which consequences can be assessed. An iterative approach, based on conservative modelling principles, has been proposed for the evaluation of

  18. A multi-model approach to X-ray pulsars

    Directory of Open Access Journals (Sweden)

    Schönherr G.

    2014-01-01

    Full Text Available The emission characteristics of X-ray pulsars are governed by magnetospheric accretion within the Alfvén radius, leading to a direct coupling of accretion column properties and interactions at the magnetosphere. The complexity of the physical processes governing the formation of radiation within the accreted, strongly magnetized plasma has led to several sophisticated theoretical modelling efforts over the last decade, dedicated to either the formation of the broad-band continuum, the formation of cyclotron resonance scattering features (CRSFs) or the formation of pulse profiles. While these individual approaches are powerful in themselves, they quickly reach their limits when aiming at a quantitative comparison to observational data. Too many fundamental parameters describing the formation of the accretion columns and the systems’ overall geometry are unconstrained, and different models are often based on different fundamental assumptions, while everything is intertwined in the observed, highly phase-dependent spectra and energy-dependent pulse profiles. To name just one example: the (phase-variable) line width of the CRSFs is highly dependent on the plasma temperature, the existence of B-field gradients (geometry) and the observation angle, parameters which, in turn, drive the continuum radiation and are driven by the overall two-pole geometry of the light bending model, respectively. This renders a parallel assessment of all available spectral and timing information by a compatible across-models approach indispensable. In a collaboration of theoreticians and observers, we have been working on a model unification project over the last years, bringing together theoretical calculations of the Comptonized continuum, Monte Carlo simulations and radiation transfer calculations of CRSFs, as well as a General Relativity (GR) light bending model for ray tracing of the incident emission pattern from both magnetic poles. The ultimate goal is to implement a

  19. A DYNAMICAL SYSTEM APPROACH IN MODELING TECHNOLOGY TRANSFER

    Directory of Open Access Journals (Sweden)

    Hennie Husniah

    2016-05-01

    Full Text Available In this paper we discuss a mathematical model of two-party technology transfer from a leader to a follower. The model is reconstructed via a dynamical system approach from the known standard Raz and Assa model, and we find some important conclusions which were not discussed in the original model. The model assumes that in the absence of technology transfer from the leader to the follower, both the leader and the follower have the capability to grow independently, with a known upper limit of development. We obtain a rich mathematical structure for the steady-state solution of the model. We discuss a special situation in which the upper limit of the technological development of the follower is higher than that of the leader, but the leader has started earlier than the follower in implementing the technology. In this case we show that a paradox, in which the follower is unable to reach its original upper limit of technological development, can appear whenever the transfer rate is sufficiently high. We propose a new model to increase realism, so that any technology transfer rate can only have a positive effect in accelerating the growth of the follower towards its original upper limit of development.
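
    The leader-follower structure can be sketched with a pair of coupled logistic equations; this is an illustrative system in the spirit of the discussion above, not the exact Raz and Assa formulation. With a sufficiently high transfer rate the coupling drags the follower toward the leader's trajectory, illustrating the paradox.

```python
# Sketch of a leader-follower technology-transfer model: both parties grow
# logistically toward their own upper limits, and a coupling term proportional
# to the gap transfers technology between them. Parameters are illustrative.

rL, KL = 0.30, 1.0      # leader growth rate and upper limit
rF, KF = 0.25, 1.2      # follower limit is higher than the leader's
tau    = 2.0            # technology transfer rate (deliberately high)

def step(L, F, dt):
    dL = rL * L * (1 - L / KL)
    # Signed coupling: a high transfer rate locks the follower onto the
    # leader's trajectory, so the follower stalls near KL although KF > KL.
    dF = rF * F * (1 - F / KF) + tau * (L - F)
    return L + dt * dL, F + dt * dF

L, F = 0.5, 0.05        # the leader started earlier (higher initial level)
for _ in range(4000):   # forward-Euler integration, dt = 0.01
    L, F = step(L, F, 0.01)
print(f"steady state: leader={L:.3f}, follower={F:.3f} (follower limit KF={KF})")
```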

  20. Predicting the emission from an incineration plant - a modelling approach

    International Nuclear Information System (INIS)

    Rohyiza Baan

    2004-01-01

    The emissions from the combustion of Municipal Solid Waste (MSW) have become an important issue in incineration technology. Resulting from unstable combustion conditions, the formation of undesirable compounds such as CO, SO2, NOx, PM10 and dioxins becomes a source of pollutant concentrations in the atmosphere. The impact of emissions on criteria air pollutant concentrations can be obtained directly using ambient air monitoring equipment or predicted using dispersion modelling. The literature shows that the complicated atmospheric processes that occur in nature can be described using mathematical models. This paper highlights the air dispersion model as a tool to relate and simulate the release and dispersion of air pollutants in the atmosphere. The technique is based on a programming approach to develop an air dispersion ground-level concentration model using the Gaussian and Pasquill equations. This model is useful for studying the consequences of various sources of air pollution and for estimating the amount of pollutants released into the air from existing emission sources. With this model, it was found that the difference between actual conditions and the model's predictions is about 5%. (Author)
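
    A minimal sketch of the ground-level-concentration calculation, assuming the standard Gaussian plume formula for an elevated point source with total ground reflection; the power-law dispersion coefficients stand in for the Pasquill-Gifford stability-class fits and all values are illustrative.

```python
import math

# Sketch of a Gaussian plume ground-level concentration (GLC) model for an
# elevated point source. The power-law sigma curves stand in for the
# Pasquill-Gifford stability-class fits; all parameter values are illustrative.

def glc(Q, u, H, x, y):
    """Ground-level concentration (g/m^3) at downwind distance x (m) and
    crosswind offset y (m), for emission rate Q (g/s), wind speed u (m/s)
    and effective stack height H (m)."""
    sigma_y = 0.08 * x ** 0.90        # illustrative fit for one stability class
    sigma_z = 0.06 * x ** 0.85
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * math.exp(-H**2 / (2 * sigma_z**2)))

# Centerline concentration profile downwind of a 40 m stack.
for x in (200.0, 500.0, 1000.0, 2000.0):
    print(f"x = {x:6.0f} m  C = {glc(Q=50.0, u=4.0, H=40.0, x=x, y=0.0):.2e} g/m^3")
```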

  1. Modeling human diseases: an education in interactions and interdisciplinary approaches

    Directory of Open Access Journals (Sweden)

    Leonard Zon

    2016-06-01

    Full Text Available Traditionally, most investigators in the biomedical arena exploit one model system in the course of their careers. Occasionally, an investigator will switch models. The selection of a suitable model system is a crucial step in research design. Factors to consider include the accuracy of the model as a reflection of the human disease under investigation, the numbers of animals needed and ease of husbandry, its physiology and developmental biology, and the ability to apply genetics and harness the model for drug discovery. In my lab, we have primarily used the zebrafish but combined it with other animal models and provided a framework for others to consider the application of developmental biology for therapeutic discovery. Our interdisciplinary approach has led to many insights into human diseases and to the advancement of candidate drugs to clinical trials. Here, I draw on my experiences to highlight the importance of combining multiple models, establishing infrastructure and genetic tools, forming collaborations, and interfacing with the medical community for successful translation of basic findings to the clinic.

  2. A novel approach to multihazard modeling and simulation.

    Science.gov (United States)

    Smith, Silas W; Portelli, Ian; Narzisi, Giuseppe; Nelson, Lewis S; Menges, Fabian; Rekow, E Dianne; Mincer, Joshua S; Mishra, Bhubaneswar; Goldfrank, Lewis R

    2009-06-01

    To develop and apply a novel modeling approach to support medical and public health disaster planning and response using a sarin release scenario in a metropolitan environment. An agent-based disaster simulation model was developed incorporating the principles of dose response, surge response, and psychosocial characteristics superimposed on topographically accurate geographic information system architecture. The modeling scenarios involved passive and active releases of sarin in multiple transportation hubs in a metropolitan city. Parameters evaluated included emergency medical services, hospital surge capacity (including implementation of disaster plan), and behavioral and psychosocial characteristics of the victims. In passive sarin release scenarios of 5 to 15 L, mortality increased nonlinearly from 0.13% to 8.69%, reaching 55.4% with active dispersion, reflecting higher initial doses. Cumulative mortality rates from releases in 1 to 3 major transportation hubs similarly increased nonlinearly as a function of dose and systemic stress. The increase in mortality rate was most pronounced in the 80% to 100% emergency department occupancy range, analogous to the previously observed queuing phenomenon. Effective implementation of hospital disaster plans decreased mortality and injury severity. Decreasing ambulance response time and increasing available responding units reduced mortality among potentially salvageable patients. Adverse psychosocial characteristics (excess worry and low compliance) increased demands on health care resources. Transfer to alternative urban sites was possible. An agent-based modeling approach provides a mechanism to assess complex individual and systemwide effects in rare events.

  3. Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.

    Science.gov (United States)

    Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs) shows how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.

  4. A comprehensive approach to age-dependent dosimetric modeling

    International Nuclear Information System (INIS)

    Leggett, R.W.; Cristy, M.; Eckerman, K.F.

    1986-01-01

    In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper we discuss a comprehensive approach to age-dependent dosimetric modeling in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks

  5. A comprehensive approach to age-dependent dosimetric modeling

    International Nuclear Information System (INIS)

    Leggett, R.W.; Cristy, M.; Eckerman, K.F.

    1987-01-01

    In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper a comprehensive approach to age-dependent dosimetric modeling is discussed in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks. 16 refs.; 3 figs.; 1 table

  6. Micromechanical modeling and inverse identification of damage using cohesive approaches

    International Nuclear Information System (INIS)

    Blal, Nawfal

    2013-01-01

    In this study a micromechanical model is proposed for a collection of cohesive zone models embedded between each two elements of a standard cohesive-volumetric finite element method. An equivalent 'matrix-inclusions' composite is proposed as a representation of the cohesive-volumetric discretization. The overall behaviour is obtained using homogenization approaches (the Hashin-Shtrikman scheme and the P. Ponte Castaneda approach). The derived model deals with elastic, brittle and ductile materials. It is valid whatever the triaxiality loading rate and the shape of the cohesive law, and leads to direct relationships between the overall material properties, the local cohesive parameters and the mesh density. First, rigorous bounds on the normal and tangential cohesive stiffnesses are obtained, leading to a suitable control of the inherent artificial elasticity loss induced by intrinsic cohesive models. Second, theoretical criteria on damageable and ductile cohesive parameters are established (cohesive peak stress, critical separation, cohesive failure energy, ...). These criteria allow a practical calibration of the cohesive zone parameters as functions of the overall material properties and the mesh length. The main interest of such a calibration is its promising capacity to lead to a mesh-insensitive overall response in surface damage. (author) [fr]

  7. A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model

    Science.gov (United States)

    Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue

    2015-10-01

    In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states and hence the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations, and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization processes, this approach updates the reference state with analysis data and updates the perturbed states with the sum of analysis data and the difference between the perturbed and the reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of analysis data error and subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error becomes smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initializations by using analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in a higher modeling skill.
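
    A minimal sketch of the piecewise re-initialization, with the logistic map standing in for the shallow-water model: at the end of each subinterval the reference run is reset to the analysis and the perturbed run to the analysis plus the current perturbed-reference difference.

```python
import numpy as np

# Sketch of the piecewise approach on a toy chaotic system (the logistic map
# stands in for the shallow-water model). At the end of each subinterval the
# reference run is reset to the analysis, and the perturbed run to
# analysis + (perturbed - reference), preserving the modeled sensitivity.

rng = np.random.default_rng(3)

def model(x, r, n):               # integrate n steps of the logistic map
    for _ in range(n):
        x = r * x * (1 - x)
    return x

def truth(x, n):                  # "nature" run generating the analysis data
    return model(x, 3.7, n)

x0, total, sub = 0.2, 400, 20     # total steps and subinterval length
ref, pert, xt = x0, x0, x0
r_ref, r_pert = 3.69, 3.71        # "reference" vs "perturbed" model parameter

for _ in range(total // sub):
    ref = model(ref, r_ref, sub)
    pert = model(pert, r_pert, sub)
    xt = truth(xt, sub)
    analysis = xt + rng.normal(0.0, 0.001)   # analysis = truth + small error
    # Re-initialization step of the piecewise approach:
    ref, pert = analysis, analysis + (pert - ref)

print("modeled sensitivity:", pert - ref)
```

    Shortening the subinterval tightens the error control but increases the number of re-initializations, which is the trade-off the experiments in the abstract explore.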

  8. Numerical modelling of carbonate platforms and reefs: approaches and opportunities

    Energy Technology Data Exchange (ETDEWEB)

    Dalmasso, H.; Montaggioni, L.F.; Floquet, M. [Universite de Provence, Marseille (France). Centre de Sedimentologie-Palaeontologie; Bosence, D. [Royal Holloway University of London, Egham (United Kingdom). Dept. of Geology

    2001-07-01

    This paper compares different computing procedures that have been utilized in simulating shallow-water carbonate platform development. Based on our geological knowledge we can usually give a rather accurate qualitative description of the mechanisms controlling geological phenomena. Further description requires the use of computer stratigraphic simulation models that allow quantitative evaluation and understanding of the complex interactions of sedimentary depositional carbonate systems. The roles of modelling include: (1) encouraging accuracy and precision in data collection and process interpretation (Watney et al., 1999); (2) providing a means to quantitatively test interpretations concerning the control of various mechanisms on producing sedimentary packages; (3) predicting or extrapolating results into areas of limited control; (4) gaining new insights regarding the interaction of parameters; (5) helping focus future studies to resolve specific problems. This paper addresses two main questions, namely: (1) What are the advantages and disadvantages of various types of models? (2) How well do models perform? In this paper we compare and discuss the application of six numerical models: CARBONATE (Bosence and Waltham, 1990), FUZZIM (Nordlund, 1999), CARBPLAT (Bosscher, 1992), DYNACARB (Li et al., 1993), PHIL (Bowman, 1997) and SEDPAK (Kendall et al., 1991). The comparison, testing and evaluation of these models allow one to gain a better knowledge and understanding of the controlling parameters of carbonate platform development, which are necessary for modelling. Evaluating numerical models, critically comparing results from models using different approaches, and pushing experimental tests to their limits provide an effective vehicle to improve and develop new numerical models. A main feature of this paper is to closely compare the performance of two numerical models: a forward model (CARBONATE) and a fuzzy logic model (FUZZIM). These two models use common

  9. Modeling the cometary environment using a fluid approach

    Science.gov (United States)

    Shou, Yinsi

    Comets are believed to have preserved the building material of the early solar system and to hold clues to the origin of life on Earth. Abundant remote observations of comets by telescopes and the in-situ measurements by a handful of space missions reveal that the cometary environments are complicated by various physical and chemical processes among the neutral gases and dust grains released from comets, cometary ions, and the solar wind in the interplanetary space. Therefore, physics-based numerical models are in demand to interpret the observational data and to deepen our understanding of the cometary environment. In this thesis, three models using a fluid approach, which include important physical and chemical processes underlying the cometary environment, have been developed to study the plasma, neutral gas, and dust grains, respectively. Although models based on the fluid approach have limitations in capturing all of the correct physics for certain applications, especially for very low gas density environments, they are computationally much more efficient than alternatives. In the simulations of comet 67P/Churyumov-Gerasimenko at various heliocentric distances with a wide range of production rates, our multi-fluid cometary neutral gas model and multi-fluid cometary dust model have achieved results comparable to the Direct Simulation Monte Carlo (DSMC) model, which is based on a kinetic approach that is valid in all collisional regimes. Therefore, our model is a powerful alternative to the particle-based model, especially for some computationally intensive simulations. Capable of accounting for the varying heating efficiency under various physical conditions in a self-consistent way, the multi-fluid cometary neutral gas model is a good tool to study the dynamics of the cometary coma with different production rates and heliocentric distances. The modeled H2O expansion speeds reproduce the general trend and the speed's nonlinear dependencies on production rate

  10. Statistical approach for uncertainty quantification of experimental modal model parameters

    DEFF Research Database (Denmark)

    Luczak, M.; Peeters, B.; Kahsin, M.

    2014-01-01

    Composite materials are widely used in the manufacture of aerospace and wind energy structural components. These load-carrying structures are subjected to dynamic time-varying loading conditions. Robust structural dynamics identification procedures impose tight constraints on the quality of modal models...... represent different complexity levels ranging from coupon, through sub-component, up to fully assembled aerospace and wind energy structural components made of composite materials. The proposed method is demonstrated on two application cases of a small and a large wind turbine blade........ This paper aims at a systematic approach for uncertainty quantification of the parameters of modal models estimated from experimentally obtained data. Statistical analysis of modal parameters is implemented to derive an assessment of the entire modal model uncertainty measure. Investigated structures

  11. Formal approach to modeling of modern Information Systems

    Directory of Open Access Journals (Sweden)

    Bálint Molnár

    2016-01-01

    Full Text Available Most recently, the concept of business documents has started to play a double role. On one hand, a business document (a word-processing text or calculation sheet) can be used as a specification tool; on the other hand, the business document is an immanent constituent of business processes, and thereby an essential component of business Information Systems. The recent tendency is that the majority of documents and their contents within business Information Systems remain in semi-structured format, and a lesser part of documents is transformed into schemas of structured databases. In order to keep the emerging situation in hand, we suggest the creation of (1) a theoretical framework for modeling business Information Systems and (2) a design method for practical application based on the theoretical model that provides the structuring principles. The modeling approach that focuses on documents and their interrelationships with business processes assists in perceiving the activities of modern Information Systems.

  12. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that determine the best model fit with respect to experimental data. We have developed an environment that distributes each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  13. International energy market dynamics: a modelling approach. Tome 1

    International Nuclear Information System (INIS)

    Nachet, S.

    1996-01-01

    This work is an attempt to model the international energy market and reproduce the behaviour of both energy demand and supply. Energy demand was represented using a sector-versus-source approach. For developing countries, the existing links between the economic and energy sectors were analysed. Energy supply is exogenous for energy sources other than oil and natural gas. For hydrocarbons, the exploration-production process was modelled and produced figures such as production yield, exploration effort index, etc. The model built is econometric and is solved using a software package that was constructed for this purpose. We explore the future of the energy market using three scenarios and obtain projections by 2010 for energy demand per source and oil and natural gas supply per region. Economic variables are used to produce different indicators such as energy intensity, energy per capita, etc. (author). 378 refs., 26 figs., 35 tabs., 11 appends

  14. International energy market dynamics: a modelling approach. Tome 2

    International Nuclear Information System (INIS)

    Nachet, S.

    1996-01-01

    This work is an attempt to model the international energy market and reproduce the behaviour of both energy demand and supply. Energy demand was represented using a sector-versus-source approach. For developing countries, the existing links between the economic and energy sectors were analysed. Energy supply is exogenous for energy sources other than oil and natural gas. For hydrocarbons, the exploration-production process was modelled and produced figures such as production yield, exploration effort index, etc. The model built is econometric and is solved using a software package that was constructed for this purpose. We explore the future of the energy market using three scenarios and obtain projections by 2010 for energy demand per source and oil and natural gas supply per region. Economic variables are used to produce different indicators such as energy intensity, energy per capita, etc. (author). 378 refs., 26 figs., 35 tabs., 11 appends

  15. Electrochemo-hydrodynamics modeling approach for a uranium electrowinning cell

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.R.; Paek, S.; Ahn, D.H., E-mail: krkim1@kaeri.re.kr, E-mail: swpaek@kaeri.re.kr, E-mail: dhahn2@kaeri.re.kr [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Park, J.Y.; Hwang, I.S., E-mail: d486916@snu.ac.kr, E-mail: hisline@snu.ac.kr [Department of Nuclear Engineering, Seoul National University (Korea, Republic of)

    2011-07-01

    This study demonstrated a simulation based on the full coupling of electrochemical kinetics with the 3-dimensional transport of ionic species in a flowing molten-salt electrolyte through a simplified channel cell of a uranium electrowinner. The dependence of ionic electro-transport on the velocity of the stationary electrolyte flow was studied using a coupled electrochemical reaction model. The present model was implemented in a commercially available computational fluid dynamics (CFD) platform, Ansys-CFX, using its customization ability via user defined functions. The main parameters characterizing the effect of the turbulent flow of an electrolyte between two planar electrodes were demonstrated by means of a CFD-based multiphysics simulation approach. The simulation was carried out for the case of uranium electrowinning characteristics in a stream of molten salt electrolyte. This approach took into account the concentration profile at the electrode surface to represent the variation of the diffusion limited current density as a function of the flow characteristics and of the applied current density. It was able to predict the conventional current-voltage relation in addition to details of the electrolyte fluid dynamics and electrochemical variables, such as the flow field, species concentrations, potential, and current distributions throughout the current driven cell. (author)

  16. Electrochemo-hydrodynamics modeling approach for a uranium electrowinning cell

    International Nuclear Information System (INIS)

    Kim, K.R.; Paek, S.; Ahn, D.H.; Park, J.Y.; Hwang, I.S.

    2011-01-01

    This study demonstrated a simulation based on the full coupling of electrochemical kinetics with the 3-dimensional transport of ionic species in a flowing molten-salt electrolyte through a simplified channel cell of a uranium electrowinner. The dependence of ionic electro-transport on the velocity of the stationary electrolyte flow was studied using a coupled electrochemical reaction model. The present model was implemented in a commercially available computational fluid dynamics (CFD) platform, Ansys-CFX, using its customization ability via user defined functions. The main parameters characterizing the effect of the turbulent flow of an electrolyte between two planar electrodes were demonstrated by means of a CFD-based multiphysics simulation approach. The simulation was carried out for the case of uranium electrowinning characteristics in a stream of molten salt electrolyte. This approach took into account the concentration profile at the electrode surface to represent the variation of the diffusion limited current density as a function of the flow characteristics and of the applied current density. It was able to predict the conventional current-voltage relation in addition to details of the electrolyte fluid dynamics and electrochemical variables, such as the flow field, species concentrations, potential, and current distributions throughout the current driven cell. (author)

  17. New business models for electric cars-A holistic approach

    International Nuclear Information System (INIS)

    Kley, Fabian; Lerch, Christian; Dallinger, David

    2011-01-01

    Climate change and global resource shortages have led to rethinking traditional individual mobility services based on combustion engines. As a consequence of technological improvements, the first electric vehicles are now being introduced and greater market penetration can be expected. But any wider implementation of battery-powered electrical propulsion systems in the future will give rise to new challenges for both the traditional automotive industry and other new players, e.g. battery manufacturers, the power supply industry and other service providers. Different application cases of electric vehicles are currently being discussed, which means that numerous business models could emerge, leading to new shares in value creation and involving new players. Consequently, individual stakeholders are uncertain about which business models are really effective with regard to targeting a profitable overall concept. Therefore, this paper aims to define a holistic approach to developing business models for electric mobility, which analyzes the system as a whole on the one hand and provides decision support for affected enterprises on the other. To do so, the basic elements of electric mobility are considered and topical approaches to business models for various stakeholders are discussed. The paper concludes by presenting a systemic instrument for business models based on morphological methods. - Highlights: → We present a systemic instrument to analyze business models for electric vehicles. → It provides decision support for enterprises dealing with electric vehicle innovations. → It combines business aspects of the triad of vehicle concepts, infrastructure, and system integration. → In the market, activities in all domains have been initiated, but often with undefined or unclear structures.

  18. A probabilistic approach to the drag-based model

    Science.gov (United States)

    Napoletano, Gianluca; Forte, Roberta; Moro, Dario Del; Pietropaolo, Ermanno; Giovannelli, Luca; Berrilli, Francesco

    2018-02-01

    The forecast of the time of arrival (ToA) of a coronal mass ejection (CME) to Earth is of critical importance for our high-technology society and for any future manned exploration of the Solar System. As critical as the forecast accuracy is the knowledge of its precision, i.e. the error associated with the estimate. We propose a statistical approach for the computation of the ToA using the drag-based model by introducing probability distributions, rather than exact values, as input parameters, thus allowing the evaluation of the uncertainty on the forecast. We test this approach using a set of CMEs whose transit times are known, and obtain extremely promising results: the average value of the absolute differences between measurement and forecast is 9.1 h, and half of these residuals are within the estimated errors. These results suggest that this approach deserves further investigation. We are working to realize a real-time implementation which ingests the outputs of automated CME tracking algorithms as inputs to create a database of events useful for a further validation of the approach.
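
    A minimal Monte Carlo sketch of the statistical approach just described, in Python: the drag-based model inputs (CME liftoff speed, ambient solar-wind speed, drag parameter) are drawn from assumed probability distributions and propagated through the analytic drag-based kinematics out to 1 AU, so the ToA comes out as a distribution rather than a point value. All numerical settings below are illustrative assumptions, not the authors' values.

        import numpy as np
        from scipy.optimize import brentq

        AU = 1.496e8                 # km
        R0 = 20.0 * 6.96e5           # km; assumed CME liftoff distance (20 solar radii)

        def dbm_distance(t, v0, w, gamma):
            # Analytic drag-based-model kinematics, decelerating branch (v0 > w).
            return R0 + w * t + np.log(1.0 + gamma * (v0 - w) * t) / gamma

        rng = np.random.default_rng(1)
        n = 10000
        w = rng.normal(400.0, 50.0, n)                         # km/s; assumed solar-wind pdf
        v0 = np.maximum(rng.normal(800.0, 100.0, n), w + 1.0)  # km/s; assumed CME speed pdf
        gamma = np.abs(rng.normal(0.2e-7, 0.05e-7, n))         # 1/km; assumed drag parameter pdf

        # Root-find the transit time r(t) = 1 AU for every sampled parameter set.
        toa_h = np.array([brentq(lambda t: dbm_distance(t, v, wi, g) - AU, 1.0, 5e6)
                          for v, wi, g in zip(v0, w, gamma)]) / 3600.0

        print(f"ToA = {toa_h.mean():.1f} +/- {toa_h.std():.1f} h after liftoff")

    The spread of the resulting ToA sample plays the role of the forecast error bar that the record argues is as important as the forecast itself.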

  19. Leader communication approaches and patient safety: An integrated model.

    Science.gov (United States)

    Mattson, Malin; Hellgren, Johnny; Göransson, Sara

    2015-06-01

    Leader communication is known to influence a number of employee behaviors. When it comes to the relationship between leader communication and safety, the evidence is scarcer and more ambiguous. The aim of the present study is to investigate whether and in what way leader communication relates to safety outcomes. The study examines two leader communication approaches: leader safety priority communication and feedback to subordinates. These approaches were assumed to affect safety outcomes via different employee behaviors. Questionnaire data, collected from 221 employees at two hospital wards, were analyzed using structural equation modeling. The two examined communication approaches were both positively related to safety outcomes, although leader safety priority communication was mediated by employee compliance and feedback communication by organizational citizenship behaviors. The findings suggest that leader communication plays a vital role in improving organizational and patient safety and that different communication approaches seem to positively affect different but equally essential employee safety behaviors. The results highlight the necessity for leaders to engage in one-way communication of safety values as well as in more relational feedback communication with their subordinates in order to enhance patient safety.

  20. Magnetic field approaches in dc thermal plasma modelling

    International Nuclear Information System (INIS)

    Freton, P; Gonzalez, J J; Masquere, M; Reichert, Frank

    2011-01-01

    The self-induced magnetic field plays an important role in thermal plasma configurations generated by electric arcs, as it generates velocity through Lorentz forces. A good representation of the magnetic field is thus necessary in the models. Several approaches exist to calculate the self-induced magnetic field, such as the Maxwell-Ampere formulation, the vector potential approach combined with different kinds of boundary conditions, or the Biot and Savart (B and S) formulation. The calculation of the self-induced magnetic field is in itself a difficult problem, and only a few papers in the thermal plasma community address this subject. In this study, different approaches with different boundary conditions are applied to two geometries in order to compare the methods and their limitations. The calculation time is also one of the criteria for the choice of the method, and a compromise must be found between method precision and computation time. The study shows the importance of the representation of the current carrying path in the electrode on the deduced magnetic field. The best compromise consists of using the B and S formulation on the walls and/or edges of the calculation domain to determine the boundary conditions and then solving for the vector potential in a 2D system. This approach provides results identical to those obtained using the B and S formulation over the entire domain, but with a considerable decrease in calculation time.

  1. A configurable component-based software system for magnetic field measurements

    Energy Technology Data Exchange (ETDEWEB)

    Nogiec, J.M.; DiMarco, J.; Kotelnikov, S.; Trombly-Freytag, K.; Walbridge, D.; Tartaglia, M.; /Fermilab

    2005-09-01

    A new software system to test accelerator magnets has been developed at Fermilab. The magnetic measurement technique involved employs a single stretched wire to measure alignment parameters and magnetic field strength. The software for the system is built on top of a flexible component-based framework, which allows for easy reconfiguration and runtime modification. Various user interface, data acquisition, analysis, and data persistence components can be configured to form different measurement systems that are tailored to specific requirements (e.g., involving magnet type or test stand). The system can also be configured with various measurement sequences or tests, each of them controlled by a dedicated script. It is capable of working interactively as well as executing a preselected sequence of tests. Each test can be parameterized to fit the specific magnet type or test stand requirements. The system has been designed with portability in mind and is capable of working on various platforms, such as Linux, Solaris, and Windows. It can be configured to use a local data acquisition subsystem or a remote data acquisition computer, such as a VME processor running VxWorks. All hardware-oriented components have been developed with a simulation option that allows for running and testing measurements in the absence of data acquisition hardware.
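
    The reconfigurable component assembly described in this record can be pictured with a small registry sketch; the component names, classes and wiring below are hypothetical illustrations, not the actual Fermilab framework API. Swapping the simulated DAQ for real hardware would then be a one-line configuration change.

        # Hypothetical component registry: names, classes and the wiring below
        # are illustrative assumptions, not the Fermilab system's real API.
        REGISTRY = {}

        def component(name):
            def register(cls):
                REGISTRY[name] = cls
                return cls
            return register

        @component("daq.simulated")
        class SimulatedDAQ:
            def read(self):
                return 0.042  # fake field-strength sample, tesla

        @component("analysis.print")
        class PrintAnalysis:
            def process(self, sample):
                print(f"field = {sample:.3f} T")

        def build_system(config):
            """Assemble a measurement system from a list of component names."""
            return [REGISTRY[name]() for name in config]

        # Reconfiguration = editing this list (e.g. "daq.vme" instead of "daq.simulated").
        daq, analysis = build_system(["daq.simulated", "analysis.print"])
        analysis.process(daq.read())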

  2. Selection of independent components based on cortical mapping of electromagnetic activity

    Science.gov (United States)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.
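
    A structural sketch of the first selection method (dominant components), under heavy assumptions: the inverse operator, the parcellation and the threshold below are random stand-ins, so the sketch only shows the shape of the computation (ICA, mapping of mixing columns to source space, per-region component spectra, thresholding), not a working neuroimaging pipeline.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n_sensors, n_times, n_sources, n_comp = 32, 1000, 200, 10

        X = rng.standard_normal((n_times, n_sensors))   # stand-in MEG/EEG recordings
        ica = FastICA(n_components=n_comp, random_state=0)
        S = ica.fit_transform(X)                        # component time courses
        A = ica.mixing_                                 # (n_sensors, n_comp) topographies

        # Assumed inverse operator mapping sensor topographies to cortical sources,
        # and an assumed parcellation of the sources into 4 regions.
        K = rng.standard_normal((n_sources, n_sensors)) # stand-in inverse operator
        labels = rng.integers(0, 4, n_sources)          # region label per source

        src = K @ A                                     # (n_sources, n_comp) cortical maps
        # "Component spectrum" per region: amplitude of its parceled activity
        # enumerated across all components.
        spectra = np.array([np.abs(src[labels == r]).mean(axis=0) for r in range(4)])

        dominant_region = 2                             # assumed region of interest
        score = spectra[dominant_region]
        selected = np.where(score > score.mean() + score.std())[0]
        print("dominant components for region", dominant_region, ":", selected)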

  3. Robust and Effective Component-based Banknote Recognition for the Blind.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, Yingli

    2012-11-01

    We develop a novel camera-based computer vision technology to automatically recognize banknotes to assist visually impaired people. Our banknote recognition system is robust and effective with the following features: 1) high accuracy: a high true recognition rate and a low false recognition rate; 2) robustness: it handles a variety of currency designs and bills in various conditions; 3) high efficiency: it recognizes banknotes quickly; and 4) ease of use: it helps blind users to aim at the target for image capture. To make the system robust to a variety of conditions including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework using Speeded Up Robust Features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect whether there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system are evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves a 100% true recognition rate and a 0% false recognition rate. Our banknote recognition system has also been tested by blind users.

  4. Linear mixed-effects modeling approach to FMRI group analysis.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) modeling continuous explanatory variables (covariates) in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or infeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that LME modeling keeps a balance between the control of false positives and the sensitivity.
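
    As a generic illustration of an LME fit of the kind discussed (not the authors' FMRI implementation), the following sketch fits a random-intercept model to invented multi-run, multi-subject effect estimates with statsmodels; the data, formula and effect sizes are all made up.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n_subj, n_runs = 20, 4
        subj = np.repeat(np.arange(n_subj), n_runs)
        run = np.tile(np.arange(n_runs), n_subj)
        age = np.repeat(rng.normal(30, 8, n_subj), n_runs)        # between-subject covariate
        subj_eff = np.repeat(rng.normal(0, 0.5, n_subj), n_runs)  # true random intercepts
        beta = 1.0 + 0.02 * age + subj_eff + rng.normal(0, 0.3, n_subj * n_runs)

        df = pd.DataFrame({"beta": beta, "age": age, "subject": subj, "run": run})

        # A random intercept per subject accounts for the within-subject correlation
        # across runs; 'age' enters as a continuous covariate.
        model = smf.mixedlm("beta ~ age", df, groups=df["subject"])
        result = model.fit()
        print(result.summary())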

  5. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected for quantification of the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
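
    The two-step screening-then-quantification workflow can be sketched with the SALib package on a toy three-parameter function standing in for the hydrological model; note that the study additionally fits a response surface meta-model (RSMSobol) before computing the Sobol indices, a step omitted in this simplified sketch.

        import numpy as np
        from SALib.sample.morris import sample as morris_sample
        from SALib.analyze.morris import analyze as morris_analyze
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["k", "c", "b"],          # stand-ins for hydrological parameters
            "bounds": [[0, 1], [0, 1], [0, 1]],
        }

        def model(X):                          # toy surrogate for the rainfall-runoff model
            return X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.01 * X[:, 2]

        # Step 1: Morris screening for a qualitative ranking of the parameters.
        Xm = morris_sample(problem, N=100, num_levels=4)
        res_m = morris_analyze(problem, Xm, model(Xm), num_levels=4)
        print("mu*:", dict(zip(problem["names"], np.round(res_m["mu_star"], 3))))

        # Step 2: variance-based Sobol indices for the retained parameters.
        Xs = saltelli.sample(problem, 1024)
        res_s = sobol.analyze(problem, model(Xs))
        print("S1 :", dict(zip(problem["names"], np.round(res_s["S1"], 3))))
        print("ST :", dict(zip(problem["names"], np.round(res_s["ST"], 3))))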

  6. Model predictive control approach for a CPAP-device

    Directory of Open Access Journals (Sweden)

    Scheel Mathias

    2017-09-01

    Full Text Available The obstructive sleep apnoea syndrome (OSAS) is characterized by a collapse of the upper respiratory tract, resulting in a reduction of the blood oxygen concentration and an increase of the carbon dioxide (CO2) concentration, which causes repeated sleep disruptions. The gold standard to treat the OSAS is the continuous positive airway pressure (CPAP) therapy. The continuous pressure keeps the upper airway open and prevents the collapse of the upper respiratory tract and the pharynx. Most of the available CPAP devices cannot maintain the pressure reference [1]. In this work a model predictive control approach is provided. This control approach has the possibility to include the patient’s breathing effort into the calculation of the control variable. Therefore a patient-individualized control strategy can be developed.
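
    A generic receding-horizon sketch of such a controller, assuming a first-order discrete pressure model with the patient's breathing effort entering as a known disturbance forecast; the plant coefficients, weights and horizon below are invented, and this is not the authors' controller.

        import numpy as np

        # Assumed discrete pressure dynamics: p[k+1] = a*p[k] + b*u[k] + d[k],
        # where d[k] is the (predicted) disturbance from the patient's breathing effort.
        a, b, lam, N = 0.9, 0.5, 0.1, 10       # plant, input gain, input weight, horizon

        def mpc_step(p0, ref, d_forecast):
            """One receding-horizon step: return the first optimal blower input."""
            # Stacked predictions: P = F*p0 + G*U + H*D (free + forced + disturbance).
            F = np.array([a ** (k + 1) for k in range(N)])
            G = np.zeros((N, N))
            H = np.zeros((N, N))
            for i in range(N):
                for j in range(i + 1):
                    G[i, j] = a ** (i - j) * b
                    H[i, j] = a ** (i - j)
            target = ref - F * p0 - H @ d_forecast
            # Unconstrained quadratic cost: ||P - ref||^2 + lam*||U||^2.
            U = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ target)
            return U[0]

        # Closed-loop simulation with an assumed sinusoidal breathing disturbance.
        p, ref = 0.0, 10.0                     # cmH2O
        for k in range(50):
            d = -0.5 * np.sin(0.4 * (k + np.arange(N)))   # breathing-effort forecast
            u = mpc_step(p, ref, d)
            p = a * p + b * u + d[0]
        print(f"pressure after 50 steps: {p:.2f} cmH2O (target {ref})")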

  7. Optimizing nitrogen fertilizer use: Current approaches and simulation models

    International Nuclear Information System (INIS)

    Baethgen, W.E.

    2000-01-01

    Nitrogen (N) is the most common limiting nutrient in agricultural systems throughout the world. Crops need sufficient available N to achieve optimum yields and adequate grain-protein content. Consequently, sub-optimal rates of N fertilizers typically cause lower economic benefits for farmers. On the other hand, excessive N fertilizer use may result in environmental problems such as nitrate contamination of groundwater and emission of N2O and NO. In spite of the economic and environmental importance of good N fertilizer management, the development of optimum fertilizer recommendations is still a major challenge in most agricultural systems. This article reviews the approaches most commonly used for making N recommendations: expected yield level, soil testing and plant analysis (including quick tests). The paper introduces the application of simulation models that complement traditional approaches, and includes some examples of current applications in Africa and South America. (author)

  8. Modeling of problems of projection: A non-countercyclic approach

    Directory of Open Access Journals (Sweden)

    Jason Ginsburg

    2016-06-01

    Full Text Available This paper describes a computational implementation of the recent Problems of Projection (POP) approach to the study of language (Chomsky 2013; 2015). While adopting the basic proposals of POP, notably with respect to how labeling occurs, we (a) attempt to formalize the basic proposals of POP, and (b) develop new proposals that overcome some problems with POP that arise with respect to cyclicity, labeling, and wh-movement operations. We show how this approach accounts for simple declarative sentences, ECM constructions, and constructions that involve long-distance movement of a wh-phrase (including the that-trace effect). We implemented these proposals with a computer model that automatically constructs step-by-step derivations of target sentences, thus making it possible to verify that these proposals work.

  9. Anomalous superconductivity in the tJ model; moment approach

    DEFF Research Database (Denmark)

    Sørensen, Mads Peter; Rodriguez-Nunez, J.J.

    1997-01-01

    By extending the moment approach of Nolting (Z. Phys. 225 (1972) 25) to the superconducting phase, we have constructed the one-particle spectral functions (diagonal and off-diagonal) for the tJ model in any dimension. We propose that both the diagonal and the off-diagonal spectral functions ... Hartree shift, which in the end enlarges the bandwidth of the free carriers, allowing us to take relatively high values of J/t and allowing superconductivity to live in the T_c-rho phase diagram, in agreement with numerical calculations in a cluster. We have calculated the static spin susceptibility, chi(T), and the specific heat, C_v(T), within the moment approach. We find that all the relevant physical quantities show the signature of superconductivity at T_c in the form of kinks (anomalous behavior) or jumps, for low density, in agreement with recently published literature, showing a generic...

  10. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    NARCIS (Netherlands)

    P.C. Austin (Peter); D. van Klaveren (David); Y. Vergouwe (Yvonne); D. Nieboer (Daan); D.S. Lee (Douglas); E.W. Steyerberg (Ewout)

    2016-01-01

    Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We

  11. CM5: A pre-Swarm magnetic field model based upon the comprehensive modeling approach

    DEFF Research Database (Denmark)

    Sabaka, T.; Olsen, Nils; Tyler, Robert

    2014-01-01

    We have developed a model based upon the very successful Comprehensive Modeling (CM) approach using recent CHAMP, Ørsted, SAC-C and observatory hourly-means data from September 2000 to the end of 2013. This CM, called CM5, was derived from the algorithm that will provide a consistent line of Leve...

  12. Axial turbomachine modelling with a 1D axisymmetric approach

    International Nuclear Information System (INIS)

    Tauveron, Nicolas; Saez, Manuel; Ferrand, Pascal; Leboeuf, Francis

    2007-01-01

    This work concerns the design and safety analysis of direct cycle gas cooled reactors. The estimation of compressor and turbine performance in transient operations is of high importance for the designer. The first goal of this study is to provide a description of compressor behaviour in unstable conditions with a better understanding than models based on performance maps (the 'traditional' 0D approach). A supplementary objective is to provide a coherent description of the turbine behaviour. The turbomachine modelling approach consists of the solution of 1D axisymmetric Navier-Stokes equations on an axial grid inside the turbomachine: mass, axial momentum, circumferential momentum and total-enthalpy balances are written. Blade forces are taken into account by using steady compressor or turbine blade cascade correlations. A particular effort has been devoted to generating and testing correlations in low mass flow and negative mass flow regimes, based on experimental data. The model is tested on open literature cases from the gas turbine aircraft community. For the compressor and turbine, steady situations are fairly well described, especially for medium and high mass flow rates. The dynamic behaviour of the compressor is also quite well described, even in unstable operation (surge): qualitative tendencies (role of plenum volume and role of throttle) and some quantitative characteristics (frequency) are in good agreement with experimental data. The application to transient simulations of gas cooled nuclear reactors is concentrated on the hypothetical 10 in. break accident. The results point out the importance of the location of the pipe rupture in a hypothetical break event. In some detailed cases, compressor surge and back flow through the circuit can occur. In order to be used in a design phase, a simplified model of surge has also been developed. This simplified model is applied to the gas fast reactor (GFR) and compares quite favourably with 1D axisymmetric simulation results

  13. A parsimonious approach to modeling animal movement data.

    Directory of Open Access Journals (Sweden)

    Yann Tremblay

    Full Text Available Animal tracking is a growing field in ecology, and previous work has shown that simple speed filtering of tracking data is not sufficient and that improvement of tracking location estimates is possible. To date, this has required methods that are complicated and often time-consuming (state-space models), resulting in limited application of this technique and the potential for analysis errors due to poor understanding of the fundamental framework behind the approach. We describe and test an alternative and intuitive approach consisting of bootstrapping random walks biased by forward particles. The model uses recorded data accuracy estimates, and can assimilate other sources of data such as sea-surface temperature, bathymetry and/or physical boundaries. We tested our model using ARGOS and geolocation tracks of elephant seals that also carried GPS tags in addition to PTTs, enabling true validation. Among pinnipeds, elephant seals are extreme divers that spend little time at the surface, which considerably impacts the quality of both ARGOS and light-based geolocation tracks. Despite such low overall track quality, our model provided location estimates within 4.0, 5.5 and 12.0 km of the true location 50% of the time, and within 9, 10.5 and 20.0 km 90% of the time, for above average, average and below average elephant seal ARGOS track qualities, respectively. With geolocation data, 50% of errors were less than 104.8 km (<0.94 degrees), and 90% were less than 199.8 km (<1.80 degrees). Larger errors were due to a lack of sea-surface temperature gradients. In addition, we show that our model is flexible enough to solve the obstacle avoidance problem by assimilating high resolution coastline data. This reduced the number of invalid on-land locations by almost an order of magnitude. The method is intuitive, flexible and efficient, promising extensive utilization in future research.
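
    A much-simplified sketch of the bootstrapped biased-random-walk idea: segment endpoints are perturbed according to their reported accuracies, random walks are drawn that drift toward the forward endpoint, and the bootstrap average gives the track estimate. The step noise and the error values below are invented, and the assimilation of environmental data is omitted.

        import numpy as np

        rng = np.random.default_rng(0)

        def bootstrap_track(fixes, errors, n_walks=500, n_steps=20):
            """Estimate a track between noisy fixes by averaging biased random walks.

            fixes  : (N, 2) observed positions (km, local planar frame)
            errors : (N,) 1-sigma accuracy of each fix (km), e.g. by ARGOS class
            Returns an (N-1, n_steps, 2) array of averaged intermediate positions.
            """
            segments = []
            for k in range(len(fixes) - 1):
                # Perturb the segment endpoints according to their reported accuracy.
                starts = fixes[k] + rng.normal(0, errors[k], (n_walks, 2))
                ends = fixes[k + 1] + rng.normal(0, errors[k + 1], (n_walks, 2))
                pos, path = starts.copy(), []
                for s in range(n_steps):
                    remaining = n_steps - s
                    drift = (ends - pos) / remaining   # bias toward the forward endpoint
                    pos = pos + drift + rng.normal(0, 0.5, (n_walks, 2))  # assumed walk noise
                    path.append(pos.mean(axis=0))      # bootstrap average
                segments.append(np.array(path))
            return np.array(segments)

        fixes = np.array([[0.0, 0.0], [30.0, 10.0], [55.0, 40.0]])
        errors = np.array([8.0, 15.0, 4.0])            # ARGOS-class-like accuracies, assumed
        track = bootstrap_track(fixes, errors)
        print("estimated track shape:", track.shape)   # (segments, steps, xy)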

  14. Ensembles modeling approach to study Climate Change impacts on Wheat

    Science.gov (United States)

    Ahmed, Mukhtar; Stöckle, Claudio O.; Nelson, Roger; Higgins, Stewart

    2017-04-01

    Simulations of crop yield under climate variability are subject to uncertainties, and quantification of such uncertainties is essential for effective use of projected results in adaptation and mitigation strategies. In this study we evaluated the uncertainties related to crop-climate models using five crop growth simulation models (CropSyst, APSIM, DSSAT, STICS and EPIC) and 14 general circulation models (GCMs) for two representative concentration pathways (RCP 4.5 and RCP 8.5, i.e. radiative forcing of 4.5 and 8.5 W m-2) in the Pacific Northwest (PNW), USA. The aim was to assess how accurately different process-based crop models could estimate winter wheat growth, development and yield. First, all models were calibrated for high rainfall, medium rainfall, low rainfall and irrigated sites in the PNW using 1979-2010 as the baseline period. Response variables were related to farm management and soil properties, and included crop phenology, leaf area index (LAI), biomass and grain yield of winter wheat. All five models were run from 2000 to 2100 using the 14 GCMs and 2 RCPs to evaluate the effect of future climate (rainfall, temperature and CO2) on winter wheat phenology, LAI, biomass, grain yield and harvest index. Simulated time to flowering and maturity was reduced in all models except EPIC, with some level of uncertainty. All models generally predicted an increase in biomass and grain yield under elevated CO2, but this effect was more prominent under rainfed conditions than under irrigation. However, there was uncertainty in the simulation of crop phenology, biomass and grain yield under the 14 GCMs during the three prediction periods (2030, 2050 and 2070). We concluded that to improve accuracy and consistency in simulating wheat growth dynamics and yield under a changing climate, a multimodel ensemble approach should be used.

  15. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    The lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans, including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in planning time and iterations with vector-model-supported optimization, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration

  16. Vector-model-supported approach in prostate plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)

    2017-07-01

    The lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans, including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in planning time and iterations with vector-model-supported optimization, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration
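
    The retrieval step at the heart of the vector-model-supported approach can be pictured as a nearest-neighbour search over case feature vectors; the features, similarity measure and stored planning parameters below are invented placeholders, not the authors' DICOM feature set.

        import numpy as np

        rng = np.random.default_rng(0)

        # Reference database: one feature vector per previously treated case (standing
        # in for structural/physiologic features extracted from DICOM), plus the
        # planning parameters that produced its approved plan. All values invented.
        n_cases, n_features = 100, 12
        db_features = rng.standard_normal((n_cases, n_features))
        db_params = [{"case": i, "priority_ptv": 80 + i % 10} for i in range(n_cases)]

        def retrieve(test_vec, features):
            """Return the index of the most similar reference case (cosine similarity)."""
            f = features / np.linalg.norm(features, axis=1, keepdims=True)
            t = test_vec / np.linalg.norm(test_vec)
            return int(np.argmax(f @ t))

        test_case = rng.standard_normal(n_features)
        best = retrieve(test_case, db_features)
        # Seed the optimizer with the retrieved parameters instead of trial-and-error.
        print("start optimization from reference case:", db_params[best])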

  17. A NEW APPROACH OF DIGITAL BRIDGE SURFACE MODEL GENERATION

    Directory of Open Access Journals (Sweden)

    H. Ju

    2012-07-01

    Full Text Available Bridge areas present difficulties for orthophoto generation, and to avoid “collapsed” bridges in the orthoimage, operator assistance is required to create the precise DBM (Digital Bridge Model), which is subsequently used for the orthoimage generation. In this paper, a new approach of DBM generation, based on fusing LiDAR (Light Detection And Ranging) data and aerial imagery, is proposed. No precise exterior orientation of the aerial image is required for the DBM generation. First, a coarse DBM is produced from LiDAR data. Then, a robust co-registration between LiDAR intensity and the aerial image using the orientation constraint is performed. The coarse-to-fine hybrid co-registration approach includes LPFFT (Log-Polar Fast Fourier Transform), Harris corners, PDF (Probability Density Function) feature descriptor mean-shift matching, and RANSAC (RANdom SAmple Consensus) as main components. After that, the bridge ROI (Region Of Interest) from the LiDAR data domain is projected to the aerial image domain as the ROI in the aerial image. Hough transform linear features are extracted in the aerial image ROI. For a straight bridge, a 1st order polynomial function is used, whereas for a curved bridge, a 2nd order polynomial function is used to fit the endpoints of the Hough linear features. The last step is the transformation of the smooth bridge boundaries from the aerial image back to the LiDAR data domain and merging them with the coarse DBM. Based on our experiments, this new approach is capable of providing a precise DBM, which can be further merged with a DTM (Digital Terrain Model) derived from LiDAR data to obtain a precise DSM (Digital Surface Model). Such a precise DSM can be used to improve the orthophoto product quality.

  18. Data mining approach to model the diagnostic service management.

    Science.gov (United States)

    Lee, Sun-Mi; Lee, Ae-Kyung; Park, Il-Su

    2006-01-01

    Korea has a National Health Insurance Program operated by the government-owned National Health Insurance Corporation, and diagnostic services are provided every two years for the insured and their family members. Developing a customer relationship management (CRM) system using data mining technology would be useful to improve the performance of diagnostic service programs. Under these circumstances, this study developed a model for diagnostic service management that takes into account the characteristics of subjects, using a data mining approach. This study could be further used to develop an automated CRM system contributing to an increase in the rate of receiving diagnostic services.

  19. Convex models and probabilistic approach of nonlinear fatigue failure

    International Nuclear Information System (INIS)

    Qiu Zhiping; Lin Qiang; Wang Xiaojun

    2008-01-01

    This paper is concerned with the nonlinear fatigue failure problem with uncertainties in structural systems. In the present study, in order to solve the nonlinear problem by convex models, the theory of ellipsoidal algebra, together with ideas from interval analysis, is applied. In terms of the inclusion monotonic property of ellipsoidal functions, the nonlinear fatigue failure problem with uncertainties can be solved. A numerical example of a 25-bar truss structure is given to illustrate the efficiency of the presented method in comparison with the probabilistic approach

  20. Algebraic approach to small-world network models

    Science.gov (United States)

    Rudolph-Lilith, Michelle; Muller, Lyle E.

    2014-01-01

    We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
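
    A numerical counterpart to these analytic results, computed with networkx on the undirected Watts-Strogatz graph (the paper treats the directed variant algebraically through its adjacency matrix):

        import networkx as nx
        import numpy as np

        # Undirected Watts-Strogatz graph as a numerical stand-in for the paper's
        # directed variant; n nodes, k nearest neighbours, rewiring probability p.
        n, k, p = 1000, 6, 0.1
        G = nx.watts_strogatz_graph(n, k, p, seed=0)

        A = nx.to_numpy_array(G)                    # adjacency matrix
        print("mean degree      :", A.sum(axis=1).mean())
        print("clustering coeff :", nx.average_clustering(G))
        print("asymmetry index  :", np.abs(A - A.T).sum() / 2)  # 0 for undirected graphs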

  1. New Approaches in Reusable Booster System Life Cycle Cost Modeling

    Science.gov (United States)

    Zapata, Edgar

    2013-01-01

    This paper presents the results of a 2012 life cycle cost (LCC) study of hybrid Reusable Booster Systems (RBS) conducted by NASA Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC analysis, building on past work where applicable, but emphasizing the integration of new approaches in life cycle cost estimation. Specifically, the inclusion of industry processes/practices and indirect costs was a new and significant part of the analysis. The focus of LCC estimation has traditionally been from the perspective of technology, design characteristics, and related factors such as reliability. Technology has informed the cost related support to decision makers interested in risk and budget insight. This traditional emphasis on technology occurs even though it is well established that complex aerospace systems costs are mostly about indirect costs, with likely only partial influence in these indirect costs being due to the more visible technology products. Organizational considerations, processes/practices, and indirect costs are traditionally derived ("wrapped") only by relationship to tangible product characteristics. This traditional approach works well as long as it is understood that no significant changes, and by relation no significant improvements, are being pursued in the area of either the government acquisition or industry's indirect costs. In this sense, then, most launch systems cost models ignore most costs. The alternative was implemented in this LCC study, whereby the approach considered technology and process/practices in balance, with as much detail for one as the other. This RBS LCC study has avoided point-designs, for now, instead emphasizing exploring the trade-space of potential technology advances joined with potential process/practice advances. Given the range of decisions, and all their combinations, it was necessary to create a model of the original model

  2. New Approaches in Reusable Booster System Life Cycle Cost Modeling

    Science.gov (United States)

    Zapata, Edgar

    2013-01-01

    This paper presents the results of a 2012 life cycle cost (LCC) study of hybrid Reusable Booster Systems (RBS) conducted by NASA Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC analysis, building on past work where applicable, but emphasizing the integration of new approaches in life cycle cost estimation. Specifically, the inclusion of industry processes/practices and indirect costs was a new and significant part of the analysis. The focus of LCC estimation has traditionally been from the perspective of technology, design characteristics, and related factors such as reliability. Technology has informed the cost related support to decision makers interested in risk and budget insight. This traditional emphasis on technology occurs even though it is well established that complex aerospace systems costs are mostly about indirect costs, with likely only partial influence in these indirect costs being due to the more visible technology products. Organizational considerations, processes/practices, and indirect costs are traditionally derived ("wrapped") only by relationship to tangible product characteristics. This traditional approach works well as long as it is understood that no significant changes, and by relation no significant improvements, are being pursued in the area of either the government acquisition or industry's indirect costs. In this sense, then, most launch systems cost models ignore most costs. The alternative was implemented in this LCC study, whereby the approach considered technology and process/practices in balance, with as much detail for one as the other. This RBS LCC study has avoided point-designs, for now, instead emphasizing exploring the trade-space of potential technology advances joined with potential process/practice advances. Given the range of decisions, and all their combinations, it was necessary to create a model of the original model

  3. Modelling and simulating retail management practices: a first approach

    OpenAIRE

    Siebers, Peer-Olaf; Aickelin, Uwe; Celia, Helen; Clegg, Chris

    2010-01-01

    Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop-floor and retail performance. Despite the fact we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizati...

  4. Truncated conformal space approach to scaling Lee-Yang model

    International Nuclear Information System (INIS)

    Yurov, V.P.; Zamolodchikov, Al.B.

    1989-01-01

    A numerical approach to 2D relativistic field theories is suggested. Considering a field theory model as an ultraviolet conformal field theory perturbed by a suitable relevant scalar operator, one studies it in finite volume (on a circle). The perturbed Hamiltonian acts in the conformal field theory space of states, and its matrix elements can be extracted from the conformal field theory. Truncation of the space at a reasonable level results in a finite dimensional problem for numerical analysis. The nonunitary field theory with the ultraviolet region controlled by the minimal conformal theory M(2/5) is studied in detail. 9 refs.; 17 figs

  5. A coordination chemistry approach for modeling trace element adsorption

    International Nuclear Information System (INIS)

    Bourg, A.C.M.

    1986-01-01

    The traditional distribution coefficient, Kd, is highly dependent on the water chemistry and the surface properties of the geological system being studied and is therefore quite inappropriate for use in predictive models. Adsorption, one of the many processes included in Kd values, is described here using a coordination chemistry approach. The concept of adsorption of cationic trace elements by solid hydrous oxides can be applied to natural solids. The adsorption process is thus understood in terms of a classical complexation leading to the formation of surface (heterogeneous) ligands. Applications of this concept to some freshwater, estuarine and marine environments are discussed. (author)

  6. Design of Multithreaded Software The Entity-Life Modeling Approach

    CERN Document Server

    Sandén, Bo I

    2011-01-01

    This book assumes familiarity with threads (in a language such as Ada, C#, or Java) and introduces the entity-life modeling (ELM) design approach for certain kinds of multithreaded software. ELM focuses on "reactive systems," which continuously interact with the problem environment. These "reactive systems" include embedded systems, as well as such interactive systems as cruise controllers and automated teller machines. Part I covers two fundamentals: program-language thread support and state diagramming. These are necessary for understanding ELM and are provided primarily for reference. P

  7. Graphene growth process modeling: a physical-statistical approach

    Science.gov (United States)

    Wu, Jian; Huang, Qiang

    2014-09-01

    As a zero-bandgap semiconductor, graphene is an attractive material for a wide variety of applications such as optoelectronics. Among the various techniques developed for graphene synthesis, chemical vapor deposition on copper foils shows high potential for producing few-layer and large-area graphene. Since the fabrication of high-quality graphene sheets requires an understanding of the growth mechanisms, as well as methods of characterization and control of the grain size of graphene flakes, analytical modeling of the graphene growth process is essential for controlled fabrication. The graphene growth process starts with randomly nucleated islands that gradually develop into complex shapes, grow in size, and eventually connect together to cover the copper foil. To model this complex process, we develop a physical-statistical approach under the assumption of self-similarity during graphene growth. The growth kinetics is uncovered by separating island shapes from the area growth rate. We propose to characterize the area growth velocity using a confined exponential model, which not only has a clear physical explanation, but also fits the real data well. For the shape modeling, we develop a parametric shape model which can be well explained by the angular-dependent growth rate. This work can provide useful information for the control and optimization of the graphene growth process on Cu foil.
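
    A minimal sketch of fitting a confined exponential area-growth model of this kind to (synthetic) coverage measurements; the saturation area, rate constant and noise level below are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def confined_exponential(t, a_max, k):
            """Island area growth saturating at a_max as coverage approaches full."""
            return a_max * (1.0 - np.exp(-k * t))

        # Synthetic "measured" island areas (um^2) over time (min); values invented.
        rng = np.random.default_rng(0)
        t = np.linspace(0, 30, 40)
        area = confined_exponential(t, 120.0, 0.15) + rng.normal(0, 3.0, t.size)

        popt, pcov = curve_fit(confined_exponential, t, area, p0=[100.0, 0.1])
        print(f"A_max = {popt[0]:.1f} um^2, k = {popt[1]:.3f} 1/min")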

  8. An approach to modelling radiation damage by fast ionizing particles

    International Nuclear Information System (INIS)

    Thomas, G.E.

    1987-01-01

    The paper presents a statistical approach to modelling radiation damage in small biological structures such as enzymes, viruses, and some cells. Irreparable damage is assumed to be caused by the occurrence of ionizations within sensitive regions. For structures containing double-stranded DNA, one or more ionizations occurring within each strand of the DNA will cause inactivation; for simpler structures without double-stranded DNA a single ionization within the structure will be sufficient for inactivation. Damaging ionizations occur along tracks of primary irradiating particles or along tracks of secondary particles released at primary ionizations. An inactivation probability is derived for each damage mechanism, expressed in integral form in terms of the radius of the biological structure (assumed spherical), rate of ionization along primary tracks, and maximum energy for secondary particles. The performance of each model is assessed by comparing results from the model with those derived from data from various experimental studies extracted from the literature. For structures where a single ionization is sufficient for inactivation, the model gives qualitatively promising results; for larger more complex structures containing double-stranded DNA, the model requires further refinements. (author)

  9. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
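
    The MMA combination itself is simply the mean of the two model outputs; a sketch of the combination and the RMSD comparison, with placeholder predictions standing in for SCENARIO and HSDA:

        import numpy as np

        def rmsd(pred, obs):
            return float(np.sqrt(np.mean((pred - obs) ** 2)))

        # Placeholder predictions from two thermoregulation models and observed
        # sweat losses (g per trial); all numbers invented for illustration.
        obs = np.array([520.0, 610.0, 480.0, 700.0, 655.0])
        scenario = np.array([480.0, 650.0, 430.0, 760.0, 600.0])   # rational model
        hsda = np.array([570.0, 560.0, 540.0, 650.0, 720.0])       # empirical model

        mma = (scenario + hsda) / 2.0       # multi-model average

        for name, pred in [("SCENARIO", scenario), ("HSDA", hsda), ("MMA", mma)]:
            print(f"{name:8s} RMSD = {rmsd(pred, obs):6.1f} g")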

  10. A new approach for modeling dry deposition velocity of particles

    Science.gov (United States)

    Giardina, M.; Buffa, P.

    2018-05-01

    The dry deposition process is recognized as an important pathway among the various removal processes of pollutants in the atmosphere. Several models reported in the literature can predict the dry deposition velocity of particles of different diameters, but many of them are not capable of representing dry deposition phenomena for several categories of pollutants and deposition surfaces. Moreover, their application is valid only for specific conditions, and only if the data in that application meet all of the assumptions required of the data used to define the model. In this paper, a new dry deposition velocity model based on an electrical analogy scheme is proposed to overcome the above issues. The dry deposition velocity is evaluated by assuming that the resistances that affect the particle flux in the quasi-laminar sub-layer can be combined to take into account local features of the mutual influence of inertial impaction processes and turbulent ones. Comparisons with experimental data from the literature indicate that the proposed model captures, with good agreement, the main dry deposition phenomena for the examined environmental conditions and deposition surfaces. The proposed approach could be easily implemented within atmospheric dispersion modeling codes, efficiently addressing different deposition surfaces for several classes of particle pollutants.
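
    For orientation, the classical resistance (electrical-analogy) formulation that such models build on can be sketched as v_d = v_g + 1/(r_a + r_b + r_a r_b v_g); the neutral-stability aerodynamic resistance, the quasi-laminar resistance and the Stokes settling velocity below are textbook approximations, not the paper's new model.

        import numpy as np

        KAPPA, G, RHO_P, MU = 0.4, 9.81, 1500.0, 1.8e-5   # von Karman const., gravity,
                                                          # particle density, air viscosity

        def settling_velocity(d_p):
            """Stokes gravitational settling velocity (m/s) for diameter d_p (m)."""
            return RHO_P * d_p ** 2 * G / (18.0 * MU)

        def deposition_velocity(d_p, u_star=0.4, z_r=10.0, z0=0.03, sc=1.5, st=0.5):
            """Textbook resistance-analogy dry deposition velocity (m/s).

            r_a: aerodynamic resistance (neutral log profile, assumed);
            r_b: quasi-laminar sublayer resistance (Brownian + impaction terms,
                 with assumed Schmidt and Stokes numbers passed in directly).
            """
            v_g = settling_velocity(d_p)
            r_a = np.log(z_r / z0) / (KAPPA * u_star)
            r_b = 1.0 / (u_star * (sc ** (-2.0 / 3.0) + 10.0 ** (-3.0 / st)))
            return v_g + 1.0 / (r_a + r_b + r_a * r_b * v_g)

        for d in (0.1e-6, 1e-6, 10e-6):
            print(f"d_p = {d*1e6:5.1f} um -> v_d = {deposition_velocity(d):.2e} m/s")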

  11. Numerical approaches to model perturbation fire in turing pattern formations

    Science.gov (United States)

    Campagna, R.; Brancaccio, M.; Cuomo, S.; Mazzoleni, S.; Russo, L.; Siettos, K.; Giannino, F.

    2017-11-01

    Turing patterns have been observed in chemical, physical and biological systems described by coupled reaction-diffusion equations. Several models have been formulated proposing water as the causal mechanism of vegetation pattern formation, but this is not an exhaustive hypothesis in some natural environments. An alternative explanation has been related to the plant-soil negative feedback. In Marasco et al. [1] the authors explored the hypothesis that both mechanisms contribute to the formation of regular and irregular vegetation patterns. The mathematical model consists of three partial differential equations (PDEs) that account for a dynamic balance between biomass, water and toxic compounds. A numerical approach is also mandatory to investigate the predictions of this kind of model. In this paper we start from the mathematical model described in [1], set the model parameters such that the biomass reaches a stable spatial pattern (spots), and present preliminary studies on the occurrence of perturbing events, such as wildfire, that can affect the regularity of the biomass configuration.
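
    The flavour of such simulations can be conveyed by a minimal explicit finite-difference integration of a two-species reaction-diffusion system (Gray-Scott) in its spot-forming regime, analogous to the stable biomass spots mentioned above; the three-equation biomass-water-toxicity model of [1] is not reproduced here.

        import numpy as np

        # Gray-Scott reaction-diffusion on a periodic grid (explicit Euler); this
        # parameter set sits in the spot-forming regime. Illustrative only.
        n, du, dv, f, k, dt = 128, 0.16, 0.08, 0.035, 0.065, 1.0
        U = np.ones((n, n))
        V = np.zeros((n, n))
        U[54:74, 54:74], V[54:74, 54:74] = 0.5, 0.25    # local perturbation seeds spots

        def laplacian(Z):
            return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                    np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

        for _ in range(5000):
            uvv = U * V * V
            U += dt * (du * laplacian(U) - uvv + f * (1.0 - U))
            V += dt * (dv * laplacian(V) + uvv - (f + k) * V)

        print("pattern amplitude (std of V):", float(V.std()))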

  12. Kinetics approach to modeling of polymer additive degradation in lubricants

    Institute of Scientific and Technical Information of China (English)

    Ilya I. Kudish; Ruben G. Airapetyan; Michael J. Covitch

    2001-01-01

    A kinetics problem for a degrading polymer additive dissolved in a base stock is studied. The polymer degradation may be caused by the combination of such lubricant flow parameters as pressure, elongational strain rate, and temperature as well as lubricant viscosity and the polymer characteristics (dissociation energy, bead radius, bond length, etc.). A fundamental approach to the problem of modeling mechanically induced polymer degradation is proposed. The polymer degradation is modeled on the basis of a kinetic equation for the density of the statistical distribution of polymer molecules as a function of their molecular weight. The integro-differential kinetic equation for polymer degradation is solved numerically. The effects of pressure, elongational strain rate, temperature, and lubricant viscosity on the process of lubricant degradation are considered. The increase of pressure promotes fast degradation while the increase of temperature delays degradation. A comparison of a numerically calculated molecular weight distribution with an experimental one obtained in bench tests showed that they are in excellent agreement with each other.

  13. Global GPS Ionospheric Modelling Using Spherical Harmonic Expansion Approach

    Directory of Open Access Journals (Sweden)

    Byung-Kyu Choi

    2010-12-01

    In this study, we developed a global ionosphere model based on measurements from a worldwide network of global positioning system (GPS) receivers. About 100 international GPS reference stations were used to develop the ionospheric model, with a spherical harmonic expansion as the mathematical method. In order to produce the ionospheric total electron content (TEC) in grid form, we defined spatial resolutions of 2.0 degrees and 5.0 degrees in latitude and longitude, respectively. Two-dimensional TEC maps were constructed at one-hour intervals, a high temporal resolution compared to the global ionosphere maps produced by several analysis centers. As a result, we could detect the sudden increase in TEC by processing GPS observables from 29 October 2003, when a massive solar flare took place.
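
    A minimal sketch of a least-squares spherical-harmonic fit of station TEC values; the expansion degree, station coordinates and TEC numbers are hypothetical (the record does not state the expansion order used):

        import numpy as np
        from scipy.special import sph_harm

        def sh_basis(lat_deg, lon_deg, nmax):
            """Real spherical-harmonic basis evaluated at station locations."""
            theta = np.radians(lon_deg % 360.0)   # azimuth in [0, 2*pi)
            phi = np.radians(90.0 - lat_deg)      # colatitude in [0, pi]
            cols = []
            for n in range(nmax + 1):
                for m in range(n + 1):
                    y = sph_harm(m, n, theta, phi)
                    cols.append(y.real)
                    if m > 0:
                        cols.append(y.imag)
            return np.column_stack(cols)

        # Hypothetical vertical TEC observations (TECU) at a few stations;
        # with ~100 stations the least-squares system is well over-determined.
        lat = np.array([37.5, -33.9, 51.5, 21.3])
        lon = np.array([127.0, 18.4, -0.1, -157.8])
        vtec = np.array([25.3, 18.7, 12.1, 30.2])

        A = sh_basis(lat, lon, nmax=1)
        coeff, *_ = np.linalg.lstsq(A, vtec, rcond=None)
        print(coeff)   # SH coefficients of the fitted TEC map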

  14. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare-event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which ideas that lead to very fast estimation procedures are combined with another approach called zero-variance approximation. Together, both ideas produce a very efficient method that has the right theoretical robustness property, namely Bounded Relative Error. Some examples illustrate the results.
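
    A toy illustration of the zero-variance idea for a rare-event probability: to estimate P(X > c) with X ~ Exp(1), sampling from the exponential shifted beyond c makes the likelihood-ratio weight exactly constant, so the estimator has zero variance (a textbook special case, not the estimator from the talk):

        import numpy as np

        rng = np.random.default_rng(1)
        c = 20.0                 # rare event: p = exp(-c) ~ 2e-9

        # Proposal: X = c + Exp(1), i.e. density g(x) = exp(-(x - c)) for x > c.
        x = c + rng.exponential(1.0, size=10_000)

        # Likelihood ratio f(x)/g(x) with target f(x) = exp(-x); the indicator
        # 1{x > c} is always 1 under the proposal. The weight is the constant
        # exp(-c): the zero-variance change of measure for this problem.
        w = np.exp(-x) / np.exp(-(x - c))
        print(w.mean(), w.std(), np.exp(-c))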

  15. Comparison of different approaches of modelling in a masonry building

    Science.gov (United States)

    Saba, M.; Meloni, D.

    2017-12-01

    The present work models a simple masonry building using two different modelling methods, in order to assess their validity in terms of the evaluation of static stresses. Two of the commercial software packages most widely used for this kind of problem were chosen: 3Muri by S.T.A. Data S.r.l. and Sismicad12 by Concrete S.r.l. While the 3Muri software adopts the Frame by Macro Elements method (FME), which should be more schematic and more efficient, the Sismicad12 software uses the Finite Element Method (FEM), which guarantees accurate results at a greater computational burden. Remarkable differences in the static stresses between the two approaches have been found for such a simple structure, and a comparison and analysis of the reasons is proposed.

  16. Ordered LOGIT Model approach for the determination of financial distress.

    Science.gov (United States)

    Kinay, B

    2010-01-01

    Nowadays, as a result of global competition, numerous companies face financial distress. Predicting such problems and taking proactive measures is quite important. Thus, the prediction of crisis and financial distress is essential for revealing the financial condition of companies. In this study, financial ratios of 156 industrial firms quoted on the Istanbul Stock Exchange are used, and probabilities of financial distress are predicted by means of an ordered logit regression model. The dependent variable is composed by scaling the level of risk by means of Altman's Z Score. Thus, a model that can serve as an early warning system and predict financial distress is proposed.
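
    A minimal sketch of an ordered logit fit in the spirit described: synthetic financial ratios, a latent score binned into three ordered risk levels (standing in for the Z-Score-based scaling), and statsmodels' OrderedModel:

        import numpy as np
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(0)
        n = 156                                  # firms, as in the study
        X = rng.normal(size=(n, 2))              # hypothetical financial ratios
        latent = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.logistic(size=n)

        # Ordered risk levels 0 < 1 < 2, a stand-in for Z-Score-based scaling
        y = np.digitize(latent, bins=[-1.0, 1.0])

        res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
        print(res.params)                        # slopes and thresholds
        print(res.predict(X[:5]))                # per-class probabilities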

  17. Model-Based Systems Engineering Approach to Managing Mass Margin

    Science.gov (United States)

    Chung, Seung H.; Bayer, Todd J.; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Christopher; Lam, Doris

    2012-01-01

    When designing a flight system from concept through implementation, one of the fundamental systems engineering tasks is managing the mass margin and a mass equipment list (MEL) of the flight system. While generating a MEL and computing a mass margin is conceptually a trivial task, maintaining consistent and correct MELs and mass margins can be challenging due to the current practices of maintaining duplicate information in various forms, such as diagrams and tables, and in various media, such as files and emails. We have overcome this challenge through a model-based systems engineering (MBSE) approach within which we allow only a single source of truth. In this paper we describe the modeling patterns used to capture the single source of truth and the views that have been developed for the Europa Habitability Mission (EHM) project, a mission concept study, at the Jet Propulsion Laboratory (JPL).
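
    The roll-up itself is simple once a single source of truth exists; a minimal sketch with hypothetical component values and a hypothetical margin convention (not JPL's actual EHM model):

        from dataclasses import dataclass

        @dataclass
        class Component:
            name: str
            cbe_kg: float        # current best estimate mass
            contingency: float   # fractional growth allowance, e.g. 0.30

            @property
            def mev_kg(self) -> float:
                """Maximum expected value: CBE grown by contingency."""
                return self.cbe_kg * (1.0 + self.contingency)

        # The single source of truth: every view (MEL table, roll-up, margin)
        # is derived from this one list rather than duplicated in documents.
        mel = [Component("Battery", 42.0, 0.15),
               Component("Antenna", 18.5, 0.30)]

        allocation_kg = 80.0
        mev_total = sum(c.mev_kg for c in mel)
        margin = (allocation_kg - mev_total) / allocation_kg
        print(f"MEV total {mev_total:.1f} kg, margin {margin:.1%}")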

  18. The Use of Modeling Approach for Teaching Exponential Functions

    Science.gov (United States)

    Nunes, L. F.; Prates, D. B.; da Silva, J. M.

    2017-12-01

    This work presents a discussion related to the teaching and learning of mathematical content concerning the study of exponential functions, in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's programme (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as an educational teaching tool to contextualize the teaching-learning process of exponential functions for these students. In this sense, some simple models elaborated with the GeoGebra software were used and, for a qualitative evaluation of the investigation and the results, Didactic Engineering was used as the research methodology. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.

  19. TRIF - an intermediate approach to environmental tritium modelling

    International Nuclear Information System (INIS)

    Higgins, N.A.

    1997-01-01

    The movement of tritium through the environment, from an initial atmospheric release to selected end points in the food chain, involves a series of closely coupled and complex processes which are, consequently, difficult to model. TRIF (tritium transfer into food) provides a semi-empirical approach to this transport problem, which can be adjusted to bridge the gap between simple steady state approximations and a fully coupled model of tritium dispersion and migration (Higgins et al., 1996). TRIF provides a time-dependent description of the behaviour of tritium in the form of tritium gas (HT) and tritiated water (HTO) as it enters and moves through the food chain into pasture, crops and animals. This includes a representation of the production and movement of organically bound tritium (OBT). (Author)

  20. Modeling Electronic Circular Dichroism within the Polarizable Embedding Approach

    DEFF Research Database (Denmark)

    Nørby, Morten S; Olsen, Jógvan Magnus Haugaard; Steinmann, Casper

    2017-01-01

    We present a systematic investigation of the key components needed to model single-chromophore electronic circular dichroism (ECD) within the polarizable embedding (PE) approach. By relying on accurate forms of the embedding potential, with a particular focus on the inclusion of local field effects, we show that qualitative agreement between rotatory strength parameters calculated by full quantum mechanical calculations and the more efficient embedding calculations can be obtained. An important aspect in the computation of reliable absorption parameters is the need for conformational sampling. We show that a significant number of snapshots are needed to avoid artifacts in the calculated electronic circular dichroism parameters due to insufficient configurational sampling, thus highlighting the efficiency of the PE model.

  1. Seismic functional qualification of active mechanical and electrical components based on shaking table testing

    International Nuclear Information System (INIS)

    Jurukovski, D.

    1999-01-01

    The seismic testing for qualification of one sample of the NPP Kozloduy control panel type YKTC was carried out under Research Contract no. 8008/R1, entitled 'Seismic Functional Qualification of Active Mechanical and Electrical Components Based on Shaking Table Testing'. The tested specimen was selected by the Kozloduy NPP staff, Section 'TIA-2' (Technical Instrumentation and Automatics), while the seismic input parameters were selected by the NPP Kozloduy staff, Section HTS and SC (Hydro-Technical Systems and Engineering Structures). The applied methodology was developed by the staff of the Institute of Earthquake Engineering and Engineering Seismology. This report presents all relevant items related to the seismic qualification testing of the selected specimen, such as: description of the tested specimen, mounting conditions on the shaking table, selection of seismic input parameters and creation of seismic excitations, description of the testing equipment, explanation of the applied methodology, 'on line' and 'off line' monitoring of the tested specimen, functioning capabilities, discussion of the results and their presentation, and finally conclusions and recommendations. In this partial project report, two items are presented. The first presents a review of the existing regulations used for performing seismic and vibratory withstand testing of electro-mechanical equipment. The selection is made based on MEA, IEEE, IEC and former Soviet Union regulations. The second presents the abstracts of all the tests performed at the Institute of Earthquake Engineering and Engineering Seismology in Skopje. The selected regulations, the experience the Institute has gathered over the last seventeen years, and some theoretical and experimental research will be the basis for further investigations towards a synthesised methodology for seismic qualification of differently categorized equipment for nuclear power plants.

  2. A Workflow-Oriented Approach To Propagation Models In Heliophysics

    Directory of Open Access Journals (Sweden)

    Gabriele Pierantoni

    2014-01-01

    The Sun is responsible for the eruption of billions of tons of plasma and the generation of near light-speed particles that propagate throughout the solar system and beyond. If directed towards Earth, these events can be damaging to our technological infrastructure. Hence there is an effort to understand the cause of the eruptive events and how they propagate from Sun to Earth. However, the physics governing their propagation is not well understood, so there is a need to develop a theoretical description of their propagation, known as a Propagation Model, in order to predict when they may impact Earth. It is often difficult to define a single propagation model that correctly describes the physics of solar eruptive events, and even more difficult to implement models capable of catering for all these complexities and to validate them using real observational data. In this paper, we envisage that workflows offer both a theoretical and practical framework for a novel approach to propagation models. We define a mathematical framework that aims at encompassing the different modalities with which workflows can be used, and provide a set of generic building blocks written in the TAVERNA workflow language that users can use to build their own propagation models. Finally, we test both the theoretical model and the composite building blocks of the workflow with a real Science Use Case that was discussed during the 4th CDAW (Coordinated Data Analysis Workshop) event held by the HELIO project. We show that generic workflow building blocks can be used to construct a propagation model that successfully describes the transit of solar eruptive events toward Earth and predicts a correct Earth-impact time.

  3. A modeling approach for compounds affecting body composition.

    Science.gov (United States)

    Gennemark, Peter; Jansson-Löfmark, Rasmus; Hyberg, Gina; Wigstrand, Maria; Kakol-Palm, Dorota; Håkansson, Pernilla; Hovdal, Daniel; Brodin, Peter; Fritsch-Fredin, Maria; Antonsson, Madeleine; Ploj, Karolina; Gabrielsson, Johan

    2013-12-01

    Body composition and body mass are pivotal clinical endpoints in studies of welfare diseases. We present a combined effort of established and new mathematical models based on rigorous monitoring of energy intake (EI) and body mass in mice. Specifically, we parameterize a mechanistic turnover model based on the law of energy conservation, coupled to a drug mechanism model. Key model variables are fat-free mass (FFM) and fat mass (FM), governed by EI and energy expenditure (EE). An empirical Forbes curve relating FFM to FM was derived experimentally for female C57BL/6 mice. The Forbes curve differs from a previously reported curve for male C57BL/6 mice, and we thoroughly analyse how the choice of Forbes curve impacts model predictions. The drug mechanism function acts on EI or EE, or both. Drug mechanism parameters (two to three parameters) and system parameters (up to six free parameters) could be estimated with good precision (coefficients of variation were typically low). The model was also used to predict body mass and FM changes at different drug provocations using a similar model for man. Surprisingly, model simulations indicate that an increase in EE (e.g. 10%) was more efficient than an equal lowering of EI. Also, the relative change in body mass and FM is greater in man than in mouse at the same relative change in either EI or EE. We acknowledge that this assumes the same drug mechanism impact across the two species. A set of recommendations regarding the Forbes curve, vehicle control groups, dual action on EI and EE, and translational aspects is discussed. This quantitative approach significantly improves data interpretation, disease system understanding, safety assessment and translation across species.
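
    A minimal sketch of the energy-conservation bookkeeping with Forbes-curve partitioning; the energy densities, the logarithmic Forbes form and all numbers are illustrative assumptions, not the paper's fitted mouse values:

        # Energy-balance turnover sketch: the net flux EI - EE is partitioned
        # between FM and FFM using the slope of an assumed Forbes-type curve
        # FFM = a*ln(FM) + b, so dFFM/dFM = a/FM.
        RHO_FM, RHO_FFM = 39.5, 7.6    # tissue energy densities (kJ/g), assumed
        A_FORBES = 3.0                 # illustrative Forbes slope parameter

        def step(fm, ffm, ei, ee, dt):
            s = A_FORBES / fm                              # dFFM/dFM
            dfm = (ei - ee) / (RHO_FM + RHO_FFM * s) * dt  # energy conservation
            return fm + dfm, ffm + s * dfm

        fm, ffm = 5.0, 18.0            # g, mouse-scale illustrative values
        for _ in range(28):            # four weeks of daily steps
            fm, ffm = step(fm, ffm, ei=45.0, ee=47.0, dt=1.0)
        print(f"FM {fm:.2f} g, FFM {ffm:.2f} g, body mass {fm + ffm:.2f} g")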

  4. Reconstructing plateau icefields: Evaluating empirical and modelled approaches

    Science.gov (United States)

    Pearce, Danni; Rea, Brice; Barr, Iestyn

    2013-04-01

    Glacial landforms are widely utilised to reconstruct former glacier geometries, with the common aim of estimating the Equilibrium Line Altitudes (ELAs) and, from these, inferring palaeoclimatic conditions. Such inferences may be studied on a regional scale and used to correlate climatic gradients across large distances (e.g., across Europe). In Britain, the traditional approach uses geomorphological mapping with hand contouring to derive the palaeo-ice surface. More recently, ice-surface modelling has enabled equilibrium-profile reconstructions tuned using the geomorphology. Both methods permit derivation of palaeo-climate, but no study has compared the two methods for the same ice-mass. This is important because either approach may result in differences in glacier limits, ELAs and palaeo-climate. This research uses both methods to reconstruct a plateau icefield and quantifies the results from a cartographic and geometrical perspective. Detailed geomorphological mapping of the Tweedsmuir Hills in the Southern Uplands, Scotland (c. 320 km2), was conducted to examine the extent of Younger Dryas (YD; 12.9-11.7 cal. ka BP) glaciation. Landform evidence indicates a plateau icefield configuration of two separate ice-masses during the YD, covering areas of c. 45 km2 and 25 km2. The interpreted age is supported by new radiocarbon dating of basal stratigraphies and Terrestrial Cosmogenic Nuclide Analysis (TCNA) of in situ boulders. Both techniques produce similar configurations; however, the model results in a coarser resolution, requiring further processing if a cartographic map is required. When landforms are absent or fragmentary (e.g., trimlines and lateral moraines), as in many accumulation zones on plateau icefields, the geomorphological approach increasingly relies on extrapolation between lines of evidence and on the individual's perception of how the ice-mass ought to look. In some locations this results in an underestimation of the ice surface compared to the modelled surface, most likely due to

  5. The redshift distribution of cosmological samples: a forward modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina, E-mail: joerg.herbel@phys.ethz.ch, E-mail: tomasz.kacprzak@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch, E-mail: claudio.bruderer@phys.ethz.ch, E-mail: andrina.nicola@phys.ethz.ch [Institute for Astronomy, Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, 8093 Zürich (Switzerland)

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
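
    A minimal ABC rejection sketch of the forward-modeling loop: draw population parameters from a prior, simulate, compare summary statistics with the data, and keep the acceptable models. Here a toy Gaussian simulator stands in for the full UFig image simulation and sample cuts:

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate(theta, n=500):
            """Stand-in forward simulator (the paper uses UFig images + cuts)."""
            mu, sigma = theta
            return rng.normal(mu, sigma, n)

        data = rng.normal(0.8, 0.3, 500)                 # stand-in observations
        s_obs = np.array([data.mean(), data.std()])

        accepted = []
        for _ in range(20_000):
            theta = rng.uniform([0.0, 0.05], [2.0, 1.0])     # prior draw
            sim = simulate(theta)
            s_sim = np.array([sim.mean(), sim.std()])
            if np.linalg.norm(s_sim - s_obs) < 0.05:         # ABC tolerance
                accepted.append(theta)

        # Each accepted parameter set is an 'acceptable model'; in the paper,
        # their simulated samples yield the set of n(z) distributions.
        print(len(accepted), np.mean(accepted, axis=0))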

  8. A quality risk management model approach for cell therapy manufacturing.

    Science.gov (United States)

    Lopez, Fabio; Di Bartolo, Chiara; Piazza, Tommaso; Passannanti, Antonino; Gerlach, Jörg C; Gridelli, Bruno; Triolo, Fabio

    2010-12-01

    International regulatory authorities view risk management as an essential production need for the development of innovative, somatic cell-based therapies in regenerative medicine. The available risk management guidelines, however, provide little guidance on specific risk analysis approaches and procedures applicable in clinical cell therapy manufacturing. This raises a number of problems. Cell manufacturing is a poorly automated process, prone to operator-introduced variations, and affected by heterogeneity of the processed organs/tissues and lot-dependent variability of reagent (e.g., collagenase) efficiency. In this study, the principal challenges faced in a cell-based product manufacturing context (i.e., high dependence on human intervention and absence of reference standards for acceptable risk levels) are identified and addressed, and a risk management model approach applicable to the manufacturing of cells for clinical use is described for the first time. The use of the heuristic and pseudo-quantitative failure mode and effects analysis/failure mode, effects and criticality analysis (FMEA/FMECA) risk analysis technique, associated with direct estimation of severity, occurrence, and detection, is, in this specific context, as effective as, but more efficient than, the analytic hierarchy process. Moreover, a severity/occurrence matrix and Pareto analysis can be successfully adopted to identify priority failure modes on which to act to mitigate risks. The application of this approach to clinical cell therapy manufacturing in regenerative medicine is also discussed. © 2010 Society for Risk Analysis.
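
    A minimal sketch of the severity/occurrence/detection scoring and Pareto-style ranking mentioned above; the failure modes and 1-10 scores are hypothetical, not the paper's analysis:

        # FMEA-style risk priority numbers: RPN = severity * occurrence * detection,
        # each estimated directly on a 1-10 scale. Entries are hypothetical.
        failure_modes = {
            "operator pipetting variation": (7, 6, 4),
            "collagenase lot variability": (8, 5, 6),
            "incubator temperature drift": (6, 3, 2),
        }

        rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}

        # Pareto-style ranking: mitigate the highest-RPN modes first.
        for name, score in sorted(rpn.items(), key=lambda kv: -kv[1]):
            print(f"{score:4d}  {name}")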

  9. [New approaches in pharmacology: numerical modelling and simulation].

    Science.gov (United States)

    Boissel, Jean-Pierre; Cucherat, Michel; Nony, Patrice; Dronne, Marie-Aimée; Kassaï, Behrouz; Chabaud, Sylvie

    2005-01-01

    The complexity of pathophysiological mechanisms is beyond the capabilities of traditional approaches. Many of the decision-making problems in public health, such as initiating mass screening, are complex. Progress in genomics and proteomics, and the resulting extraordinary increase in knowledge with regard to interactions between gene expression, the environment and behaviour, the customisation of risk factors and the need to combine therapies that individually have minimal though well documented efficacy, has led doctors to raise new questions: how to optimise choice and the application of therapeutic strategies at the individual rather than the group level, while taking into account all the available evidence? This is essentially a problem of complexity with dimensions similar to the previous ones: multiple parameters with nonlinear relationships between them, varying time scales that cannot be ignored etc. Numerical modelling and simulation (in silico investigations) have the potential to meet these challenges. Such approaches are considered in drug innovation and development. They require a multidisciplinary approach, and this will involve modification of the way research in pharmacology is conducted.

  10. THE FAIRSHARES MODEL: AN ETHICAL APPROACH TO SOCIAL ENTERPRISE DEVELOPMENT?

    Directory of Open Access Journals (Sweden)

    Rory James Ridley-Duff

    2015-07-01

    This paper is based on the keynote address to the 14th International Association of Public and Non-Profit Marketing (IAPNM) conference. It explores the question "What impact do ethical values in the FairShares Model have on social entrepreneurial behaviour?" In the first part, three broad approaches to social enterprise are set out: co-operative and mutual enterprises (CMEs), social and responsible businesses (SRBs) and charitable trading activities (CTAs). The ethics that guide each approach are examined to provide a conceptual framework for examining FairShares as a case study. In the second part, findings are scrutinised in terms of the ethical values and principles that are activated when FairShares is applied in practice. The paper contributes to knowledge by giving an example of the way open-source technology (Loomio) has been used to translate 'espoused theories' into 'theories in use' to advance social enterprise development. The review of FairShares using the conceptual framework suggests there is a fourth approach based on multi-stakeholder co-operation to create 'associative democracy' in the workplace.

  11. Engineering approach to model and compute electric power markets settlements

    International Nuclear Information System (INIS)

    Kumar, J.; Petrov, V.

    2006-01-01

    Back-office accounting settlement activities are an important part of market operations in Independent System Operator (ISO) organizations. A potential way to measure ISO market design correctness is to analyze how well market price signals create incentives or penalties for creating an efficient market to achieve market design goals. Market settlement rules are an important tool for implementing price signals, which are fed back to participants via the settlement activities of the ISO. ISOs are currently faced with the challenge of high volumes of data resulting from the increasing size of markets and ever-changing market designs, as well as the growing complexity of wholesale energy settlement business rules. This paper analyzed the problem and presented a practical engineering solution using an approach based on mathematical formulation and modeling of large-scale calculations. The paper also presented critical comments on differences in settlement design approaches across electric power market designs, as well as further areas of development. The paper provided a brief introduction to wholesale energy market settlement systems and discussed problem formulation. An actual settlement implementation framework and a discussion of the results and conclusions were also presented. It was concluded that a proper engineering approach to this domain can yield satisfying results by formalizing wholesale energy settlements. Significant improvements were observed in the initial preparation phase, scoping and effort estimation, implementation and testing. 5 refs., 2 figs

  12. Different approach to the modeling of nonfree particle diffusion

    Science.gov (United States)

    Buhl, Niels

    2018-03-01

    A new approach to the modeling of nonfree particle diffusion is presented. The approach uses a general setup based on geometric graphs (networks of curves), which means that particle diffusion in anything from arrays of barriers and pore networks to general geometric domains can be considered and that the (free random walk) central limit theorem can be generalized to cover also the nonfree case. The latter gives rise to a continuum-limit description of the diffusive motion where the effect of partially absorbing barriers is accounted for in a natural and non-Markovian way that, in contrast to the traditional approach, quantifies the absorptivity of a barrier in terms of a dimensionless parameter in the range 0 to 1. The generalized theorem gives two general analytic expressions for the continuum-limit propagator: an infinite sum of Gaussians and an infinite sum of plane waves. These expressions entail the known method-of-images and Laplace eigenfunction expansions as special cases and show how the presence of partially absorbing barriers can lead to phenomena such as line splitting and band gap formation in the plane wave wave-number spectrum.
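
    A single-barrier illustration of the "infinite sum of Gaussians" form: with a barrier at the origin and absorptivity a in [0, 1], one plausible reduction weights the image term by 1 - 2a, interpolating between reflecting (a = 0) and fully absorbing (a = 1). This is an illustrative simplification under stated assumptions, not the paper's general geometric-graph construction:

        import numpy as np

        def gauss(x, t, D=1.0):
            """Free diffusion propagator."""
            return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

        def propagator(x, x0, t, a, D=1.0):
            """Free term plus one image term for a barrier at x = 0.

            The image weight 1 - 2a interpolates between a reflecting
            barrier (a = 0) and a fully absorbing one (a = 1).
            """
            return gauss(x - x0, t, D) + (1.0 - 2.0 * a) * gauss(x + x0, t, D)

        x = np.linspace(0.0, 6.0, 400)
        for a in (0.0, 0.3, 1.0):
            p = propagator(x, x0=1.0, t=0.5, a=a)
            print(a, np.trapz(p, x))   # survival probability drops as a grows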

  13. Visibility and accessibility of a component-based approach for ubiquitous computing applications: the e-gadgets case

    NARCIS (Netherlands)

    Mavrommati, I.; Kameas, A.; Markopoulos, P.

    2003-01-01

    The paper firstly presents the concepts and infrastructure developed within the extrovert-Gadgets research project, which enable end-users to realize Ubiquitous Computing applications. Then, it discusses user-acceptance considerations of the proposed concepts based on the outcome of an early

  14. Thin inclusion approach for modelling of heterogeneous conducting materials

    Science.gov (United States)

    Lavrov, Nikolay; Smirnova, Alevtina; Gorgun, Haluk; Sammes, Nigel

    Experimental data show that the heterogeneous nanostructure of solid oxide and polymer electrolyte fuel cells can be approximated as an infinite set of fiber-like or penny-shaped inclusions in a continuous medium. Inclusions can be arranged in a cluster mode and in regular or random order. In the newly proposed theoretical model of nanostructured material, most attention is paid to the small aspect ratio of structural elements as well as to some model problems of electrostatics. The proposed integral equation for the electric potential caused by the charge distributed over a single circular or elliptic cylindrical conductor of finite length, as a single unit of a nanostructured material, has been asymptotically simplified for small aspect ratio and solved numerically. The result demonstrates that the surface density changes slightly in the middle part of the thin domain and has boundary layers localized near the edges. It is anticipated that the contribution of the boundary layer solution to the surface density is significant and cannot be governed by the classic equation for a smooth linear charge. The role of the cross-section shape is also investigated. The proposed approach is sufficiently simple and robust, and allows extension to either regular or irregular systems of various inclusions. This approach can be used for the development of systems of conducting inclusions, which are commonly present in nanostructured materials used for solid oxide and polymer electrolyte membrane fuel cell (PEMFC) materials.

  15. A Modeling Approach for Plastic-Metal Laser Direct Joining

    Science.gov (United States)

    Lutey, Adrian H. A.; Fortunato, Alessandro; Ascari, Alessandro; Romoli, Luca

    2017-09-01

    Laser processing has been identified as a feasible approach to direct joining of metal and plastic components without the need for adhesives or mechanical fasteners. The present work develops a modeling approach for conduction and transmission laser direct joining of these materials, based on multi-layer optical propagation theory and numerical heat flow simulation. The scope of this methodology is to predict process outcomes based on the calculated joint-interface and upper-surface temperatures. Three representative cases are considered for model verification, including conduction joining of PBT and aluminum alloy, transmission joining of optically transparent PET and stainless steel, and transmission joining of semi-transparent PA 66 and stainless steel. Conduction direct laser joining experiments were performed on black PBT and 6082 anticorodal aluminum alloy, achieving shear loads of over 2000 N with specimens of 2 mm thickness and 25 mm width. Comparison with simulation results shows that consistently high strength is achieved where the peak interface temperature is above the plastic degradation temperature. Comparison of transmission joining simulations and published experimental results confirms these findings and highlights the influence of plastic layer optical absorption on process feasibility.

  16. A modelling approach to designing microstructures in thermal barrier coatings

    International Nuclear Information System (INIS)

    Gupta, M.; Nylen, P.; Wigren, J.

    2013-01-01

    Thermomechanical properties of Thermal Barrier Coatings (TBCs) are strongly influenced by coating defects, such as delaminations and pores, making it essential to have a fundamental understanding of microstructure-property relationships in TBCs to produce a desired coating. Object-oriented finite element analysis (OOF) has previously been shown to be an effective tool for evaluating thermal and mechanical material behaviour, as this method is capable of incorporating the inherent material microstructure as input to the model. In this work, OOF was used to predict the thermal conductivity and effective Young's modulus of TBC topcoats. A Design of Experiments (DoE) was conducted by varying selected parameters for spraying the yttria-stabilised zirconia (YSZ) topcoat. The microstructure was assessed with SEM, and image analysis was used to characterize the porosity content. The relationships between microstructural features and the properties predicted by modelling are discussed. Coatings with the microstructural features found to have the most beneficial effect on properties were then sprayed with a different spray gun so as to verify the results obtained from modelling. Characterisation of the coatings included microstructure evaluation, thermal conductivity and lifetime measurements. The modelling approach, in combination with the experiments undertaken in this study, was shown to be an effective way to achieve coatings with optimised thermo-mechanical properties.

  17. Numerical modelling of diesel spray using the Eulerian multiphase approach

    International Nuclear Information System (INIS)

    Vujanović, Milan; Petranović, Zvonimir; Edelbauer, Wilfried; Baleta, Jakov; Duić, Neven

    2015-01-01

    Highlights: • A numerical model for fuel disintegration is presented. • Fuel liquid and vapour distributions are calculated. • Good agreement with experimental data is shown for various combinations of injection and chamber pressure. - Abstract: This research investigates high-pressure diesel fuel injection into the combustion chamber by performing computational simulations using the Euler-Eulerian multiphase approach. Six diesel-like conditions were simulated in which the liquid fuel jet was injected into a pressurised inert environment (100% N2) through a 205 μm nozzle hole. The analysis focused on the liquid jet and vapour penetration, describing spatial and temporal spray evolution. For this purpose, an Eulerian multiphase model was implemented, variations of the sub-model coefficients were performed, and their impact on spray formation was investigated. The final set of sub-model coefficients was applied to all operating points. Several simulations of high-pressure diesel injections (50, 80, and 120 MPa) combined with different chamber pressures (5.4 and 7.2 MPa) were carried out and the results were compared to experimental data. The predicted results share a similar spray cloud shape for all conditions, with different vapour and liquid penetration lengths. The liquid penetration is shortened with increasing chamber pressure, whilst the vapour penetration is more pronounced at elevated injection pressure. Finally, the results showed good agreement when compared to the measured data, and yielded the correct trends for both the liquid and vapour penetrations under different operating conditions.

  18. Testing adaptive toolbox models: a Bayesian hierarchical approach.

    Science.gov (United States)

    Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan

    2013-01-01

    Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.

  19. Replacement model of city bus: A dynamic programming approach

    Science.gov (United States)

    Arifin, Dadang; Yusuf, Edhi

    2017-06-01

    This paper aims to develop a replacement model for city bus vehicles operated in Bandung City. The study is driven by real cases encountered by the Damri Company in its efforts to improve services to the public. The replacement model propounds two policy alternatives: first, to maintain or keep the vehicles; second, to replace them with new ones, taking into account operating costs, revenue, salvage value, and the acquisition cost of a new vehicle. A deterministic dynamic programming approach is used to solve the model. The optimization process was heuristically executed using empirical data from Perum Damri. The output of the model is the replacement schedule and the best policy once a vehicle has passed its economic life. Based on the results, the technical life of a bus is approximately 20 years, while the economic life is on average 9 (nine) years. This means that after a bus has been operated for 9 (nine) years, managers should consider a rejuvenation policy.
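
    A minimal deterministic dynamic-programming sketch of the keep-vs-replace recursion; the horizon, cost and salvage functions are illustrative stand-ins, not Perum Damri figures:

        from functools import lru_cache

        HORIZON = 15           # planning years
        MAX_AGE = 20           # technical life of a bus (years)
        ACQUISITION = 100.0    # price of a new vehicle (arbitrary units)

        def net_revenue(age):  # annual revenue minus operating cost
            return 30.0 - 2.5 * age

        def salvage(age):
            return max(60.0 - 5.0 * age, 5.0)

        @lru_cache(maxsize=None)
        def value(year, age):
            """Maximum total value from `year` on, holding a bus of `age`."""
            if year == HORIZON:
                return salvage(age)
            keep = (net_revenue(age) + value(year + 1, age + 1)
                    if age < MAX_AGE else float("-inf"))
            replace = (salvage(age) - ACQUISITION
                       + net_revenue(0) + value(year + 1, 1))
            return max(keep, replace)

        print(value(0, 1))     # optimal value starting with a one-year-old bus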

  20. An interdisciplinary approach to modeling tritium transfer into the environment

    International Nuclear Information System (INIS)

    Galeriu, D; Melintescu, A.

    2005-01-01

    More robust radiological assessment models are required to support the safety case for the nuclear industry. Heavy water reactors, fuel processing plants, radiopharmaceutical factories, and the future fusion reactor all have large tritium loads. While of low probability, large accidental tritium releases cannot be ignored. For Romania, which uses CANDU 600 reactors for nuclear energy, tritium is the national radionuclide. Tritium enters directly into the life cycle in many physicochemical forms. Tritiated water (HTO) is leaked from most nuclear installations but is partially converted into organically bound tritium (OBT) through plant and animal metabolic processes. Hydrogen and carbon are elemental components of major nutrients and animal tissues, and their radioisotopes must be modeled differently from those of most other radionuclides. Tritium transfer from the atmosphere to plants and its conversion into organically bound tritium strongly depend on plant characteristics, season, and weather conditions. In order to cope with this large variability and avoid expensive calibration experiments, we developed a model using knowledge of plant physiology, agrometeorology, soil sciences, hydrology, and climatology. The transfer of tritiated water to plants was modeled with a resistance approach, including sparse canopy. The canopy resistance was modeled using the Jarvis-Calvet approach, modified in order to make direct use of the canopy photosynthesis rate. The crop growth model WOFOST was used for the photosynthesis rate, both for canopy resistance and for the formation of organically bound tritium. Using this formalism, the tritium transfer parameters were directly linked to processes and parameters known from the agricultural sciences. Model predictions for tritium in wheat were within a factor of two of experimental data, without any calibration. The model was also tested on rice and soybean and can be applied to various plants and environmental conditions. For sparse canopy, the model used coupled

  1. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high-resolution glacier surface data allows the topography below current glaciers to be predicted by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations of future lakes. In order to predict the locations of future glacial lakes beneath recent glaciers in the Austrian Alps, we apply different ice thickness models using high-resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories of 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed with a focus on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers

  2. An Approach to Enforcing Clark-Wilson Model in Role-based Access Control Model

    Institute of Scientific and Technical Information of China (English)

    LIANG Bin; SHI Wenchang; SUN Yufang; SUN Bo

    2004-01-01

    Using one security model to enforce another is a prospective solution to multi-policy support. In this paper, an approach to enforcing the Clark-Wilson data integrity model in the Role-based access control (RBAC) model is proposed. An enforcement construction with great feasibility is presented. In this construction, a direct way to enforce the Clark-Wilson model is provided, and the corresponding relations among users, transformation procedures, and constrained data items are strengthened; the concepts of task and subtask are introduced to enhance support for least privilege. The proposed approach widens the applicability of RBAC. A theoretical foundation for adopting the Clark-Wilson model in an RBAC system at small cost is offered, to meet the requirements of multi-policy support and policy flexibility.

  3. Modelling the Heat Consumption in District Heating Systems using a Grey-box approach

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Madsen, Henrik

    2006-01-01

    identification of an overall model structure followed by data-based modelling, whereby the details of the model are identified. This approach is sometimes called grey-box modelling, but the specific approach used here does not require states to be specified. Overall, the paper demonstrates the power of the grey-box approach. © 2005 Elsevier B.V. All rights reserved.

  4. A Flexible Component based Access Control Architecture for OPeNDAP Services

    Science.gov (United States)

    Kershaw, Philip; Ananthakrishnan, Rachana; Cinquini, Luca; Lawrence, Bryan; Pascoe, Stephen; Siebenlist, Frank

    2010-05-01

    Network data access services such as OPeNDAP enable widespread access to data across user communities. However, without ready means to restrict access to data for such services, data providers and data owners are constrained from making their data more widely available. Even with such capability, the range of different security technologies available can make interoperability between services and user client tools a challenge. OPeNDAP is a key data access service in the infrastructure under development to support CMIP5 (Coupled Model Intercomparison Project Phase 5). The work is being carried out as part of an international collaboration including the US Earth System Grid and Curator projects and the EU-funded IS-ENES and Metafor projects. This infrastructure will bring together petabytes of climate model data and associated metadata from over twenty modelling centres around the world in a federation, with a core archive mirrored at three data centres. A security system is needed to meet the requirements of organisations responsible for model data, including the ability to restrict data access to registered users, keep them up to date with changes to data and services, audit access and protect finite computing resources. Individual organisations have existing tools and services, such as OPeNDAP, with which users in the climate research community are already familiar. The security system should overlay access control in a way which maintains the usability and ease of access to these services. The BADC (British Atmospheric Data Centre) has been working in collaboration with the Earth System Grid development team and partner organisations to develop the security architecture. OpenID and MyProxy were selected at an early stage in the ESG project to provide single sign-on capability across the federation of participating organisations. Building on the existing OPeNDAP specification, an architecture based on pluggable server-side components has been developed at the BADC

  5. Optimal design of hydrometric monitoring networks with dynamic components based on Information Theory

    Science.gov (United States)

    Alfonso, Leonardo; Chacon, Juan; Solomatine, Dimitri

    2016-04-01

    The EC-FP7 WeSenseIt project proposes the development of a Citizen Observatory of Water, aiming at enhancing environmental monitoring and forecasting with the help of citizens equipped with low-cost sensors and personal devices such as smartphones and smart umbrellas. In this regard, citizen observatories may complement the limited data availability in terms of spatial and temporal density, which is of interest, among other areas, for improving hydraulic and hydrological models. At this point, the following question arises: how can citizens, who are part of a citizen observatory, be optimally guided so that the data they collect and send are useful for improving modelling and water management? This research proposes a new methodology to identify the optimal location and timing of potential observations coming from moving sensors of hydrological variables. The methodology is based on Information Theory, which has been widely used in hydrometric monitoring design [1-4]. In particular, it uses the concept of Joint Entropy as a measure of the amount of information contained in a set of random variables, which, in our case, correspond to the time series of hydrological variables captured at given locations in a catchment. The methodology presented is a step forward in the state of the art because it solves the multiobjective optimisation problem of simultaneously finding the minimum number of informative and non-redundant sensors needed at a given time, so that the best configuration of monitoring sites is found at every particular moment in time. To this end, the existing algorithms have been improved to make them efficient. The method is applied to cases in The Netherlands, UK and Italy and proves to have great potential to complement existing in-situ monitoring networks. [1] Alfonso, L., A. Lobbrecht, and R. Price (2010a), Information theory-based approach for location of monitoring water level gauges in polders, Water Resour. Res., 46(3), W03528 [2] Alfonso, L., A
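
    A minimal sketch of greedy joint-entropy maximisation over quantised candidate time series, in the spirit of the entropy-based selection cited above (synthetic data; the project's actual multiobjective algorithm is more involved):

        import numpy as np

        def joint_entropy(rows):
            """Joint entropy (bits) of quantised series, one row per sensor."""
            _, counts = np.unique(rows, axis=1, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(7)
        data = rng.integers(0, 4, size=(6, 500))   # 6 candidate sensors, quantised

        selected = []
        for _ in range(3):                         # greedily pick 3 sensors
            remaining = [i for i in range(data.shape[0]) if i not in selected]
            best = max(remaining,
                       key=lambda i: joint_entropy(data[selected + [i], :]))
            selected.append(best)
        print(selected)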

  6. Comparison between Two Linear Supervised Learning Machines' Methods with Principle Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    Science.gov (United States)

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal-component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of AGM alkaline and acidic degradants (DG1 and DG2). The relative mean squared error of prediction (RMSEP) for the proposed models in the determination of AGM was 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, linSVR and PC-linANN, respectively. The results showed the superiority of the supervised learning machine methods over the principal-component-based methods. Besides, the results suggested that linANN is the method of choice for the determination of components in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.

  7. Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2016-08-01

    The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants. These authors constructed early warning models to prevent their occurrence. It is in this same vein that our study takes its inspiration. In particular, we have developed a banking crisis warning model based on a Bayesian Model Averaging approach. The results of this approach allowed us to identify the involvement of declining bank profitability, deterioration of the competitiveness of traditional intermediation, banking concentration and higher real interest rates in triggering banking crises.
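
    A minimal Bayesian-model-averaging sketch for a binary crisis indicator: candidate logistic models over subsets of indicators are weighted by a BIC-based approximation to their posterior probabilities and their predictions averaged (synthetic data and subsets; not the paper's specification):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        n = 300
        X = rng.normal(size=(n, 4))          # hypothetical banking indicators
        y = (X[:, 0] - X[:, 2] + rng.logistic(size=n) > 0).astype(int)

        subsets = [[0], [0, 1], [0, 2], [0, 1, 2, 3]]   # candidate models
        fits, bics = [], []
        for s in subsets:
            m = LogisticRegression().fit(X[:, s], y)
            z = X[:, s] @ m.coef_[0] + m.intercept_[0]
            ll = -np.sum(np.log1p(np.exp(-(2 * y - 1) * z)))   # log-likelihood
            bics.append((len(s) + 1) * np.log(n) - 2.0 * ll)
            fits.append(m)

        w = np.exp(-0.5 * (np.array(bics) - np.min(bics)))     # BIC weights
        w /= w.sum()
        p_crisis = sum(wi * m.predict_proba(X[:, s])[:, 1]
                       for wi, m, s in zip(w, fits, subsets))
        print(w, p_crisis[:5])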

  8. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Science.gov (United States)

    2011-01-01

    Background: The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods: Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results: The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems, and the current tools must be adapted to deal with it. Conclusions: Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing processes. A complete

  9. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Directory of Open Access Journals (Sweden)

    Freire Sergio M

    2011-10-01

    Full Text Available Abstract Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing

  10. Export of microplastics from land to sea. A modelling approach.

    Science.gov (United States)

    Siegfried, Max; Koelmans, Albert A; Besseling, Ellen; Kroeze, Carolien

    2017-12-15

    Quantifying the transport of plastic debris from river to sea is crucial for assessing the risks of plastic debris to human health and the environment. We present a global modelling approach to analyse the composition and quantity of point-source microplastic fluxes from European rivers to the sea. The model accounts for different types and sources of microplastics entering river systems via point sources. We combine information on these sources with information on sewage management and plastic retention during river transport for the largest European rivers. Sources of microplastics include personal care products, laundry, household dust and tyre and road wear particles (TRWP). Most of the modelled microplastics exported by rivers to seas are synthetic polymers from TRWP (42%) and plastic-based textiles abraded during laundry (29%). Smaller sources are synthetic polymers and plastic fibres in household dust (19%) and microbeads in personal care products (10%). Microplastic export differs widely among European rivers, as a result of differences in socio-economic development and in the technological status of sewage treatment facilities. About two-thirds of the microplastics modelled in this study flow into the Mediterranean and Black Sea. This can be explained by the relatively low microplastic removal efficiency of sewage treatment plants in the river basins draining into these two seas. Sewage treatment is generally more efficient in river basins draining into the North Sea, the Baltic Sea and the Atlantic Ocean. We use our model to explore future trends up to the year 2050. Our scenarios indicate that in the future river export of microplastics may increase in some river basins, but decrease in others. Remarkably, for many basins we calculate a reduction in river export of microplastics from point sources, mainly due to an anticipated improvement in sewage treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.
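
    The bookkeeping described above (source emissions reduced by sewage removal and in-river retention) can be sketched as follows; all numbers are illustrative placeholders, not the paper's calibrated values.

```python
# Sketch of the model's bookkeeping: point-source emissions reduced by
# sewage removal and in-river retention. All numbers are illustrative
# placeholders, not the paper's calibrated values.
sources_kg = {            # hypothetical annual emissions to sewage (kg)
    "tyre_road_wear": 1000.0,
    "laundry_fibres": 700.0,
    "household_dust": 450.0,
    "personal_care": 250.0,
}
sewage_removal = 0.90     # fraction removed by treatment (hypothetical)
river_retention = 0.30    # fraction retained during river transport

export = {s: m * (1 - sewage_removal) * (1 - river_retention)
          for s, m in sources_kg.items()}
total = sum(export.values())
for s, kg in export.items():
    print(f"{s:16s} {kg:7.1f} kg/yr  ({100 * kg / total:4.1f}%)")
```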

  11. BioModels: expanding horizons to include more modelling approaches and formats.

    Science.gov (United States)

    Glont, Mihai; Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Malik-Sheriff, Rahuman S; Chelliah, Vijayalakshmi; Le Novère, Nicolas; Hermjakob, Henning

    2018-01-04

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Mesoscopic approach to modeling elastic-plastic polycrystalline material behaviour

    International Nuclear Information System (INIS)

    Kovac, M.; Cizelj, L.

    2001-01-01

    Extreme loadings during severe accident conditions might cause failure or rupture of the pressure boundary of a reactor coolant system. Reliable estimation of the extreme deformations can be crucial to determine the consequences of such an accident. One of the important drawbacks of classical continuum mechanics is the idealization of the inhomogeneous microstructure of materials. This paper discusses a mesoscopic approach to modeling the elastic-plastic behavior of a polycrystalline material. The main idea is to divide the continuum (e.g., a polycrystalline aggregate) into a set of sub-continua (grains). The overall properties of the polycrystalline aggregate are therefore determined by the number of grains in the aggregate and the properties of randomly shaped and oriented grains. The random grain structure is modeled with Voronoi tessellation, and random orientations of the crystal lattices are assumed. The elastic behavior of monocrystal grains is assumed to be anisotropic. Crystal plasticity is used to describe the plastic response of monocrystal grains. The finite element method is used to obtain numerical solutions of the strain and stress fields. The analysis is limited to two-dimensional models. (author)

  13. Implementation of a Novel Educational Modeling Approach for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sara Ouahabi

    2014-12-01

    Full Text Available The Cloud model is cost-effective because customers pay for their actual usage without upfront costs, and scalable because it can be used more or less depending on the customers' needs. Due to its advantages, the Cloud has been increasingly adopted in many areas, such as banking, e-commerce, the retail industry, and academia. In education, the Cloud is used to manage the large volume of educational resources produced across many universities. Keeping interoperability between content in an inter-university Cloud is not always easy. The diffusion of pedagogical content on the Cloud by different e-learning institutions leads to heterogeneous content, which influences the quality of teaching that a university offers to teachers and learners. For this reason comes the idea of using IMS-LD coupled with metadata in the Cloud. This paper presents the implementation of our previous educational modeling by combining a J2EE application with the Reload editor to model heterogeneous content in the Cloud. The new approach that we followed focuses on keeping interoperability between Educational Cloud content for teachers and learners, and facilitates the tasks of identifying, reusing, sharing and adapting teaching and learning resources in the Cloud.

  14. MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH

    International Nuclear Information System (INIS)

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-01-01

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels, etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST- and DES-scale surveys. By reconstructing maps of aperture mass, the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by the limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure between the data and simulated volumes, due to cosmic variance.

  15. A Dynamic Approach to Modeling Dependence Between Human Failure Events

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory

    2015-09-01

    In practice, most HRA methods use direct dependence from THERP—the notion that error begets error, and one human failure event (HFE) may increase the likelihood of subsequent HFEs. In this paper, we approach dependence from a simulation perspective in which the effects of human errors are dynamically modeled. There are three key concepts that play into this modeling: (1) Errors are driven by performance shaping factors (PSFs). In this context, the error propagation is not a result of the presence of an HFE yielding overall increases in subsequent HFEs. Rather, it is shared PSFs that cause dependence. (2) PSFs have qualities of lag and latency. These two qualities are not currently considered in HRA methods that use PSFs. Yet, to model the effects of PSFs, it is not simply a matter of identifying the discrete effects of a particular PSF on performance. The effects of PSFs must be considered temporally, as the PSFs will have a range of effects across the event sequence. (3) Finally, there is the concept of error spilling. When PSFs are activated, they not only have temporal effects but also lateral effects on other PSFs, leading to emergent errors. This paper presents the framework for tying together these dynamic dependence concepts.
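
    A minimal sketch of the lag/latency idea: a PSF activated at one task inflates the human error probability of later tasks with an exponentially decaying effect. The baseline HEP, multiplier and decay constant are illustrative assumptions, not values from the paper.

```python
# Sketch: a PSF whose effect on the human error probability (HEP) decays
# over subsequent tasks, producing dependence without direct HFE coupling.
# Baseline HEP, multiplier, and decay constant are illustrative assumptions.
import math

base_hep = 0.01
psf_multiplier = 10.0     # e.g., extreme stress when activated
decay = 0.5               # latency: effect fades task by task

def hep(task_index, activation_task):
    """HEP at a task, given the PSF was activated at activation_task."""
    if task_index < activation_task:
        return base_hep
    lag = task_index - activation_task
    effect = 1 + (psf_multiplier - 1) * math.exp(-decay * lag)
    return min(1.0, base_hep * effect)

for t in range(6):
    print(f"task {t}: HEP = {hep(t, activation_task=2):.4f}")
```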

  16. A Bayesian approach to the modelling of α Cen A

    Science.gov (United States)

    Bazot, M.; Bourguignon, S.; Christensen-Dalsgaard, J.

    2012-12-01

    Determining the physical characteristics of a star is an inverse problem consisting of estimating the parameters of models for stellar structure and evolution, given certain observable quantities. We use a Bayesian approach to solve this problem for α Cen A, which allows us to incorporate prior information on the parameters to be estimated in order to better constrain the problem. Our strategy is based on the use of a Markov chain Monte Carlo (MCMC) algorithm to estimate the posterior probability densities of the stellar parameters: mass, age, initial chemical composition, etc. We use the stellar evolutionary code ASTEC to model the star. To constrain this model both seismic and non-seismic observations were considered. Several different strategies were tested to fit these values, using either two or five free parameters in ASTEC. We are thus able to show evidence that MCMC methods become efficient with respect to more classical grid-based strategies when the number of parameters increases. The results of our MCMC algorithm allow us to derive estimates for the stellar parameters and robust uncertainties thanks to the statistical analysis of the posterior probability densities. We are also able to compute odds for the presence of a convective core in α Cen A. When using core-sensitive seismic observational constraints, these can rise above ~40 per cent. The comparison of results to previous studies also indicates that these seismic constraints are of critical importance for our knowledge of the structure of this star.
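
    A minimal sketch of the MCMC strategy, with a toy forward model standing in for ASTEC; the "observations", priors and proposal widths are invented for illustration.

```python
# Sketch: Metropolis-Hastings estimation of stellar parameters from
# observables, in the spirit of the MCMC strategy described above.
# The forward model, priors and "observations" are toy stand-ins for ASTEC.
import numpy as np

rng = np.random.default_rng(2)
obs = np.array([1.10, 5.8])           # e.g., luminosity, age-sensitive proxy
sigma = np.array([0.05, 0.3])

def forward(theta):                   # toy stand-in for the evolution code
    mass, age = theta
    return np.array([mass ** 3.5, age * mass])

def log_post(theta):
    if not (0.5 < theta[0] < 2.0 and 0.0 < theta[1] < 12.0):  # flat priors
        return -np.inf
    r = (forward(theta) - obs) / sigma
    return -0.5 * np.dot(r, r)

theta = np.array([1.0, 5.0])
chain = []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.02, 0.15])   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain)[5000:]        # discard burn-in
print("mass = %.3f +/- %.3f" % (chain[:, 0].mean(), chain[:, 0].std()))
print("age  = %.2f +/- %.2f" % (chain[:, 1].mean(), chain[:, 1].std()))
```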

  17. Parameter Estimation of Structural Equation Modeling Using Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Dewi Kurnia Sari

    2016-05-01

    Full Text Available Leadership is a process of influencing, directing or setting an example for employees in order to achieve the objectives of the organization, and is a key element in the effectiveness of the organization. In addition to leadership style, the success of an organization or company in achieving its objectives can also be influenced by organizational commitment, the commitment created by each individual for the betterment of the organization. The purpose of this research is to obtain a model of leadership style and organizational commitment in relation to job satisfaction and employee performance, and to determine the factors that influence job satisfaction and employee performance, using SEM with a Bayesian approach. This research was conducted among 15 employees of Statistics FNI in Malang. The results showed that, in the measurement model, all indicators significantly measure their respective latent variables. In the structural model, it was concluded that Leadership Style and Organizational Commitment have a significant direct effect on Job Satisfaction, and that Job Satisfaction has a significant effect on Employee Performance. The direct influence of Leadership Style and Organizational Commitment on Employee Performance was declared insignificant.

  18. Hybrid empirical--theoretical approach to modeling uranium adsorption

    International Nuclear Information System (INIS)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W.

    2004-01-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r² = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
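
    The Freundlich fit described above can be reproduced in miniature by linear regression in log-log space, since q = K_f C^n implies log q = log K_f + n log C; the concentration data below are hypothetical, not the interbed measurements.

```python
# Sketch: fitting the Freundlich isotherm q = Kf * C**n by linear
# regression in log-log space. Concentrations and sorbed amounts are
# hypothetical, not the INEEL interbed measurements.
import numpy as np

C = np.array([0.5, 1.0, 2.0, 5.0, 10.0])      # solution conc. (hypothetical)
q = np.array([1.2, 1.9, 3.1, 5.8, 9.0])       # sorbed amount (hypothetical)

n, logKf = np.polyfit(np.log(C), np.log(q), 1)  # slope = n, intercept = log Kf
Kf = np.exp(logKf)
print(f"Freundlich fit: Kf = {Kf:.2f}, n = {n:.2f}")
# Prediction at a new concentration:
print("q(3.0) =", Kf * 3.0 ** n)
```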

  19. Experimental oligopolies modeling: A dynamic approach based on heterogeneous behaviors

    Science.gov (United States)

    Cerboni Baiardi, Lorenzo; Naimzada, Ahmad K.

    2018-05-01

    Among behavioral rules, imitation-based heuristics have received special attention in economics (see [14] and [12]). In particular, imitative behavior is considered in order to understand the evidence arising in experimental oligopolies, which reveals that the Cournot-Nash equilibrium does not emerge as the unique outcome and shows that an important component of production is observed at the competitive level (see e.g. [1,3,9] or [7,10]). Building on the pioneering approach of [2], we construct a dynamical model of linear oligopolies in which the heterogeneous decision mechanisms of players are made explicit. In particular, we consider two different types of quantity-setting players, characterized by different decision mechanisms, that coexist and operate simultaneously: agents that adaptively adjust their choices in the direction that increases their profit are embedded with imitator agents. The latter use a particular form of proportional imitation rule that takes into account awareness of the presence of strategic interactions. It is noteworthy that the Cournot-Nash outcome is a stationary state of our models. Our thesis is that the chaotic dynamics arising from a dynamical model with heterogeneous players can qualitatively reproduce the outcomes of experimental oligopolies.
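
    A minimal sketch, under illustrative parameters, of a duopoly map mixing a gradient-adjusting player with a proportional imitator; the paper's exact rules are not reproduced, but the Cournot-Nash benchmark is shown for comparison.

```python
# Sketch: a two-player quantity-setting map mixing a gradient-adjusting
# player with a proportional imitator, in the spirit of the heterogeneous
# mechanisms described above. Parameters and exact rules are illustrative.
a, b, c = 10.0, 1.0, 2.0          # inverse demand p = a - b*(q1+q2), unit cost
k, lam = 0.25, 0.5                # adjustment and imitation speeds

def profit(qi, qj):
    return (a - b * (qi + qj) - c) * qi

q1, q2 = 1.0, 3.0
for t in range(20):
    # Player 1: adjusts along its marginal profit (gradient rule)
    grad = a - c - 2 * b * q1 - b * q2
    q1_new = max(0.0, q1 + k * q1 * grad)
    # Player 2: imitates the more profitable player's output
    target = q1 if profit(q1, q2) > profit(q2, q1) else q2
    q2_new = max(0.0, q2 + lam * (target - q2))
    q1, q2 = q1_new, q2_new
print(f"long-run outputs: q1 = {q1:.3f}, q2 = {q2:.3f}")
print(f"Cournot-Nash benchmark: {(a - c) / (3 * b):.3f}")
```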

  20. Mobile phone use while driving: a hybrid modeling approach.

    Science.gov (United States)

    Márquez, Luis; Cantillo, Víctor; Arellana, Julián

    2015-05-01

    The analysis of the effects that mobile phone use produces while driving is a topic of great interest for the scientific community. There is consensus that using a mobile phone while driving increases the risk of exposure to traffic accidents. The purpose of this research is to evaluate drivers' behavior when they decide whether or not to use a mobile phone while driving. For that, a hybrid modeling approach that integrates a choice model with the latent variable "risk perception" was used. It was found that workers and individuals with the highest education level are more prone to use a mobile phone while driving than others. Also, "risk perception" is higher among individuals who have been previously fined and people who have been in an accident or almost been in an accident. It was also found that the tendency to use mobile phones while driving increases when traffic speed decreases, but decreases when the fine increases. Even though the urgency of the phone call is the most important explanatory variable in the choice model, the cost of the fine is an important attribute in order to control mobile phone use while driving. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Sulfur Deactivation of NOx Storage Catalysts: A Multiscale Modeling Approach

    Directory of Open Access Journals (Sweden)

    Rankovic N.

    2013-09-01

    Full Text Available Lean NOx Trap (LNT) catalysts, a promising solution for reducing noxious nitrogen oxide emissions from lean-burn and Diesel engines, are technologically limited by the presence of sulfur in the exhaust gas stream. Sulfur stemming from both fuels and lubricating oils is oxidized during the combustion event and mainly exists as SOx (SO2 and SO3) in the exhaust. Sulfur oxides interact strongly with the NOx trapping material of an LNT to form thermodynamically favored sulfate species, consequently leading to the blockage of NOx sorption sites and altering catalyst operation. Molecular and kinetic modeling represents a valuable tool for predicting system behavior and evaluating catalytic performance. The present paper demonstrates how fundamental ab initio calculations can be used as a valuable source for designing kinetic models developed in the IFP Exhaust library, intended for vehicle simulations. The concrete example we chose to illustrate our approach is SO3 adsorption on the model NOx storage material, BaO. SO3 adsorption was described for various sites (terraces, surface steps and kinks) and bulk for a closer description of a real storage material. Additional rate and sensitivity analyses provided a deeper understanding of the poisoning phenomena.

  2. Environmental risk assessment of biocidal products: identification of relevant components and reliability of a component-based mixture assessment.

    Science.gov (United States)

    Coors, Anja; Vollmar, Pia; Heim, Jennifer; Sacher, Frank; Kehrer, Anja

    2018-01-01

    Biocidal products are mixtures of one or more active substances (a.s.) and a broad range of formulation additives. There is regulatory guidance currently under development that will specify how the combined effects of the a.s. and any relevant formulation additives shall be considered in the environmental risk assessment of biocidal products. The default option is a component-based approach (CBA) by which the toxicity of the product is predicted from the toxicity of 'relevant' components using concentration addition. Hence, unequivocal and practicable criteria are required for identifying the 'relevant' components to ensure the protectiveness of the CBA, while avoiding the unnecessary workload that results from including by default components that do not significantly contribute to the product toxicity. The present study evaluated a set of different criteria for identifying 'relevant' components using confidential information on the composition of 21 wood preservative products. Theoretical approaches were complemented by experimentally testing the aquatic toxicity of seven selected products. For three of the seven tested products, the toxicity was underestimated for the most sensitive endpoint (green algae) by more than a factor of 2 if only the a.s. were considered in the CBA. This illustrated the necessity of including at least some additives along with the a.s. Considering additives that were deemed 'relevant' by the tentatively established criteria reduced the underestimation of toxicity for two of the three products. A lack of data for one specific additive was identified as the most likely reason for the remaining toxicity underestimation of the third product. In three other products, toxicity was overestimated by more than a factor of 2, while prediction and observation fitted well for the seventh product. Considering all additives in the prediction only increased the degree of overestimation. Supported by theoretical calculations and experimental verifications, the present
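
    The component-based prediction rests on concentration addition, 1/EC50_mix = Σ_i p_i/EC50_i; the sketch below applies it to a hypothetical three-component product and reports each component's toxic-unit share.

```python
# Sketch: concentration-addition prediction of a mixture EC50 from
# component EC50s and mass fractions. Values are hypothetical, not the
# wood-preservative data discussed above.
ec50 = {"a.s.1": 0.8, "a.s.2": 2.5, "additive_x": 12.0}   # mg/L
fraction = {"a.s.1": 0.05, "a.s.2": 0.10, "additive_x": 0.85}

# Concentration addition: 1 / EC50_mix = sum_i p_i / EC50_i
ec50_mix = 1.0 / sum(fraction[c] / ec50[c] for c in ec50)
print(f"predicted mixture EC50 = {ec50_mix:.2f} mg/L")

# Toxic units show which component drives the prediction
for c in ec50:
    tu = (fraction[c] * ec50_mix) / ec50[c]
    print(f"  {c}: toxic-unit share = {tu:.2f}")
```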

  3. A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles

    Science.gov (United States)

    Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.

    2016-12-01

    Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally-weighted multi-model mean, as they subscribe to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations in their model, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure might lead to a false model consensus, wrong estimation of uncertainty and of the effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs so that its climatological mean minimizes the RMSE compared to a given observation product. Due to the cancelling out of errors, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment we demonstrate that those regional biases persist into the future and that we are not fitting noise, thus providing improved observationally-constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble, where all the model runs are considered. Moreover, the spread is decreased well beyond that expected from the decreased ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step for robust projections for use in impact assessments, adaptation and mitigation of climate change.
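
    A minimal sketch of the selection idea: exhaustively search subsets of model runs for the one whose ensemble mean minimises RMSE against observations. The "models" and "observations" are random placeholders; a real application would use CMIP5 climatologies.

```python
# Sketch: choose the subset of model runs whose ensemble mean minimises
# RMSE against observations, rather than weighting all runs equally.
# "Models" and "observations" here are random placeholders.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_models, n_grid = 10, 500
truth = rng.standard_normal(n_grid)
runs = truth + 0.5 * rng.standard_normal((n_models, n_grid)) \
             + rng.normal(0, 0.3, (n_models, 1))       # per-model biases

def rmse(subset):
    return np.sqrt(np.mean((runs[list(subset)].mean(axis=0) - truth) ** 2))

# Exhaustive search is fine for 10 runs (2**10 subsets); larger ensembles
# would need a greedy or stochastic search.
best = min((s for k in range(1, n_models + 1)
            for s in itertools.combinations(range(n_models), k)),
           key=rmse)
print("selected runs:", best)
print(f"RMSE subset mean: {rmse(best):.3f}  "
      f"vs all-model mean: {rmse(range(n_models)):.3f}")
```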

  4. Assessing testamentary and decision-making capacity: Approaches and models.

    Science.gov (United States)

    Purser, Kelly; Rosenfeld, Tuly

    2015-09-01

    The need for better and more accurate assessments of testamentary and decision-making capacity grows as Australian society ages and incidences of mentally disabling conditions increase. Capacity is a legal determination, but one on which medical opinion is increasingly being sought. The difficulties inherent within capacity assessments are exacerbated by the ad hoc approaches adopted by legal and medical professionals based on individual knowledge and skill, as well as the numerous assessment paradigms that exist. This can negatively affect the quality of assessments, and results in confusion as to the best way to assess capacity. This article begins by assessing the nature of capacity. The most common general assessment models used in Australia are then discussed, as are the practical challenges associated with capacity assessment. The article concludes by suggesting a way forward to satisfactorily assess legal capacity given the significant ramifications of getting it wrong.

  5. Benchmarking of computer codes and approaches for modeling exposure scenarios

    International Nuclear Information System (INIS)

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  6. A Dynamic Linear Modeling Approach to Public Policy Change

    DEFF Research Database (Denmark)

    Loftis, Matthew; Mortensen, Peter Bjerre

    2017-01-01

    Theories of public policy change, despite their differences, converge on one point of strong agreement. The relationship between policy and its causes can and does change over time. This consensus yields numerous empirical implications, but our standard analytical tools are inadequate for testing them. As a result, the dynamic and transformative relationships predicted by policy theories have been left largely unexplored in time-series analysis of public policy. This paper introduces dynamic linear modeling (DLM) as a useful statistical tool for exploring time-varying relationships in public policy. The paper offers a detailed exposition of the DLM approach and illustrates its usefulness with a time series analysis of U.S. defense policy from 1957-2010. The results point the way for a new attention to dynamics in the policy process and the paper concludes with a discussion of how...

  7. Advanced Computational Modeling Approaches for Shock Response Prediction

    Science.gov (United States)

    Derkevorkian, Armen; Kolaini, Ali R.; Peterson, Lee

    2015-01-01

    Motivation: (1) The activation of pyroshock devices such as explosives, separation nuts, pin-pullers, etc. produces high-frequency transient structural response, typically from a few tens of Hz to several hundreds of kHz. (2) Lack of reliable analytical tools makes the prediction of appropriate design and qualification test levels a challenge. (3) In the past few decades, several attempts have been made to develop methodologies that predict the structural responses to shock environments. (4) Currently, there is no validated approach that is viable to predict shock environments over the full frequency range (i.e., 100 Hz to 10 kHz). Scope: (1) Model, analyze, and interpret space structural systems with complex interfaces and discontinuities, subjected to shock loads. (2) Assess the viability of a suite of numerical tools to simulate transient, non-linear solid mechanics and structural dynamics problems, such as shock wave propagation.

  8. Systems approaches to computational modeling of the oral microbiome

    Directory of Open Access Journals (Sweden)

    Dimiter V. Dimitrov

    2013-07-01

    Full Text Available Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system level of understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain on diet – oral microbiome – host mucosal transcriptome interactions. In particular we focus on the systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. These approaches have applications both for basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, and for human disease, where we can validate new mechanisms and biomarkers for the prevention and treatment of chronic disorders.

  9. Informing Public Perceptions About Climate Change: A 'Mental Models' Approach.

    Science.gov (United States)

    Wong-Parodi, Gabrielle; Bruine de Bruin, Wändi

    2017-10-01

    As the specter of climate change looms on the horizon, people will face complex decisions about whether to support climate change policies and how to cope with climate change impacts on their lives. Without some grasp of the relevant science, they may find it hard to make informed decisions. Climate experts therefore face the ethical need to effectively communicate to non-expert audiences. Unfortunately, climate experts may inadvertently violate the maxims of effective communication, which require sharing communications that are truthful, brief, relevant, clear, and tested for effectiveness. Here, we discuss the 'mental models' approach towards developing communications, which aims to help experts to meet the maxims of effective communications, and to better inform the judgments and decisions of non-expert audiences.

  10. Agents, Bayes, and Climatic Risks - a modular modelling approach

    Directory of Open Access Journals (Sweden)

    A. Haas

    2005-01-01

    Full Text Available When insurance firms, energy companies, governments, NGOs, and other agents strive to manage climatic risks, it is by no means clear what the aggregate outcome should and will be. As a framework for investigating this subject, we present the LAGOM model family. It is based on modules depicting learning social agents. For managing climate risks, our agents use second order probabilities and update them by means of a Bayesian mechanism while differing in priors and risk aversion. The interactions between these modules and the aggregate outcomes of their actions are implemented using further modules. The software system is implemented as a series of parallel processes using the CIAMn approach. It is possible to couple modules irrespective of the language they are written in, the operating system under which they are run, and the physical location of the machine.

  11. Agents, Bayes, and Climatic Risks - a modular modelling approach

    Science.gov (United States)

    Haas, A.; Jaeger, C.

    2005-08-01

    When insurance firms, energy companies, governments, NGOs, and other agents strive to manage climatic risks, it is by no means clear what the aggregate outcome should and will be. As a framework for investigating this subject, we present the LAGOM model family. It is based on modules depicting learning social agents. For managing climate risks, our agents use second order probabilities and update them by means of a Bayesian mechanism while differing in priors and risk aversion. The interactions between these modules and the aggregate outcomes of their actions are implemented using further modules. The software system is implemented as a series of parallel processes using the CIAMn approach. It is possible to couple modules irrespective of the language they are written in, the operating system under which they are run, and the physical location of the machine.

  12. A path integral approach to the Hodgkin-Huxley model

    Science.gov (United States)

    Baravalle, Roman; Rosso, Osvaldo A.; Montani, Fernando

    2017-11-01

    To understand how single neurons process sensory information, it is necessary to develop suitable stochastic models to describe the response variability of the recorded spike trains. Spikes in a given neuron are produced by the synergistic action of the voltage-dependent sodium and potassium channels that open or close their gates. The Hodgkin and Huxley (HH) equations describe the ionic mechanisms underlying the initiation and propagation of action potentials, through a set of nonlinear ordinary differential equations that approximate the electrical characteristics of the excitable cell. The path integral provides an adequate approach for computing quantities such as transition probabilities, and any stochastic system can be expressed in terms of this methodology. We use the technique of path integrals to determine the analytical solution driven by a non-Gaussian colored noise when considering the HH equations as a stochastic system. The different neuronal dynamics are investigated by estimating the path integral solutions driven by a non-Gaussian colored noise q. More specifically, we take into account the correlational structures of the complex neuronal signals not just by estimating the transition probability associated with the Gaussian approach to the stochastic HH equations, but instead considering much more subtle processes accounting for the non-Gaussian noise that could be induced by the surrounding neural network and by feedforward correlations. This allows us to investigate the underlying dynamics of the neural system when different scenarios of noise correlations are considered.
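
    For orientation, the deterministic HH core underlying the stochastic treatment can be integrated with a simple Euler scheme (standard textbook parameters); the paper's non-Gaussian-noise path-integral machinery is not reproduced here.

```python
# Sketch: Euler integration of the deterministic Hodgkin-Huxley equations
# that underlie the stochastic treatment above (standard textbook
# parameters; the paper's path-integral analysis is not redone).
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm2, mS/cm2
ENa, EK, EL = 50.0, -77.0, -54.4                # mV

def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))

dt, T, I_ext = 0.01, 50.0, 10.0                 # ms, ms, uA/cm2
V, n, m, h = -65.0, 0.32, 0.05, 0.6             # near resting state
spikes, prev_V = 0, V
for _ in range(int(T / dt)):
    I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
             + gL * (V - EL))
    V += dt * (I_ext - I_ion) / C
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    if prev_V < 0.0 <= V:                       # upward zero crossing = spike
        spikes += 1
    prev_V = V
print(f"spikes in {T:.0f} ms at I = {I_ext} uA/cm2: {spikes}")
```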

  13. A Gaussian graphical model approach to climate networks

    Energy Technology Data Exchange (ETDEWEB)

    Zerenner, Tanja, E-mail: tanjaz@uni-bonn.de [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Friederichs, Petra; Hense, Andreas [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany)

    2014-06-15

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
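
    The key distinction (partial vs. Pearson correlation) can be sketched on a toy three-node chain, where the end nodes are correlated only indirectly; the partial correlation, read off the precision matrix, correctly drops the indirect edge.

```python
# Sketch: edges from partial correlations (GGM) vs plain Pearson
# correlations, computed from the precision (inverse covariance) matrix.
# The toy data are a 3-node chain A -> B -> C, so A and C correlate only
# indirectly; the partial correlation A-C should be near zero.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
A = rng.standard_normal(n)
B = 0.8 * A + rng.standard_normal(n)
C = 0.8 * B + rng.standard_normal(n)
X = np.column_stack([A, B, C])

corr = np.corrcoef(X, rowvar=False)
P = np.linalg.inv(np.cov(X, rowvar=False))      # precision matrix
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)                     # partial correlations
np.fill_diagonal(pcorr, 1.0)

print("Pearson  A-C:", round(corr[0, 2], 3))    # sizeable (indirect link)
print("partial  A-C:", round(pcorr[0, 2], 3))   # ~0 (no direct link)
```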

  14. A Unified Approach to Model-Based Planning and Execution

    Science.gov (United States)

    Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)

    2000-01-01

    Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implication of the structure of the architecture, mainly in the area of reactivity and interaction between reactive and deliberative decision making. We conclude with related work and current status.

  15. A Gaussian graphical model approach to climate networks

    International Nuclear Information System (INIS)

    Zerenner, Tanja; Friederichs, Petra; Hense, Andreas; Lehnertz, Klaus

    2014-01-01

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.

  16. Using graph approach for managing connectivity in integrative landscape modelling

    Science.gov (United States)

    Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger

    2013-04-01

    In cultivated landscapes, many landscape elements such as field boundaries, ditches or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed, in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes), a digital landscape representation that takes into account the spatial variability and connectivity of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connections and up/downstream connections - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and graph evolution during simulation is possible (modification of connections or elements). This graph approach allows better genericity in landscape representation, management of complex connections, and easier development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers and modelers, and several graph tools are available, such as graph traversal algorithms or graph displays. The graph representation can be managed (i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format, or (ii) by using methods of the OpenFLUID-landr library, an OpenFLUID library relying on common open-source spatial libraries (ogr vector, geos topologic vector and gdal raster libraries). Open

  17. Validation of an employee satisfaction model: A structural equation model approach

    OpenAIRE

    Ophillia Ledimo; Nico Martins

    2015-01-01

    The purpose of this study was to validate an employee satisfaction model and to determine the relationships between the different dimensions of the concept, using the structural equation modelling approach (SEM). A cross-sectional quantitative survey design was used to collect data from a random sample of (n=759) permanent employees of a parastatal organisation. Data was collected using the Employee Satisfaction Survey (ESS) to measure employee satisfaction dimensions. Following the steps of ...

  18. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
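
    A minimal sketch of the screening step, using a simplified one-at-a-time elementary-effects design in the spirit of Morris; the test function, step size and trajectory layout are illustrative assumptions.

```python
# Sketch of the screening step: Morris-style elementary effects rank input
# importance cheaply before any variance-based (gPCE/Sobol) analysis.
# The test function and simplified radial design are assumptions.
import numpy as np

rng = np.random.default_rng(5)

def model(x):                       # toy model: x2 matters most, x3 not at all
    return np.sin(x[0]) + 5.0 * x[1] ** 2 + 0.0 * x[2]

k, r, delta = 3, 20, 0.1            # inputs, repetitions, step size
effects = [[] for _ in range(k)]
for _ in range(r):
    x = rng.random(k) * (1 - delta)  # random base point in [0, 1-delta]^k
    y0 = model(x)
    for i in rng.permutation(k):     # perturb one factor at a time
        x2 = x.copy()
        x2[i] += delta
        effects[i].append((model(x2) - y0) / delta)

for i, ee in enumerate(effects):
    arr = np.asarray(ee)
    # mu* (mean absolute effect) ranks importance; sigma flags nonlinearity
    print(f"x{i + 1}: mu* = {np.abs(arr).mean():.3f}   sigma = {arr.std():.3f}")
```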

  19. Numerical modeling of underground openings behavior with a viscoplastic approach

    International Nuclear Information System (INIS)

    Kleine, A.

    2007-01-01

    Nature is complex and must be approached with great modesty by engineers seeking to predict the behavior of underground openings. The engineering of industrial projects in underground settings with high economic and social stakes (Alpine mountain crossings, nuclear waste repositories) means striving for a better understanding of the behavioral mechanisms of the openings to be designed. This improvement necessarily involves better physical representativeness of macroscopic mechanisms and the provision of prediction tools suited to the expectations and needs of engineers. The calculation tools developed in this work are in step with this concern for satisfying industrial needs and developing knowledge related to the rheology of geomaterials. These developments led to the proposal of a mechanical constitutive model, suited to lightly fissured rocks comparable to continuous media, that integrates in particular the effect of time. As the central thread of this study, the problem posed by the thesis concerns precisely the delayed behavior of the rock mass in numerical modeling and its consequences for the design of underground openings. Based on physical concepts of reference, defined at several scales (macro/meso/micro), the developed constitutive model is translated into a mathematical formalism in order to be numerically implemented. The numerical applications presented as illustrations fall mainly within the framework of nuclear waste repository problems. They concern two very different configurations of underground openings: AECL's underground Canadian laboratory, excavated in the Lac du Bonnet granite, and the GMR gallery of the Bure laboratory (Meuse/Haute-Marne), dug in argillaceous rock. In these two cases, the use of this constitutive model highlights the gains to be obtained from allowing for delayed behavior as regards the accuracy of numerical predictions of tunnel behavior in the short, medium and long terms. (author)

  20. Healthcare waste management: an interpretive structural modeling approach.

    Science.gov (United States)

    Thakur, Vikas; Anbanandam, Ramesh

    2016-06-13

    Purpose - The World Health Organization identified infectious healthcare waste as a threat to the environment and human health. India's current medical waste management system has limitations, which lead to ineffective and inefficient waste handling practices. Hence, the purpose of this paper is to: first, identify the important barriers that hinder India's healthcare waste management (HCWM) systems; second, classify operational, tactical and strategic issues to discuss the managerial implications at different management levels; and third, place all barriers into four quadrants depending upon their driving and dependence power. Design/methodology/approach - India's HCWM system barriers were identified through the literature, field surveys and brainstorming sessions. Interrelationships among all the barriers were analyzed using interpretive structural modeling (ISM). Fuzzy-Matrice d'Impacts Croisés Multiplication Appliquée à un Classement (MICMAC) analysis was used to classify HCWM barriers into four groups. Findings - In total, 25 HCWM system barriers were identified and placed in 12 different ISM model hierarchy levels. Fuzzy-MICMAC analysis placed eight barriers in the second quadrant, five in the third and 12 in the fourth quadrant to define their relative ISM model importance. Research limitations/implications - The study's main limitation is that all the barriers were identified through field surveys and brainstorming sessions conducted only in Uttarakhand, a northern state of India. The problems in implementing HCWM practices may differ by region; hence, the current study needs to be replicated in different Indian states to define waste disposal strategies for hospitals. Practical implications - The model will help hospital managers and Pollution Control Boards to plan their resources accordingly and make policies targeting key performance areas. Originality/value - The study is the first attempt to identify India's HCWM system barriers and prioritize
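
    The ISM/MICMAC mechanics described above can be sketched as a transitive closure of a binary direct-influence matrix followed by driving/dependence counts; the 5-barrier adjacency matrix below is a made-up illustration, not the study's 25-barrier data.

```python
# Sketch: ISM-style reachability from a binary direct-influence matrix
# (Warshall closure), then MICMAC driving/dependence powers and quadrant
# labels. The 5-barrier adjacency matrix is a made-up illustration.
import numpy as np

A = np.array([[0, 1, 0, 0, 1],      # hypothetical "barrier i drives j"
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0],
              [0, 0, 1, 1, 0]], dtype=bool)

R = A | np.eye(5, dtype=bool)       # reachability starts with self-loops
for k in range(5):                  # Warshall transitive closure
    R = R | (R[:, k:k + 1] & R[k:k + 1, :])

driving = R.sum(axis=1)             # row sums = driving power
dependence = R.sum(axis=0)          # column sums = dependence power
mid_d, mid_p = driving.mean(), dependence.mean()
for i, (dr, dp) in enumerate(zip(driving, dependence)):
    quad = ("driver" if dr > mid_d and dp <= mid_p else
            "linkage" if dr > mid_d else
            "dependent" if dp > mid_p else "autonomous")
    print(f"barrier {i + 1}: driving={dr} dependence={dp} -> {quad}")
```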