WorldWideScience

Sample records for component-based modelling approach

  1. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan. Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  2. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    Science.gov (United States)

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.

  3. Feedback loops and temporal misalignment in component-based hydrologic modeling

    Science.gov (United States)

    Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.

    2011-12-01

    In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
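
    To make the temporal-misalignment idea above concrete, the sketch below couples two toy 1-D diffusion components through a shared interface value, advancing one component on a coarser time step and linearly interpolating its interface concentration for the other. It is a minimal illustration only, not the authors' OpenMI implementation; the grid sizes, diffusivities and class API are invented for the example.

```python
# Minimal sketch (not the authors' OpenMI code): two explicit 1-D diffusion
# components exchange interface concentrations; the sediment component runs on
# a coarser time step and its value is linearly interpolated for the water side.
import numpy as np

class DiffusionComponent:
    """One homogeneous medium on a 1-D grid (hypothetical, simplified API)."""
    def __init__(self, n, dx, D, dt, c0):
        self.c = np.full(n, c0, dtype=float)
        self.dx, self.D, self.dt, self.t = dx, D, dt, 0.0

    def step(self, interface_value):
        c = self.c
        c_new = c.copy()
        # explicit FTCS update for interior nodes
        c_new[1:-1] = c[1:-1] + self.D * self.dt / self.dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c_new[0] = interface_value        # Dirichlet value supplied by the other component
        c_new[-1] = c_new[-2]             # no-flux far boundary
        self.c, self.t = c_new, self.t + self.dt
        return self.c[1]                  # concentration just inside the interface

# water runs on a 1 s step, sediment on a 5 s step: temporally misaligned
water = DiffusionComponent(n=50, dx=0.01, D=1e-6, dt=1.0, c0=1.0)
sediment = DiffusionComponent(n=50, dx=0.01, D=1e-7, dt=5.0, c0=0.0)

sed_prev = sed_next = sediment.c[1]
t_prev = 0.0
for k in range(1, 101):                   # 100 s of simulated time on the fine step
    t = k * water.dt
    if t > sediment.t:                    # advance the coarse component when it falls behind
        sed_prev, t_prev = sed_next, sediment.t
        sed_next = sediment.step(interface_value=water.c[1])
    # linear interpolation of the sediment interface value to the water time level
    w = 0.0 if sediment.t == t_prev else (t - t_prev) / (sediment.t - t_prev)
    water.step(interface_value=(1.0 - w) * sed_prev + w * sed_next)

print("water-side interface concentration after 100 s:", round(water.c[1], 4))
```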

  4. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors

    KAUST Repository

    Simpson, Daniel

    2017-04-06

    In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.
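
    For readers unfamiliar with the construction summarised above, the block below sketches the usual presentation of penalised-complexity priors; the notation is illustrative and not copied from the paper.

```latex
% Schematic of the penalised-complexity (PC) prior construction.
% Distance of the flexible model \pi(x \mid \xi) from the base model \pi(x \mid \xi = 0):
d(\xi) = \sqrt{\,2\,\mathrm{KLD}\!\left(\pi(x \mid \xi)\,\|\,\pi(x \mid \xi = 0)\right)}
% An exponential prior on this distance penalises deviation from the base model
% at a constant rate and is transformed back to the parameter of interest:
\pi(\xi) = \lambda \exp\!\left(-\lambda\, d(\xi)\right)\left|\frac{\partial d(\xi)}{\partial \xi}\right|
% The rate \lambda is fixed through a user-defined scaling statement
% P\big(Q(\xi) > U\big) = \alpha for some interpretable transformation Q(\xi).
```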

  5. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors

    KAUST Repository

    Simpson, Daniel; Rue, Haavard; Riebler, Andrea; Martins, Thiago G.; Sørbye, Sigrunn H.

    2017-01-01

    In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.

  6. Feature-based component model for design of embedded systems

    Science.gov (United States)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, which combines software's flexibility and hardware real-time performance. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variation of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  7. Refinement and verification in component-based model-driven design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter

    2009-01-01

    Modern software development is complex as it has to deal with many different and yet related aspects of applications. In practical software engineering this is now handled by a UML-like modelling approach in which different aspects are modelled by different notations. Component-based and object-oriented ... be integrated in computer-aided software engineering (CASE) tools for adding formally supported checking, transformation and generation facilities.

  8. A systematic approach for component-based software development

    NARCIS (Netherlands)

    Guareis de farias, Cléver; van Sinderen, Marten J.; Ferreira Pires, Luis

    2000-01-01

    Component-based software development enables the construction of software artefacts by assembling prefabricated, configurable and independently evolving building blocks, called software components. This paper presents an approach for the development of component-based software artefacts. This

  9. A Combined Approach for Component-based Software Design

    NARCIS (Netherlands)

    Guareis de farias, Cléver; van Sinderen, Marten J.; Ferreira Pires, Luis; Quartel, Dick; Baldoni, R.

    2001-01-01

    Component-based software development enables the construction of software artefacts by assembling binary units of production, distribution and deployment, the so-called software components. Several approaches addressing component-based development have been proposed recently. Most of these

  10. A Component Based Approach to Scientific Workflow Management

    CERN Document Server

    Le Goff, Jean-Marie; Baker, Nigel; Brooks, Peter; McClatchey, Richard

    2001-01-01

    CRISTAL is a distributed scientific workflow system used in the manufacturing and production phases of HEP experiment construction at CERN. The CRISTAL project has studied the use of a description driven approach, using meta- modelling techniques, to manage the evolving needs of a large physics community. Interest from such diverse communities as bio-informatics and manufacturing has motivated the CRISTAL team to re-engineer the system to customize functionality according to end user requirements but maximize software reuse in the process. The next generation CRISTAL vision is to build a generic component architecture from which a complete software product line can be generated according to the particular needs of the target enterprise. This paper discusses the issues of adopting a component product line based approach and our experiences of software reuse.

  11. A component based approach to scientific workflow management

    International Nuclear Information System (INIS)

    Baker, N.; Brooks, P.; McClatchey, R.; Kovacs, Z.; LeGoff, J.-M.

    2001-01-01

    CRISTAL is a distributed scientific workflow system used in the manufacturing and production phases of HEP experiment construction at CERN. The CRISTAL project has studied the use of a description driven approach, using meta-modelling techniques, to manage the evolving needs of a large physics community. Interest from such diverse communities as bio-informatics and manufacturing has motivated the CRISTAL team to re-engineer the system to customize functionality according to end user requirements but maximize software reuse in the process. The next generation CRISTAL vision is to build a generic component architecture from which a complete software product line can be generated according to the particular needs of the target enterprise. This paper discusses the issues of adopting a component product line based approach and our experiences of software reuse

  12. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  13. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than the conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, if an inclusive component database is developed. (author)

  14. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos

    2012-07-01

    We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis. © 2012 ACM 0730-0301/2012/08-ART55.

  15. Service creation: a model-based approach

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed

  16. An ontology for component-based models of water resource systems

    Science.gov (United States)

    Elag, Mostafa; Goodall, Jonathan L.

    2013-08-01

    Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
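
    As a rough illustration of what such component metadata could look like in OWL, the sketch below uses the rdflib library with entirely hypothetical class and property names; the actual WRC vocabulary and namespace are not given in the abstract.

```python
# Hypothetical sketch of describing a model component with an OWL/RDF vocabulary.
from rdflib import Graph, Namespace, RDF, RDFS

WRC = Namespace("http://example.org/wrc#")       # invented namespace, not the real WRC IRI
g = Graph()
g.bind("wrc", WRC)

# Hypothetical vocabulary: a component class and two exchange-item properties
g.add((WRC.ModelComponent, RDF.type, RDFS.Class))
g.add((WRC.hasInputExchangeItem, RDF.type, RDF.Property))
g.add((WRC.hasInputExchangeItem, RDFS.domain, WRC.ModelComponent))
g.add((WRC.hasOutputExchangeItem, RDF.type, RDF.Property))
g.add((WRC.hasOutputExchangeItem, RDFS.domain, WRC.ModelComponent))

# Describe one concrete component so another tool can discover what it exchanges
g.add((WRC.SedimentDiffusionComponent, RDF.type, WRC.ModelComponent))
g.add((WRC.SedimentDiffusionComponent, WRC.hasInputExchangeItem, WRC.InterfaceConcentration))
g.add((WRC.SedimentDiffusionComponent, WRC.hasOutputExchangeItem, WRC.SedimentConcentration))

print(g.serialize(format="turtle"))
```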

  17. Component-Based Approach in Learning Management System Development

    Science.gov (United States)

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes a component-based approach (CBA) for learning management system development. Learning objects as components of e-learning courses and their metadata are considered. The learning management system based on CBA being developed at Riga Technical University is described, namely its architecture, elements and possibilities are…

  18. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    Science.gov (United States)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed, that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying data base, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the data base. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: For each process step along the chain, a meta-model can be set-up to predict the impact of design variations on manufacturability and part performance. Thus, the method is

  19. A combined Component-Based Approach for the Design of Distributed Software Systems

    NARCIS (Netherlands)

    Guareis de farias, Cléver; Ferreira Pires, Luis; van Sinderen, Marten J.; Quartel, Dick; Yang, H.; Gupta, S.

    2001-01-01

    Component-based software development enables the construction of software artefacts by assembling binary units of production, distribution and deployment, the so-called components. Several approaches to component-based development have been proposed recently. Most of these approaches are based on

  20. Component-based modeling of systems for automated fault tree generation

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2009-01-01

    One of the challenges in the field of automated fault tree construction is to find an efficient modeling approach that can support modeling of different types of systems without ignoring any necessary details. In this paper, we present a new system modeling approach for computer-aided fault tree generation. In this method, every system model is composed of some components and different types of flows propagating through them. Each component has a function table that describes its input-output relations. For the components having different operational states, there is also a state transition table. Each component can communicate with other components in the system only through its inputs and outputs. A trace-back algorithm is proposed that can be applied to the system model to generate the required fault trees. The system modeling approach and the fault tree construction algorithm are applied to a fire sprinkler system and the results are presented.
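
    A minimal sketch of the idea of function tables plus a trace-back expansion is given below; the component names, ports, deviations and data layout are invented for illustration and do not reproduce the authors' algorithm.

```python
# Toy illustration: each component carries a function table mapping an output
# deviation to the input deviations that can cause it; trace_back expands a top
# event into a small OR-of-ANDs fault tree by following flow connections.
import json

function_tables = {
    "Pump":  {("flow_out", "low"): [[("power_in", "off")], [("suction_in", "blocked")]]},
    "Valve": {("flow_out", "low"): [[("command_in", "closed")], [("flow_in", "low")]]},
}
# which component output feeds which component input
connections = {("Valve", "flow_in"): ("Pump", "flow_out")}

def trace_back(component, port, deviation):
    """Return a nested OR-of-ANDs structure rooted at (component, port, deviation)."""
    causes = function_tables.get(component, {}).get((port, deviation))
    if not causes:                                   # basic event: no further expansion
        return {"event": (component, port, deviation)}
    or_branches = []
    for and_set in causes:                           # each entry is one AND combination
        branch = []
        for in_port, in_dev in and_set:
            upstream = connections.get((component, in_port))
            if upstream:                             # follow the flow to the upstream component
                branch.append(trace_back(upstream[0], upstream[1], in_dev))
            else:
                branch.append({"event": (component, in_port, in_dev)})
        or_branches.append({"AND": branch})
    return {"event": (component, port, deviation), "OR": or_branches}

print(json.dumps(trace_back("Valve", "flow_out", "low"), indent=2))
```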

  1. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  2. A Component-Based Modeling and Validation Method for PLC Systems

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2014-05-01

    Programmable logic controllers (PLCs) are complex embedded systems that are widely used in industry. This paper presents a component-based modeling and validation method for PLC systems using the behavior-interaction-priority (BIP) framework. We designed a general system architecture and a component library for a type of device control system. The control software and hardware of the environment were all modeled as BIP components. System requirements were formalized as monitors. Simulation was carried out to validate the system model. A realistic example from industry, a gates control system, was employed to illustrate our strategies. We found a couple of design errors during the simulation, which helped us to improve the dependability of the original systems. The results of the experiment demonstrated the effectiveness of our approach.

  3. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component's interface and measured in terms of adaptability, degree of compose-ability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP'04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  4. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands for more functionality, at even lower prices, and with opposite constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S

  5. A component-based open hypermedia approach to integrating structure services

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Nürnberg, Peter J.; Bucka-Lassen, Dirk

    1999-01-01

    In this paper, we consider the issue of integrating different structure services within a component-based open hypermedia system. We do so by considering the task of collaborative editing, which calls for a variety of different structures traditionally supplied by different structure services. We discuss the nature of collaborative editing and how it can be supported by a combination of spatial and navigational hypermedia services. We then present a component-based open hypermedia system architecture and describe various methods of integrating different structure services provided within such an architecture. We show the advantages of integration within a component-based framework over other means of integration, highlighting some of the main advantages of the component-based approach to open hypermedia system design and implementation.

  6. Modelling the Cast Component Weight in Hot Chamber Die Casting using Combined Taguchi and Buckingham's π Approach

    Science.gov (United States)

    Singh, Rupinder

    2018-02-01

    Hot chamber (HC) die casting process is one of the most widely used commercial processes for the casting of low temperature metals and alloys. This process gives a near-net shape product with high dimensional accuracy. However, in the actual field environment the best settings of input parameters are often conflicting as the shape and size of the casting change, and one has to trade off among various output parameters like hardness, dimensional accuracy, casting defects, microstructure etc. So for online inspection of the cast component properties (without affecting the production line), weight measurement has been established as one of the cost-effective methods (as the difference in weight of sound and unsound castings reflects the possible casting defects) in the field environment. In the present work, at the first stage the effect of three input process parameters (namely: pressure at 2nd phase in HC die casting; metal pouring temperature and die opening time) has been studied for optimizing the cast component weight 'W' as output parameter in the form of a macro model based upon a Taguchi L9 OA. After this, Buckingham's π approach has been applied on the Taguchi-based macro model for the development of a micro model. This study highlights the Taguchi-Buckingham based combined approach as a case study (for conversion of a macro model into a micro model) by identification of optimum levels of input parameters (based on the Taguchi approach) and development of a mathematical model (based on Buckingham's π approach). Finally, the developed mathematical model can be used for predicting W in the HC die casting process with more flexibility. The results of the study highlight a second degree polynomial equation for predicting cast component weight in HC die casting and suggest that pressure at the 2nd phase is one of the main contributing factors for controlling the casting defects/weight of the casting.

  7. Modeling fabrication of nuclear components: An integrative approach

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.

    1996-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
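
    The first stage described above can be illustrated with a toy quadratic-assignment objective; the flow and distance numbers below are made up, and a simple random-swap local search stands in for the evolutionary heuristic used in the dissertation.

```python
# Toy sketch of the layout problem: operations are assigned to locations so that
# a flow-weighted distance (a proxy for material movement / exposure) is minimised.
import random

flow = [[0, 5, 2],       # flow[i][j]: movements between operations i and j (invented)
        [5, 0, 3],
        [2, 3, 0]]
dist = [[0, 1, 2],       # dist[a][b]: distance or exposure weight between locations a and b
        [1, 0, 1],
        [2, 1, 0]]

def cost(assign):        # assign[i] = location of operation i
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(len(assign)) for j in range(len(assign)))

random.seed(0)
best = list(range(3))
for _ in range(200):                 # swap two operations, keep the move if it improves cost
    i, j = random.sample(range(3), 2)
    cand = best[:]
    cand[i], cand[j] = cand[j], cand[i]
    if cost(cand) < cost(best):
        best = cand

print("layout:", best, "cost:", cost(best))
```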

  8. Hirabayashi, Satoshi; Kroll, Charles N.; Nowak, David J. 2011. Component-based development and sensitivity analyses of an air pollutant dry deposition model. Environmental Modelling & Software. 26(6): 804-816.

    Science.gov (United States)

    Satoshi Hirabayashi; Chuck Kroll; David Nowak

    2011-01-01

    The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...

  9. Design and Application of an Ontology for Component-Based Modeling of Water Systems

    Science.gov (United States)

    Elag, M.; Goodall, J. L.

    2012-12-01

    Many Earth system modeling frameworks have adopted an approach of componentizing models so that a large model can be assembled by linking a set of smaller model components. These model components can then be more easily reused, extended, and maintained by a large group of model developers and end users. While there has been a notable increase in component-based model frameworks in the Earth sciences in recent years, there has been less work on creating framework-agnostic metadata and ontologies for model components. Well defined model component metadata is needed, however, to facilitate sharing, reuse, and interoperability both within and across Earth system modeling frameworks. To address this need, we have designed an ontology for the water resources community named the Water Resources Component (WRC) ontology in order to advance the application of component-based modeling frameworks across water related disciplines. Here we present the design of the WRC ontology and demonstrate its application for integration of model components used in watershed management. First we show how the watershed modeling system Soil and Water Assessment Tool (SWAT) can be decomposed into a set of hydrological and ecological components that adopt the Open Modeling Interface (OpenMI) standard. Then we show how the components can be used to estimate nitrogen losses from land to surface water for the Baltimore Ecosystem study area. Results of this work are (i) a demonstration of how the WRC ontology advances the conceptual integration between components of water related disciplines by handling the semantic and syntactic heterogeneity present when describing components from different disciplines and (ii) an investigation of a methodology by which large models can be decomposed into a set of model components that can be well described by populating metadata according to the WRC ontology.

  10. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target in this work is to validate the component functions of model output between physical observations and the computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of model output. Therefore, the model validation of conditional expectations tells the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, a recalculation of the model validation metric of model output is taken with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. At last, several examples are employed to demonstrate the rationality and necessity of the methodology in case of both single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration processes are applied at single and multiple sites. • Validation and calibration processes show superiority over existing methods
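
    For orientation, the HDMR relationship the abstract relies on can be written as follows (standard notation, assuming independent inputs; not copied from the paper):

```latex
% First terms of the HDMR (ANOVA) decomposition of the model output:
Y = f(\mathbf{X}) = f_0 + \sum_i f_i(X_i) + \sum_{i<j} f_{ij}(X_i, X_j) + \dots
% with
f_0 = E[Y], \qquad f_i(X_i) = E[\,Y \mid X_i\,] - f_0
% so the first-order component functions are shifted conditional expectations;
% the area metric then compares the distributions of E[Y \mid X_i] obtained from
% the computational model and from the observations.
```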

  11. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain high statistical accuracy normalization coefficients with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighting approach, in which normalization coefficients are directly applied to the system matrix instead of a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth that required in the direct method. (author)

  12. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  13. New approaches to the modelling of multi-component fuel droplet heating and evaporation

    KAUST Repository

    Sazhin, Sergei S

    2015-02-25

    The previously suggested quasi-discrete model for heating and evaporation of complex multi-component hydrocarbon fuel droplets is described. The dependence of density, viscosity, heat capacity and thermal conductivity of liquid components on carbon numbers n and temperatures is taken into account. The effects of temperature gradient and quasi-component diffusion inside droplets are taken into account. The analysis is based on the Effective Thermal Conductivity/Effective Diffusivity (ETC/ED) model. This model is applied to the analysis of Diesel and gasoline fuel droplet heating and evaporation. The components with relatively close n are replaced by quasi-components with properties calculated as average properties of the a priori defined groups of actual components. Thus the analysis of the heating and evaporation of droplets consisting of many components is replaced with the analysis of the heating and evaporation of droplets consisting of relatively few quasi-components. It is demonstrated that for Diesel and gasoline fuel droplets the predictions of the model based on five quasi-components are almost indistinguishable from the predictions of the model based on twenty quasi-components for Diesel fuel droplets and are very close to the predictions of the model based on thirteen quasi-components for gasoline fuel droplets. It is recommended that in the cases of both Diesel and gasoline spray combustion modelling, the analysis of droplet heating and evaporation is based on as few as five quasi-components.

  14. Prioritizing the refactoring need for critical component using combined approach

    Directory of Open Access Journals (Sweden)

    Rajni Sehgal

    2018-10-01

    One of the most promising strategies for smoothing out the maintainability issues of software is refactoring. Due to lack of a proper design approach, the code often inherits some bad smells which may lead to improper functioning of the code, especially when it is subject to change and requires some maintenance. A lot of studies have been performed to optimize the refactoring strategy, which is also a very expensive process. In this paper, a component based system is considered, and a Fuzzy Multi Criteria Decision Making (FMCDM) model is proposed by combining subjective and objective weights to rank the components as per their urgency of refactoring. The Jdeodorant tool is used to detect the code smells from the individual components of a software system. The objective method uses the Entropy approach to rank the components having code smells. The subjective method uses the Fuzzy TOPSIS approach based on decision makers' judgement, to identify the criticality and dependency of these code smells on the overall software. The suggested approach is implemented on component-based software having 15 components. The constituent components are ranked based on refactoring requirements.
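
    A small sketch of the objective (Entropy) weighting step is given below; the decision-matrix numbers and criteria are invented, and the fuzzy TOPSIS subjective step is not reproduced.

```python
# Entropy weighting of code-smell criteria across components (illustrative data only).
import numpy as np

# rows: components 1..3, columns: invented criteria
# (e.g. smell count, classes affected, change frequency)
X = np.array([[12.0, 3.0, 0.40],
              [ 5.0, 8.0, 0.10],
              [ 9.0, 2.0, 0.70]])

P = X / X.sum(axis=0)                         # normalise each criterion column
k = 1.0 / np.log(X.shape[0])
E = -k * (P * np.log(P)).sum(axis=0)          # entropy per criterion (all P > 0 here)
w = (1.0 - E) / (1.0 - E).sum()               # criteria with more dispersion get larger weight

scores = (P * w).sum(axis=1)                  # simple weighted score per component
print("entropy weights:", np.round(w, 3))
print("refactoring priority, most urgent first:", (np.argsort(-scores) + 1).tolist())
```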

  15. Energetic Variational Approach to Multi-Component Fluid Flows

    Science.gov (United States)

    Kirshtein, Arkadz; Liu, Chun; Brannick, James

    2017-11-01

    In this talk I will introduce the systematic energetic variational approach for dissipative systems applied to multi-component fluid flows. These variational approaches are motivated by the seminal works of Rayleigh and Onsager. The advantage of this approach is that we have to postulate only an energy law and some kinematic relations based on fundamental physical principles. The method gives a clear, quick and consistent way to derive the PDE system. I will compare different approaches to three-component flows using the diffusive interface method and discuss their advantages and disadvantages. The diffusive interface method is an approach for modeling interactions among complex substances. The main idea behind this method is to introduce phase field labeling functions in order to model the contact line by a smooth change from one type of material to another. The work of Arkadz Kirshtein and Chun Liu is partially supported by NSF Grants DMS-141200 and DMS-1216938.

  16. Component-oriented approach to the development and use of numerical models in high energy physics

    International Nuclear Information System (INIS)

    Amelin, N.S.; Komogorov, M.Eh.

    2002-01-01

    We discuss the main concepts of a component approach to the development and use of numerical models in high energy physics. This approach is realized as the NiMax software system. The discussed concepts are illustrated by numerous examples of the system user session. In the appendix chapter we describe the physics and numerical algorithms of the model components used to perform simulation of hadronic and nuclear collisions at high energies. These components are members of hadronic application modules that have been developed with the help of the NiMax system. The given report serves as an early release of the NiMax manual, mainly for model component users.

  17. Issues and approaches in risk-based aging analyses of passive components

    International Nuclear Information System (INIS)

    Uryasev, S.P.; Samanta, P.K.; Vesely, W.E.

    1994-01-01

    In previous NRC-sponsored work a general methodology was developed to quantify the risk contributions from aging components at nuclear plants. The methodology allowed Probabilistic Risk Analyses (PRAs) to be modified to incorporate age-dependent component failure rates, and also aging maintenance models to evaluate and prioritize the aging contributions from active components using the linear aging failure rate model and empirical component aging rates. In the present paper, this methodology is extended to passive components (for example, the pipes, heat exchangers, and the vessel). The analyses of passive components bring in issues different from those of active components. Here, we specifically focus on three aspects that need to be addressed in risk-based aging prioritization of passive components

  18. Principal component approach in variance component estimation for international sire evaluation

    Directory of Open Access Journals (Sweden)

    Jakobsen Jette

    2011-05-01

    Background: The dairy cattle breeding industry is a highly globalized business, which needs internationally comparable and reliable breeding values of sires. The international Bull Evaluation Service, Interbull, was established in 1983 to respond to this need. Currently, Interbull performs multiple-trait across country evaluations (MACE) for several traits and breeds in dairy cattle and provides international breeding values to its member countries. Estimating parameters for MACE is challenging since the structure of datasets and conventional use of multiple-trait models easily result in over-parameterized genetic covariance matrices. The number of parameters to be estimated can be reduced by taking into account only the leading principal components of the traits considered. For MACE, this is readily implemented in a random regression model. Methods: This article compares two principal component approaches to estimate variance components for MACE using real datasets. The methods tested were a REML approach that directly estimates the genetic principal components (direct PC) and the so-called bottom-up REML approach (bottom-up PC), in which traits are sequentially added to the analysis and the statistically significant genetic principal components are retained. Furthermore, this article evaluates the utility of the bottom-up PC approach to determine the appropriate rank of the (co)variance matrix. Results: Our study demonstrates the usefulness of both approaches and shows that they can be applied to large multi-country models considering all concerned countries simultaneously. These strategies can thus replace the current practice of estimating the covariance components required through a series of analyses involving selected subsets of traits. Our results support the importance of using the appropriate rank in the genetic (co)variance matrix. Using too low a rank resulted in biased parameter estimates, whereas too high a rank did not result in

  19. Implementing components of the routines-based model

    OpenAIRE

    McWilliam, Robin; Fernández Valero, Rosa

    2015-01-01

    The MBR is comprised of 17 components that can generally be grouped into practices related to (a) functional assessment and intervention planning (for example, Routines-Based Interview), (b) organization of services (including location and staffing), (c) service delivery to children and families (using a consultative approach with families and teachers, integrated therapy), (d) classroom organization (for example, classroom zones), and (e) supervision and training through ch...

  20. On a model-based approach to radiation protection

    International Nuclear Information System (INIS)

    Waligorski, M.P.R.

    2002-01-01

    There is a preoccupation with linearity and absorbed dose as the basic quantifiers of radiation hazard. An alternative is the fluence approach, whereby radiation hazard may be evaluated, at least in principle, via an appropriate action cross section. In order to compare these approaches, it may be useful to discuss them as quantitative descriptors of survival and transformation-like endpoints in cell cultures in vitro - a system thought to be relevant to modelling radiation hazard. If absorbed dose is used to quantify these biological endpoints, then non-linear dose-effect relations have to be described, and, e.g. after doses of densely ionising radiation, dose-correction factors as high as 20 are required. In the fluence approach only exponential effect-fluence relationships can be readily described. Neither approach alone exhausts the scope of experimentally observed dependencies of effect on dose or fluence. Two-component models, incorporating a suitable mixture of the two approaches, are required. An example of such a model is the cellular track structure theory developed by Katz over thirty years ago. The practical consequences of modelling radiation hazard using this mixed two-component approach are discussed. (author)

  1. A participatory systems approach to modeling social, economic, and ecological components of bioenergy

    International Nuclear Information System (INIS)

    Buchholz, Thomas S.; Volk, Timothy A.; Luzadis, Valerie A.

    2007-01-01

    Availability of and access to useful energy is a crucial factor for maintaining and improving human well-being. Looming scarcities and increasing awareness of environmental, economic, and social impacts of conventional sources of non-renewable energy have focused attention on renewable energy sources, including biomass. The complex interactions of social, economic, and ecological factors among the bioenergy system components of feedstock supply, conversion technology, and energy allocation have been a major obstacle to the broader development of bioenergy systems. For widespread implementation of bioenergy to occur there is a need for an integrated approach to model the social, economic, and ecological interactions associated with bioenergy. Such models can serve as a planning and evaluation tool to help decide when, where, and how bioenergy systems can contribute to development. One approach to integrated modeling is by assessing the sustainability of a bioenergy system. The evolving nature of sustainability can be described by an adaptive systems approach using general systems principles. Discussing these principles reveals that participation of stakeholders in all components of a bioenergy system is a crucial factor for sustainability. Multi-criteria analysis (MCA) is an effective tool to implement this approach. This approach would enable decision-makers to evaluate bioenergy systems for sustainability in a participatory, transparent, timely, and informed manner

  2. Metric-based approach and tool for modeling the I and C system using Markov chains

    International Nuclear Information System (INIS)

    Butenko, Valentyna; Kharchenko, Vyacheslav; Odarushchenko, Elena; Butenko, Dmitriy

    2015-01-01

    Markov chains (MC) are well-known and widely applied in dependability and performability analysis of safety-critical systems because of their flexible representation of system component dependencies and synchronization. There are a few roadblocks to greater application of MCs: accounting for additional system components increases the model state-space and complicates analysis; and the non-numerically sophisticated user may find it difficult to decide between the variety of numerical methods to determine the most suitable and accurate for their application. Thus obtaining highly accurate and trusted modeling results becomes a nontrivial task. In this paper, we present a metric-based approach for selection of the applicable solution approach, based on the analysis of MC stiffness, decomposability, sparsity and fragmentedness. Using this selection procedure the modeler can verify earlier obtained results. The presented approach was implemented in the utility MSMC, which supports MC construction, metric-based analysis, shaping of recommendations and model solution. The model can be exported to well-known off-the-shelf mathematical packages for verification. The paper presents a case study of an industrial NPP I and C system, manufactured by RPC Radiy. The paper shows an application of the metric-based approach and the MSMC tool for dependability and safety analysis of RTS, and the procedure of results verification. (author)
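
    As an illustration of one such metric, the sketch below estimates a stiffness ratio for an invented three-state generator matrix; this is a generic check, not the MSMC utility's actual procedure.

```python
# Gauge stiffness of a CTMC generator by the spread of its nonzero eigenvalue
# magnitudes: a large spread suggests a stiff model that favours stable/implicit solvers.
import numpy as np

# Invented 3-state generator (rows sum to zero): slow degradation, fast
# switchover and moderate recovery transitions mixed in one model.
Q = np.array([[-1.0e-4,  1.0e-4,  0.0    ],
              [ 0.0,    -2.0e+2,  2.0e+2 ],
              [ 5.0e-1,  0.0,    -5.0e-1 ]])

eig = np.linalg.eigvals(Q)
nonzero = np.abs(eig)[np.abs(eig) > 1e-9]     # drop the structural zero eigenvalue
stiffness_ratio = nonzero.max() / nonzero.min()
print(f"stiffness ratio ~ {stiffness_ratio:.1e}")
```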

  3. Problem-Oriented Corporate Knowledge Base Models on the Case-Based Reasoning Approach Basis

    Science.gov (United States)

    Gluhih, I. N.; Akhmadulin, R. K.

    2017-07-01

    One of the urgent directions of enhancing the efficiency of production processes and enterprise activity management is the creation and use of corporate knowledge bases. The article suggests a concept of problem-oriented corporate knowledge bases (PO CKB), in which knowledge is arranged around possible problem situations and represents a tool for making and implementing decisions in such situations. For knowledge representation in PO CKB, the use of a case-based reasoning approach is encouraged. Under this approach, the content of a case as a knowledge base component has been defined; based on the situation tree, a PO CKB knowledge model has been developed, in which knowledge about typical situations as well as specific examples of situations and solutions has been represented. A generalized problem-oriented corporate knowledge base structural chart and possible modes of its operation have been suggested. The obtained models allow creating and using corporate knowledge bases for support of decision making and implementation, training, staff skill upgrading and analysis of the decisions taken. The universal interpretation of the terms “situation” and “solution” adopted in the work allows using the suggested models to develop problem-oriented corporate knowledge bases in different subject domains. It has been suggested to use the developed models for making corporate knowledge bases of enterprises that operate engineering systems and networks at large production facilities.

  4. Component Composition Using Feature Models

    DEFF Research Database (Denmark)

    Eichberg, Michael; Klose, Karl; Mitschke, Ralf

    2010-01-01

    interface description languages. If this variability is relevant when selecting a matching component then human interaction is required to decide which components can be bound. We propose to use feature models for making this variability explicit and (re-)enabling automatic component binding. In our approach, feature models are one part of service specifications. This makes it possible to declaratively specify which service variant is provided by a component. By referring to a service's variation points, a component that requires a specific service can list the requirements on the desired variant. Using these specifications, a component environment can then determine if a binding of the components exists that satisfies all requirements. The prototypical environment Columbus demonstrates the feasibility of the approach.
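
    The binding check described above can be illustrated with a few lines of set logic; the variation points and feature names below are hypothetical and not taken from the Columbus environment.

```python
# Minimal sketch: a providing component declares the feature selected at each
# variation point of a service's feature model; a requiring component constrains
# some of those points; the environment checks whether a binding exists.
provided = {"persistence": "file", "encryption": "aes", "compression": "none"}

required = {"persistence": {"file", "database"},   # acceptable alternatives per variation point
            "encryption":  {"aes"}}

def binding_exists(provided, required):
    """True if the provided variant satisfies every constrained variation point."""
    return all(provided.get(point) in allowed for point, allowed in required.items())

print(binding_exists(provided, required))   # True: this provider can be bound
```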

  5. Component-Based Cartoon Face Generation

    Directory of Open Access Journals (Sweden)

    Saman Sepehri Nejad

    2016-11-01

    Full Text Available In this paper, we present a cartoon face generation method built on a component-based facial feature extraction approach. Given a frontal face image as input, our proposed system proceeds in the following stages. First, face features are extracted using an extended Active Shape Model. Outlines of the components are locally modified using edge detection, template matching and Hermite interpolation. This modification enhances the diversity of the output and the accuracy of the component matching required for cartoon generation. Second, to bring in cartoon-specific features such as shadows, highlights and, especially, stylish drawing, an array of face photographs and corresponding hand-drawn cartoon faces are collected. These cartoon templates are automatically decomposed into cartoon components using our proposed method for parameterizing cartoon samples, which is fast and simple. Then, using shape matching methods, the appropriate cartoon component is selected and deformed to fit the input face. Finally, a cartoon face is rendered in a vector format using the rendering rules of the selected template. Experimental results demonstrate the effectiveness of our approach in generating life-like cartoon faces.

  6. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    International Nuclear Information System (INIS)

    Saleem, A; Ahmed, N; Salah, M; Silberschmidt, V V

    2013-01-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task, due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is obtained by aggregating the basic component models. System parameters are identified using the Finite Element technique, and the resulting model is then used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance.
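
    As a rough illustration of the aggregation idea described above (and not the paper's actual transducer model), the sketch below couples an assumed electrical drive component to a mass-spring-damper mechanical component through a voltage-to-force coupling coefficient and integrates the aggregate with a simple semi-implicit Euler scheme; all parameter values are placeholders.

```python
# Minimal sketch of "aggregate simple component models", not the paper's
# actual transducer model. The coupling coefficient Theta and the mechanical
# parameters below are illustrative placeholders.
import numpy as np

def drive_component(t, V0=100.0, f=20e3):
    """Electrical drive component: sinusoidal excitation voltage."""
    return V0 * np.sin(2 * np.pi * f * t)

def mechanical_component(x, v, force, m=0.05, c=20.0, k=4e9):
    """Mass-spring-damper component: returns acceleration of the tool tip."""
    return (force - c * v - k * x) / m

def simulate(T=2e-3, dt=1e-8, Theta=0.2):
    """Aggregate the two components and integrate with semi-implicit Euler."""
    n = int(T / dt)
    x, v = 0.0, 0.0
    amplitude = 0.0
    for i in range(n):
        t = i * dt
        force = Theta * drive_component(t)       # coupling: force ~ voltage
        a = mechanical_component(x, v, force)
        v += a * dt
        x += v * dt
        amplitude = max(amplitude, abs(x))
    return amplitude

print("peak tip displacement [m]:", simulate())
```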

  7. Residual lifetime prediction for lithium-ion battery based on functional principal component analysis and Bayesian approach

    International Nuclear Information System (INIS)

    Cheng, Yujie; Lu, Chen; Li, Tieying; Tao, Laifa

    2015-01-01

    Existing methods for predicting lithium-ion (Li-ion) battery residual lifetime mostly depend on a priori knowledge of the aging mechanism, the use of chemical or physical formulations, and analytical battery models. Such knowledge is usually difficult to obtain in practice, which restricts the application of these methods. In this study, we propose a new prediction method for Li-ion battery residual lifetime evaluation based on FPCA (functional principal component analysis) and a Bayesian approach. The proposed method utilizes FPCA to construct a nonparametric degradation model for the Li-ion battery, based on which the residual lifetime and the corresponding confidence interval can be evaluated. Furthermore, an empirical Bayes approach is utilized to achieve real-time updating of the degradation model and concurrently determine the residual lifetime distribution. Based on Bayesian updating, a more accurate prediction result and a more precise confidence interval are obtained. Experiments are performed on data provided by the NASA Ames Prognostics Center of Excellence. Results confirm that the proposed prediction method performs well in real-time battery residual lifetime prediction. - Highlights: • Capacity is considered functional and FPCA is utilized to extract more information. • No features are required, which avoids the drawbacks induced by feature extraction. • A good combination of both population and individual information. • Avoids complex aging mechanisms and accurate analytical models of batteries. • Easily applicable to different batteries for life prediction and RLD calculation.
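
    A minimal sketch of the FPCA step only is given below, assuming capacity-fade curves sampled on a common cycle grid; the NASA data handling and the empirical Bayes updating are omitted, and the synthetic curves stand in for measured data.

```python
# Sketch of the FPCA step only (Bayesian updating and the NASA data set are
# omitted). `capacity` is an assumed array: one row per battery, columns are
# capacity values sampled on a common cycle grid.
import numpy as np

rng = np.random.default_rng(0)
cycles = np.linspace(0, 500, 100)
# Synthetic stand-in for measured capacity-fade curves (Ah).
capacity = np.array([2.0 - (0.9 + 0.2 * rng.standard_normal()) * (cycles / 500) ** 1.2
                     + 0.01 * rng.standard_normal(cycles.size)
                     for _ in range(20)])

mean_curve = capacity.mean(axis=0)
centered = capacity - mean_curve

# Functional PCA on the discretized curves: eigenfunctions from the SVD.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
n_pc = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
eigenfunctions = Vt[:n_pc]               # principal modes of degradation
scores = centered @ eigenfunctions.T     # per-battery FPC scores

# A battery's degradation curve is then modeled nonparametrically as
# mean_curve + scores @ eigenfunctions; the scores are what an empirical
# Bayes step would update as new cycling data arrive.
reconstructed = mean_curve + scores @ eigenfunctions
print("retained components:", n_pc,
      "max reconstruction error:", np.max(np.abs(reconstructed - capacity)))
```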

  8. Probabilistic reasoning for assembly-based 3D modeling

    KAUST Repository

    Chaudhuri, Siddhartha

    2011-01-01

    Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling is the identification of relevant components to be presented to the user. In this paper, we introduce a probabilistic reasoning approach to this problem. Given a repository of shapes, our approach learns a probabilistic graphical model that encodes semantic and geometric relationships among shape components. The probabilistic model is used to present components that are semantically and stylistically compatible with the 3D model that is being assembled. Our experiments indicate that the probabilistic model increases the relevance of presented components. © 2011 ACM.

  9. Component- and system-level degradation modeling of digital Instrumentation and Control systems based on a Multi-State Physics Modeling Approach

    International Nuclear Information System (INIS)

    Wang, Wei; Di Maio, Francesco; Zio, Enrico

    2016-01-01

    Highlights: • A Multi-State Physics Modeling (MSPM) framework for reliability assessment is proposed. • Monte Carlo (MC) simulation is utilized to estimate the degradation state probability. • Due account is given to stochastic uncertainty and deterministic degradation progression. • The MSPM framework is applied to the reliability assessment of a digital I&C system. • Results are compared with the results obtained with a Markov Chain Model (MCM). - Abstract: A system-level degradation modeling approach is proposed for the reliability assessment of digital Instrumentation and Control (I&C) systems in Nuclear Power Plants (NPPs). At the component level, we focus on the reliability assessment of a Resistance Temperature Detector (RTD), which is an important digital I&C component used to guarantee the safe operation of NPPs. A Multi-State Physics Model (MSPM) is built to describe this component degradation progression towards failure, and Monte Carlo (MC) simulation is used to estimate the probability of sojourn in any of the previously defined degradation states, by accounting for both stochastic and deterministic processes that affect the degradation progression. The MC simulation relies on an integrated modeling of stochastic processes with deterministic aging of components, which proves fundamental for estimating the joint cumulative probability distribution of finding the component in any of the possible degradation states. The results of the application of the proposed degradation model to a digital I&C system from the literature are compared with the results obtained by a Markov Chain Model (MCM). The integrated stochastic-deterministic process proposed here to drive the MC simulation makes it viable to integrate component-level models into a system-level model that would consider inter-system and/or inter-component dependencies and uncertainties.
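
    The sketch below illustrates, under assumed states, rates and aging law (not the paper's RTD physics), how Monte Carlo simulation can estimate degradation-state probabilities when the transition rates depend on deterministic aging.

```python
# Illustrative Monte Carlo estimation of degradation-state probabilities for a
# multi-state model whose transition rates grow with a deterministic aging law.
# The states, rates and aging law are assumptions, not the paper's RTD model.
import numpy as np

rng = np.random.default_rng(1)
STATES = ["good", "degraded", "severe", "failed"]

def rate(from_state, t_years):
    """Transition rate [1/year] to the next state; grows with deterministic age."""
    base = {0: 0.05, 1: 0.10, 2: 0.20}[from_state]
    return base * (1.0 + 0.1 * t_years)      # assumed linear aging acceleration

def simulate_history(horizon=20.0, dt=0.05):
    state, t = 0, 0.0
    while t < horizon and state < 3:
        # Thin the time-inhomogeneous process with a small time step.
        if rng.random() < rate(state, t) * dt:
            state += 1
        t += dt
    return state

n_runs = 5000
final = np.array([simulate_history() for _ in range(n_runs)])
probs = np.bincount(final, minlength=4) / n_runs
for name, p in zip(STATES, probs):
    print(f"P({name} at 20 y) ~ {p:.3f}")
```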

  10. Component-Based Development of Runtime Observers in the COMDES Framework

    DEFF Research Database (Denmark)

    Guan, Wei; Li, Gang; Angelov, Christo K.

    2013-01-01

    Formal verification methods, such as exhaustive model checking, are often infeasible because of high computational complexity. Runtime observers (monitors) provide an alternative, light-weight verification method, which offers a non-exhaustive but still feasible approach to monitor system behavior against formally specified properties. This paper presents a component-based design method for runtime observers in the context of the COMDES framework—a component-based framework for distributed embedded systems and its supporting tools. Therefore, runtime verification is facilitated by model......
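
    The COMDES observer components themselves are not shown in the abstract; the following generic sketch only illustrates what a runtime observer does, as a small state machine that checks a bounded-response property over an observed event trace. The property, event names and bound are illustrative.

```python
# Generic sketch of a runtime observer (monitor), not the COMDES implementation:
# it checks the bounded-response property "every 'request' is followed by a
# 'response' within `bound` subsequent events" over an observed event trace.
class BoundedResponseObserver:
    def __init__(self, bound=3):
        self.bound = bound
        self.pending = None      # events elapsed since an unanswered request
        self.violated = False

    def step(self, event):
        """Feed one observed event; returns False once the property is violated."""
        if self.violated:
            return False
        if self.pending is not None:
            if event == "response":
                self.pending = None
            else:
                self.pending += 1
                if self.pending > self.bound:
                    self.violated = True
                    return False
        if event == "request" and self.pending is None:
            self.pending = 0
        return True

trace = ["idle", "request", "compute", "response",
         "request", "idle", "idle", "idle", "idle"]
obs = BoundedResponseObserver(bound=3)
for i, e in enumerate(trace):
    if not obs.step(e):
        print(f"property violated at event {i}: {e}")
        break
else:
    print("trace accepted")
```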

  11. Structural assessment of aerospace components using image processing algorithms and Finite Element models

    DEFF Research Database (Denmark)

    Stamatelos, Dimtrios; Kappatos, Vassilios

    2017-01-01

    Purpose – This paper presents the development of an advanced structural assessment approach for aerospace components (metallic and composite). This work focuses on developing an automatic image processing methodology based on Non Destructive Testing (NDT) data and numerical models, for predicting...... the residual strength of these components. Design/methodology/approach – An image processing algorithm, based on the threshold method, has been developed to process and quantify the geometric characteristics of damage. Then, a parametric Finite Element (FE) model of the damaged component is developed based...... on the inputs acquired from the image processing algorithm. The analysis of metallic structures employs the Extended FE Method (XFEM), while for composite structures the Cohesive Zone Model (CZM) technique with Progressive Damage Modelling (PDM) is used. Findings – The numerical analyses...
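
    A minimal sketch of the threshold-based damage quantification step is given below on a synthetic array standing in for an NDT image; the threshold value and region statistics are illustrative of the geometric inputs that a parametric FE model of the damaged component would consume.

```python
# Illustrative threshold-based damage quantification of an NDT image (a
# synthetic array stands in for, e.g., a C-scan). The extracted region sizes
# and bounding boxes are the sort of geometric inputs a parametric FE model
# of the damaged component would need.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
image = rng.normal(0.1, 0.05, size=(120, 120))        # background signal
image[30:45, 50:80] += 0.6                            # synthetic delamination
image[80:90, 20:28] += 0.5                            # second, smaller flaw

threshold = 0.4                                       # assumed global threshold
mask = image > threshold
labels, n_regions = ndimage.label(mask)

print(f"{n_regions} damaged region(s) detected")
for idx, sl in enumerate(ndimage.find_objects(labels), start=1):
    area_px = int((labels[sl] == idx).sum())
    h = sl[0].stop - sl[0].start
    w = sl[1].stop - sl[1].start
    print(f"region {idx}: bounding box {h}x{w} px, area {area_px} px")
```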

  12. Cognitive components underpinning the development of model-based learning.

    Science.gov (United States)

    Potter, Tracey C S; Bryce, Nessa V; Hartley, Catherine A

    2017-06-01

    Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9-25, we examined whether the abilities to infer sequential regularities in the environment ("statistical learning"), maintain information in an active state ("working memory") and integrate distant concepts to solve problems ("fluid reasoning") predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  13. A hybrid agent-based approach for modeling microbiological systems.

    Science.gov (United States)

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the Multi-Agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
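
    The following toy sketch illustrates the hybrid representation (not the paper's calibrated chemotaxis model): cells are discrete agents moving by a gradient-following rule, while the chemoattractant is a continuous quantity updated by a finite-difference diffusion/decay step; grid size, rates and cell count are arbitrary.

```python
# Toy sketch of the hybrid idea: cells are agents with rule-based movement,
# the chemoattractant is a continuous field updated by a diffusion/decay step.
# Grid size, rates and the number of cells are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
N = 60
field = np.zeros((N, N))
field[N // 2, N // 2] = 100.0                 # attractant source
cells = rng.integers(0, N, size=(200, 2))     # agent positions (row, col)

def diffuse_decay(f, D=0.2, decay=0.01):
    """Continuous part: explicit finite-difference diffusion plus first-order decay."""
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
    return f + D * lap - decay * f

for step in range(200):
    field = diffuse_decay(field)
    field[N // 2, N // 2] += 5.0              # source keeps secreting
    gy, gx = np.gradient(field)
    for c in cells:                           # agent part: biased walk up-gradient
        step_r = np.sign(gy[c[0], c[1]]) if rng.random() < 0.8 else rng.integers(-1, 2)
        step_c = np.sign(gx[c[0], c[1]]) if rng.random() < 0.8 else rng.integers(-1, 2)
        c[0] = np.clip(c[0] + int(step_r), 0, N - 1)
        c[1] = np.clip(c[1] + int(step_c), 0, N - 1)

dist = np.linalg.norm(cells - N // 2, axis=1)
print("mean distance of cells to source after 200 steps:", dist.mean())
```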

  14. Computational model of precision grip in Parkinson’s disease: A Utility based approach

    Directory of Open Access Journals (Sweden)

    Ankur eGupta

    2013-12-01

    Full Text Available We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson’s Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Fellows et al 1998; Ingvarsson et al 1997). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest a contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals to decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between the two fingers involved in PG and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning, with the significant difference that the action selection is performed using a utility distribution instead of a purely value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the precision grip results from normal and PD patients accurately (Fellows et al. 1998; Ingvarsson et al. 1997). To our knowledge the model is the first model of precision grip in PD conditions.
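
    The paper's exact utility formulation is not given in the abstract; the sketch below uses a common risk-sensitive form (expected value plus a risk-weighted standard-deviation term) with softmax selection to illustrate how utility-based action selection differs from purely value-based selection; all numbers, and the dopamine interpretation of the risk weight, are illustrative assumptions.

```python
# Sketch of risk-sensitive (utility-based) action selection of the kind the
# abstract describes, not the paper's exact formulation: the utility of each
# grip-force option combines expected value with a risk term, and the risk
# weight alpha is the knob a dopamine-related manipulation would be assumed to change.
import numpy as np

rng = np.random.default_rng(4)

def utility(mean_q, var_q, alpha):
    """Utility = expected value + alpha * sqrt(variance); alpha < 0 is risk-averse."""
    return mean_q + alpha * np.sqrt(var_q)

def softmax_select(utilities, beta=5.0):
    p = np.exp(beta * (utilities - utilities.max()))
    p /= p.sum()
    return rng.choice(len(utilities), p=p), p

# Three candidate grip-force levels with learned value estimates (illustrative).
mean_q = np.array([0.6, 0.8, 0.9])    # higher force -> higher expected success
var_q = np.array([0.01, 0.05, 0.20])  # ...but also higher outcome variability

for alpha, label in [(-1.0, "risk-averse (assumed low-dopamine regime)"),
                     (0.0, "risk-neutral")]:
    u = utility(mean_q, var_q, alpha)
    choice, p = softmax_select(u)
    print(f"{label}: utilities={np.round(u, 3)}, selection probs={np.round(p, 2)}")
```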

  15. A Simplified Multipath Component Modeling Approach for High-Speed Train Channel Based on Ray Tracing

    Directory of Open Access Journals (Sweden)

    Jingya Yang

    2017-01-01

    Full Text Available High-speed train (HST) communications at millimeter-wave (mmWave) band have received a lot of attention due to their numerous high-data-rate applications enabling smart rail mobility. Accurate and effective channel models are always critical to HST system design, assessment, and optimization. A distinctive feature of the mmWave HST channel is that it is rapidly time-varying. To depict this feature, a geometry-based multipath model is established for the dominant multipath behavior in the delay and Doppler domains. Because of insufficient mmWave HST channel measurement with high mobility, the model is developed by a measurement-validated ray tracing (RT) simulator. Different from conventional models, the temporal evolution of dominant multipath behavior is characterized by its geometry factor, which represents the geometrical relationship of the dominant multipath component (MPC) to the HST environment. During each dominant multipath lifetime, its geometry factor is fixed. To statistically model the geometry factor and its lifetime, the dominant MPCs are extracted within each local wide-sense stationary (WSS) region and are tracked over different WSS regions to identify their “birth” and “death” regions. Then, the complex attenuation of a dominant MPC is jointly modeled by its delay and Doppler shift, both of which are derived from its geometry factor. Finally, the model implementation is verified by comparison between RT-simulated and modeled delay and Doppler spreads.
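
    A minimal sketch of how a dominant MPC's delay and Doppler shift follow from its geometry relative to the moving train is given below; the carrier frequency, speed and positions are illustrative values, not the measurement-validated RT setup of the paper.

```python
# Minimal sketch of deriving a dominant MPC's delay and Doppler shift from its
# geometry relative to the moving train; positions, speed and carrier are
# illustrative values, not the paper's measurement setup.
import numpy as np

c = 3e8                      # speed of light [m/s]
fc = 30e9                    # assumed mmWave carrier [Hz]
lam = c / fc
v = 100.0                    # train speed [m/s] (~360 km/h), moving along +x

tx = np.array([0.0, 10.0, 20.0])          # base-station antenna position [m]
scatterer = np.array([150.0, -8.0, 5.0])  # fixed scatterer defining the MPC

def mpc_delay_doppler(rx):
    """Delay and Doppler of the single-bounce path tx -> scatterer -> rx."""
    d = np.linalg.norm(scatterer - tx) + np.linalg.norm(rx - scatterer)
    delay = d / c
    # Doppler from the angle between train velocity (+x) and arrival direction.
    arrival = (scatterer - rx) / np.linalg.norm(scatterer - rx)
    doppler = (v / lam) * arrival[0]       # v/lambda * cos(angle)
    return delay, doppler

for x_train in (0.0, 100.0, 200.0):        # train (receiver) positions along the track
    rx = np.array([x_train, 0.0, 3.0])
    tau, fd = mpc_delay_doppler(rx)
    print(f"train at x={x_train:5.0f} m: delay={tau*1e9:6.1f} ns, Doppler={fd/1e3:7.2f} kHz")
```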

  16. A Model of Yeast Cell-Cycle Regulation Based on a Standard Component Modeling Strategy for Protein Regulatory Networks.

    Directory of Open Access Journals (Sweden)

    Teeraphan Laomettachit

    Full Text Available To understand the molecular mechanisms that regulate cell cycle progression in eukaryotes, a variety of mathematical modeling approaches have been employed, ranging from Boolean networks and differential equations to stochastic simulations. Each approach has its own characteristic strengths and weaknesses. In this paper, we propose a "standard component" modeling strategy that combines advantageous features of Boolean networks, differential equations and stochastic simulations in a framework that acknowledges the typical sorts of reactions found in protein regulatory networks. Applying this strategy to a comprehensive mechanism of the budding yeast cell cycle, we illustrate the potential value of standard component modeling. The deterministic version of our model reproduces the phenotypic properties of wild-type cells and of 125 mutant strains. The stochastic version of our model reproduces the cell-to-cell variability of wild-type cells and the partial viability of the CLB2-dbΔ clb5Δ mutant strain. Our simulations show that mathematical modeling with "standard components" can capture in quantitative detail many essential properties of cell cycle control in budding yeast.

  17. Space-time latent component Modeling of Geo-referenced health data

    OpenAIRE

    Lawson, Andrew B.; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-01-01

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find underlying trends in time which are supported by subsets of small areas. Latent structure modeling is one approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made.

  18. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Taylor; Guo, Yi; Veers, Paul; Dykes, Katherine; Damiani, Rick

    2016-01-26

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.

  19. Application of Transfer Matrix Approach to Modeling and Decentralized Control of Lattice-Based Structures

    Science.gov (United States)

    Cramer, Nick; Swei, Sean Shan-Min; Cheung, Kenny; Teodorescu, Mircea

    2015-01-01

    This paper presents the modeling and control of an aerostructure developed from lattice-based cellular materials/components. The proposed aerostructure concept leverages a building-block strategy for lattice-based components, which provides great adaptability to varying flight scenarios, the needs of which are essential for in-flight wing shaping control. A decentralized structural control design is proposed that utilizes the discrete-time lumped mass transfer matrix method (DT-LM-TMM). The objective is to develop an effective reduced order model through DT-LM-TMM that can be used to design a decentralized controller for the structural control of a wing. The proposed approach developed in this paper shows that, as far as the performance of the overall structural system is concerned, the reduced order model can be as effective as the full order model in designing an optimal stabilizing controller.
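
    The paper uses a discrete-time lumped-mass transfer matrix method for decentralized control design; the simpler classical frequency-domain variant sketched below only illustrates the underlying state-propagation idea on a fixed-free spring-mass chain, with natural frequencies located by a sign-change sweep of the boundary residual. Chain parameters are arbitrary.

```python
# Classical frequency-domain lumped-mass transfer matrix sketch (the paper's
# variant is discrete-time and used for control design; this simpler version
# only illustrates state propagation). Chain: wall -- k -- m -- k -- m ... free end.
import numpy as np

def end_residual(omega, n=5, m=1.0, k=1.0e4):
    """Propagate the state [displacement, internal force] through the chain.

    For a fixed-free chain the boundary conditions are x = 0 at the wall and
    force = 0 at the free end; a natural frequency makes the residual vanish.
    """
    field = np.array([[1.0, 1.0 / k], [0.0, 1.0]])             # massless spring
    point = np.array([[1.0, 0.0], [-m * omega ** 2, 1.0]])     # lumped mass
    T = np.eye(2)
    for _ in range(n):
        T = point @ field @ T
    return T[1, 1]   # force at the free end per unit force at the wall

# Sweep the frequency axis and report sign changes of the residual.
omegas = np.linspace(1.0, 250.0, 20000)
res = np.array([end_residual(w) for w in omegas])
idx = np.where(np.sign(res[:-1]) != np.sign(res[1:]))[0]
print("natural frequencies [rad/s]:", np.round(omegas[idx], 1))
```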

  20. A model-based software development methodology for high-end automotive components

    NARCIS (Netherlands)

    Ravanan, Mahmoud

    2014-01-01

    This report provides a model-based software development methodology for high-end automotive components. The V-model is used as a process model throughout the development of the software platform. It offers a framework that simplifies the relation between requirements, design, implementation,

  1. Nuclear component design ontology building based on ASME codes

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2005-01-01

    The adoption of ontology analysis in the study of concept knowledge acquisition and representation for the nuclear component design process, based on computer-supported cooperative work (CSCW), makes it possible to share and reuse the extensive concept knowledge of multi-disciplinary domains. A practical ontology building method is accordingly proposed based on the Protege knowledge model, in combination with both top-down and bottom-up approaches together with Formal Concept Analysis (FCA). FCA exhibits its advantages in the way it helps establish and improve the taxonomic hierarchy of concepts and resolve concept conflicts that occur in modeling multi-disciplinary domains. With Protege-3.0 as the ontology building tool, a nuclear component design ontology based on ASME codes is developed by utilizing the ontology building method. The ontology serves as the basis for realizing the sharing and reuse of nuclear component design concept knowledge. (authors)

  2. Photonic Beamformer Model Based on Analog Fiber-Optic Links’ Components

    International Nuclear Information System (INIS)

    Volkov, V A; Gordeev, D A; Ivanov, S I; Lavrov, A P; Saenko, I I

    2016-01-01

    The model of photonic beamformer for wideband microwave phased array antenna is investigated. The main features of the photonic beamformer model based on true-time-delay technique, DWDM technology and fiber chromatic dispersion are briefly analyzed. The performance characteristics of the key components of photonic beamformer for phased array antenna in the receive mode are examined. The beamformer model composed of the components available on the market of fiber-optic analog communication links is designed and tentatively investigated. Experimental demonstration of the designed model beamforming features includes actual measurement of 5-element microwave linear array antenna far-field patterns in 6-16 GHz frequency range for antenna pattern steering up to 40°. The results of experimental testing show good accordance with the calculation estimates. (paper)

  3. Generic component failure data base

    International Nuclear Information System (INIS)

    Eide, S.A.; Calley, M.B.

    1992-01-01

    This report discusses a comprehensive generic component failure database which has been developed for light water reactor probabilistic risk assessments. The Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) was used to generate component failure rates. Using this approach, most of the failure rates are based on actual plant data rather than existing estimates.

  4. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their estimates can be subjective under a particular set of circumstances, and it is therefore not easy to quantify reliability. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system that predicts the reliability of electronic components using the statistical analysis method, which is the most straightforward to apply. The failure rate models that were applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with a general purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the system that we have developed is expected to contribute to enhancing the reliability of electronic components.
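
    The multiplicative part-stress structure shared by failure rate models of the MIL-HDBK-217 family is sketched below; the base rate, activation energy and pi-factors are placeholders for illustration only and are not actual handbook values, which must be taken from the respective standards.

```python
# Structure of a part-stress failure-rate model of the MIL-HDBK-217 family.
# The base rate and pi-factors below are placeholders for illustration only --
# they are NOT actual handbook values.
import math

def temperature_factor(t_celsius, ea_ev=0.4, t_ref=25.0):
    """Arrhenius-style acceleration relative to the reference temperature."""
    k_b = 8.617e-5  # Boltzmann constant [eV/K]
    t = t_celsius + 273.15
    t0 = t_ref + 273.15
    return math.exp((ea_ev / k_b) * (1.0 / t0 - 1.0 / t))

def part_failure_rate(lambda_base, t_celsius, pi_quality, pi_environment):
    """lambda_p = lambda_b * pi_T * pi_Q * pi_E  [failures / 1e6 h]."""
    return lambda_base * temperature_factor(t_celsius) * pi_quality * pi_environment

# Example: a hypothetical component, commercial quality, ground-benign environment.
lam = part_failure_rate(lambda_base=0.01, t_celsius=55.0,
                        pi_quality=3.0, pi_environment=1.0)
print(f"predicted failure rate: {lam:.4f} failures per 1e6 hours")
```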

  5. Space-time latent component modeling of geo-referenced health data.

    Science.gov (United States)

    Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-08-30

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.

  6. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
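
    To make the model structure concrete, the sketch below implements only the forward (generative) side: K polynomial regression regimes whose activation over time is governed by a softmax of linear functions of time; the EM/IRLS estimation described in the abstract is omitted and all coefficients are illustrative.

```python
# Forward (generative) side of a regression model with a hidden logistic
# process: K polynomial regimes whose activation over time follows a softmax
# of linear functions of t. The EM/IRLS fitting step is omitted; the
# coefficients below are illustrative.
import numpy as np

t = np.linspace(0.0, 1.0, 200)

# Two polynomial regimes (illustrative): a slow ramp and a higher plateau.
betas = [np.array([0.0, 2.0, 0.0]),      # regime 1: 0 + 2 t
         np.array([1.5, 0.0, -0.5])]     # regime 2: 1.5 - 0.5 t^2

# Logistic (softmax) process parameters: regime 2 takes over smoothly near t = 0.5.
W = np.array([[0.0, 0.0],                # regime 1: constant logit
              [-10.0, 20.0]])            # regime 2: logit = -10 + 20 t

def regime_weights(t, W):
    logits = W[:, 0][:, None] + W[:, 1][:, None] * t[None, :]
    logits -= logits.max(axis=0)
    p = np.exp(logits)
    return p / p.sum(axis=0)              # shape (K, len(t))

def model_mean(t, betas, W):
    design = np.vstack([t ** d for d in range(3)])          # 1, t, t^2
    regimes = np.array([b @ design for b in betas])          # (K, len(t))
    return (regime_weights(t, W) * regimes).sum(axis=0)

y = model_mean(t, betas, W) + 0.05 * np.random.default_rng(5).standard_normal(t.size)
print("smooth regime switch around t=0.5; mean at t=0, 0.5, 1:",
      np.round(model_mean(np.array([0.0, 0.5, 1.0]), betas, W), 3))
```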

  7. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  8. Hierarchical Agent-Based Integrated Modelling Approach for Microgrids with Adoption of EVs and HRES

    Directory of Open Access Journals (Sweden)

    Peng Han

    2014-01-01

    Full Text Available The large-scale adoption of electric vehicles (EVs), hybrid renewable energy systems (HRESs), and increasing loads will bring significant challenges to the microgrid. The methodology for modelling a microgrid with high EV and HRES penetration is key to EV adoption assessment and optimized HRES deployment. However, considering the complex interactions of a microgrid containing massive numbers of EVs and HRESs, no previous single modelling approach is sufficient. Therefore, in this paper, a methodology named the Hierarchical Agent-based Integrated Modelling Approach (HAIMA) is proposed. With the effective integration of agent-based modelling with other advanced modelling approaches, the proposed approach theoretically contributes a new microgrid model hierarchically constituted by a microgrid management layer, a component layer, and an event layer. HAIMA then links the key parameters and interconnects them to achieve the interactions of the whole model. Furthermore, HAIMA practically contributes a comprehensive microgrid operation system, through which the assessment of the proposed model and of the impact of EV adoption is achieved. Simulations show that the proposed HAIMA methodology will be beneficial for microgrid studies and EV operation assessment and can be further utilized for energy management, electricity consumption prediction, EV scheduling control, and HRES deployment optimization.

  9. Modeling a terminology-based electronic nursing record system: an object-oriented approach.

    Science.gov (United States)

    Park, Hyeoun-Ae; Cho, InSook; Byeun, NamSoo

    2007-10-01

    The aim of this study was to present our perspectives on healthcare information analysis at a conceptual level and the lessons learned from our experience with the development of a terminology-based enterprise electronic nursing record system - which was one of the components of an EMR system at a tertiary teaching hospital in Korea - using an object-oriented system analysis and design concept. To ensure a systematic approach and effective collaboration, the department of nursing constituted a system modeling team comprising a project manager, systems analysts, user representatives, an object-oriented methodology expert, and healthcare informaticists (including the authors). The Rational Unified Process (RUP) and the Unified Modeling Language were used as a development process and for modeling notation, respectively. From the scenario and RUP approach, user requirements were formulated into use case sets and the sequence of activities in the scenario was depicted in an activity diagram. The structure of the system was presented in a class diagram. This approach allowed us to identify clearly the structural and behavioral states and important factors of a terminology-based ENR system (e.g., business concerns and system design concerns) according to the viewpoints of both domain and technical experts.

  10. Exploring component-based approaches in forest landscape modeling

    Science.gov (United States)

    H. S. He; D. R. Larsen; D. J. Mladenoff

    2002-01-01

    Forest management issues are increasingly required to be addressed in a spatial context, which has led to the development of spatially explicit forest landscape models. The numerous processes, complex spatial interactions, and diverse applications in spatial modeling make the development of forest landscape models difficult for any single research group. New...

  11. Empirical component model to predict the overall performance of heating coils: Calibrations and tests based on manufacturer catalogue data

    International Nuclear Information System (INIS)

    Ruivo, Celestino R.; Angrisani, Giovanni

    2015-01-01

    Highlights: • An empirical model for predicting the performance of heating coils is presented. • Low and high heating capacity cases are used for calibration. • Versions based on several effectiveness correlations are tested. • Catalogue data are considered in approach testing. • The approach is a suitable component model to be used in dynamic simulation tools. - Abstract: A simplified methodology for predicting the overall behaviour of heating coils is presented in this paper. The coil performance is predicted by the ε-NTU method. Usually manufacturers do not provide information about the overall thermal resistance or the geometric details that are required either for the device selection or to apply known empirical correlations for the estimation of the involved thermal resistances. In the present work, heating capacity tables from the manufacturer catalogue are used to calibrate simplified approaches based on the classical theory of heat exchangers, namely the effectiveness method. Only two reference operating cases are required to calibrate each approach. The validity of the simplified approaches is investigated for a relatively high number of operating cases, listed in the technical catalogue of a manufacturer. Four types of coils of three sizes of air handling units are considered. A comparison is conducted between the heating coil capacities provided by the methodology and the values given by the manufacturer catalogue. The results show that several of the proposed approaches are suitable component models to be integrated in dynamic simulation tools of air conditioning systems such as TRNSYS or EnergyPlus
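
    The calibration against catalogue data is not reproduced here; the sketch below simply evaluates a coil's capacity from a given UA value with the ε-NTU method, using the counterflow effectiveness relation for simplicity (actual coils are cross-flow), and all operating-point numbers are illustrative.

```python
# Plain epsilon-NTU evaluation of a heating coil's capacity from a calibrated
# UA value. The counterflow effectiveness relation is used here for simplicity
# (actual coils are cross-flow); UA and the operating point are illustrative,
# not the manufacturer catalogue values discussed in the paper.
import math

def coil_capacity(UA, m_air, m_water, t_air_in, t_water_in,
                  cp_air=1006.0, cp_water=4186.0):
    """Return heat transfer rate [W] and outlet temperatures via epsilon-NTU."""
    C_air, C_water = m_air * cp_air, m_water * cp_water
    C_min, C_max = min(C_air, C_water), max(C_air, C_water)
    Cr = C_min / C_max
    NTU = UA / C_min
    if abs(Cr - 1.0) < 1e-9:
        eps = NTU / (1.0 + NTU)
    else:
        e = math.exp(-NTU * (1.0 - Cr))
        eps = (1.0 - e) / (1.0 - Cr * e)
    Q = eps * C_min * (t_water_in - t_air_in)
    return Q, t_air_in + Q / C_air, t_water_in - Q / C_water

Q, t_air_out, t_water_out = coil_capacity(UA=2500.0, m_air=1.2, m_water=0.25,
                                          t_air_in=5.0, t_water_in=70.0)
print(f"capacity {Q/1000:.1f} kW, air out {t_air_out:.1f} C, water out {t_water_out:.1f} C")
```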

  12. Probabilistic Modeling of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei

    Wind energy is one of several energy sources in the world and a rapidly growing industry in the energy sector. When placed in offshore or onshore locations, wind turbines are exposed to wave excitations, highly dynamic wind loads and/or the wakes from other wind turbines. Therefore, most components...... in a wind turbine experience highly dynamic and time-varying loads. These components may fail due to wear or fatigue, and this can lead to unplanned shutdown repairs that are very costly. The design by deterministic methods using safety factors is generally unable to account for the many uncertainties. Thus......, a reliability assessment should be based on probabilistic methods where stochastic modeling of failures is performed. This thesis focuses on probabilistic models and the stochastic modeling of the fatigue life of the wind turbine drivetrain. Hence, two approaches are considered for stochastic modeling...

  13. Components in models of learning: Different operationalisations and relations between components

    Directory of Open Access Journals (Sweden)

    Mirkov Snežana

    2013-01-01

    Full Text Available This paper presents different operationalisations of components in different models of learning. Special emphasis is placed on empirical verifications of the relations between components. Starting from research on the congruence between learning motives and strategies, underlying the general model of school learning that comprises different approaches to learning, we have analyzed the empirical verifications of the factor structure of instruments containing scales of motives and of the learning strategies corresponding to these motives. Considering the problems in the conceptualization of the achievement approach to learning, we have discussed ways of operationalising the goal orientations and exploring their role in the use of learning strategies, especially within the model of the regulation of constructive learning processes. This model has served as the basis for researching learning styles that are a combination of a large number of components. Complex relations between the components point to the need for further investigation of the constructs involved in the various models. We have discussed the findings and implications of studies of the relations between the components involved in different models, especially between learning motives/goals and learning strategies. We have analyzed the role of regulation in the learning process, whose elaboration, as indicated by empirical findings, can contribute to a more precise operationalisation of certain learning components. [Projects of the Ministry of Science of the Republic of Serbia, No. 47008: Improving the quality and accessibility of education in the modernization processes of Serbia, and No. 179034: From encouraging initiative, cooperation and creativity in education to new roles and identities in society]

  14. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae; Top, Søren

    2008-01-01

    , communication and constraints, using computational blocks and aggregates for both discrete and continuous behaviour, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands for more functionality, at even lower prices, and with opposite...... to be analyzed. One way of doing that is to integrate, in wrapper files, the model back into Simulink S-functions and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behaviour, and the transformation of the software system into the S-functions. The general aim of this work is the improvement of multi-disciplinary development of embedded systems with a focus on the relation...

  15. Addressing dependability by applying an approach for model-based risk assessment

    International Nuclear Information System (INIS)

    Gran, Bjorn Axel; Fredriksen, Rune; Thunem, Atoosa P.-J.

    2007-01-01

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied for addressing different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects such as system functions, component behaviours and intercommunications must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and provided through a series of seven trials a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of trust case development.

  16. Addressing dependability by applying an approach for model-based risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjorn Axel [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: bjorn.axel.gran@hrp.no; Fredriksen, Rune [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: rune.fredriksen@hrp.no; Thunem, Atoosa P.-J. [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: atoosa.p-j.thunem@hrp.no

    2007-11-15

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied for addressing different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources, their roles with regard to intentional system aspects such as system functions, component behaviours and intercommunications must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and provided through a series of seven trials a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how the methodology can be applied as part of trust case development.

  17. New approaches to the modelling of multi-component fuel droplet heating and evaporation

    KAUST Repository

    Sazhin, Sergei S; Elwardany, Ahmed E; Heikal, Morgan R

    2015-01-01

    numbers n and temperatures is taken into account. The effects of temperature gradient and quasi-component diffusion inside droplets are taken into account. The analysis is based on the Effective Thermal Conductivity/Effective Diffusivity (ETC/ED) model

  18. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  19. A model based message passing approach for flexible and scalable home automation controllers

    Energy Technology Data Exchange (ETDEWEB)

    Bienhaus, D. [INNIAS GmbH und Co. KG, Frankenberg (Germany); David, K.; Klein, N.; Kroll, D. [ComTec Kassel Univ., SE Kassel Univ. (Germany); Heerdegen, F.; Jubeh, R.; Zuendorf, A. [Kassel Univ. (Germany). FG Software Engineering; Hofmann, J. [BSC Computer GmbH, Allendorf (Germany)

    2012-07-01

    There is a large variety of home automation systems that are largely proprietary systems from different vendors. In addition, the configuration and administration of home automation systems is frequently a very complex task especially, if more complex functionality shall be achieved. Therefore, an open model for home automation was developed that is especially designed for easy integration of various home automation systems. This solution also provides a simple modeling approach that is inspired by typical home automation components like switches, timers, etc. In addition, a model based technology to achieve rich functionality and usability was implemented. (orig.)

  20. Component Degradation Susceptibilities As The Bases For Modeling Reactor Aging Risk

    International Nuclear Information System (INIS)

    Unwin, Stephen D.; Lowry, Peter P.; Toyooka, Michael Y.

    2010-01-01

    The extension of nuclear power plant operating licenses beyond 60 years in the United States will be necessary if we are to meet national energy needs while addressing the issues of carbon and climate. Characterizing the operating risks associated with aging reactors is problematic because the principal tool for risk-informed decision-making, Probabilistic Risk Assessment (PRA), is not ideally suited to addressing aging systems. The components most likely to drive risk in an aging reactor - the passives - receive limited treatment in PRA, and furthermore, standard PRA methods are based on the assumption of stationary failure rates: a condition unlikely to be met in an aging system. A critical barrier to modeling passives aging on the wide scale required for a PRA is that there is seldom sufficient field data to populate parametric failure models, nor are practical physics models available to predict out-year component reliability. The methodology described here circumvents some of these data and modeling needs by using materials degradation metrics, integrated with conventional PRA models, to produce risk importance measures for specific aging mechanisms and component types. We suggest that these measures have multiple applications, from the risk-screening of components to the prioritization of materials research.

  1. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response. In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model-based

  2. Thermomechanical Modeling of Sintered Silver - A Fracture Mechanics-based Approach: Extended Abstract: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Paret, Paul P [National Renewable Energy Laboratory (NREL), Golden, CO (United States); DeVoto, Douglas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-01

    Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (less than 200 degrees Celsius). Accurate predictive lifetime models of sintered silver need to be developed and its failure mechanisms thoroughly characterized before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. We present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed. In this paper, we outline the procedures for obtaining the J-integral/thermal cycle values in a computational model and report on the possible advantage of using these values as modeling parameters in a predictive lifetime model.

  3. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings.

    Science.gov (United States)

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-05-18

    The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features' information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components.
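
    The sketch below shows only the feature-extraction step (multi-scale permutation entropy of a vibration signal); the VMD decomposition, PCA reduction and generalized hidden Markov classification are omitted, and the embedding order, delay, scales and synthetic signals are illustrative choices.

```python
# Feature-extraction step only: multi-scale permutation entropy of a vibration
# signal (the VMD decomposition and the generalized hidden Markov classifier
# are omitted). Embedding order, delay and scales are illustrative choices.
import math
import numpy as np

def permutation_entropy(x, order=4, delay=1):
    """Normalized permutation entropy of a 1-D signal (0 = regular, 1 = random)."""
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window).tolist())
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(order)))

def multiscale_pe(x, scales=(1, 2, 3, 4), order=4):
    feats = []
    for s in scales:
        # Coarse-grain: average non-overlapping windows of length s.
        m = len(x) // s
        coarse = x[:m * s].reshape(m, s).mean(axis=1)
        feats.append(permutation_entropy(coarse, order=order))
    return np.array(feats)

rng = np.random.default_rng(6)
t = np.arange(0, 2.0, 1e-4)
healthy = np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(t.size)
faulty = healthy + 0.8 * (rng.random(t.size) < 0.002) * rng.standard_normal(t.size)
print("healthy MPE:", np.round(multiscale_pe(healthy), 3))
print("faulty  MPE:", np.round(multiscale_pe(faulty), 3))
```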

  4. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    Directory of Open Access Journals (Sweden)

    Jie Liu

    2017-05-01

    Full Text Available The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components.

  5. Research on development model of nuclear component based on life cycle management

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2005-01-01

    At present, the development process of a nuclear component, and even the nuclear component itself, is increasingly supported by computer technology. This increasing utilization of computers and software has led to faster development of nuclear technology on the one hand, but has also brought new problems on the other. In particular, the combination of hardware, software and humans has increased nuclear component system complexity to an unprecedented level. To address this problem, Life Cycle Management technology is adopted for the nuclear component system, and an intensive discussion of the development process of a nuclear component is provided. According to the characteristics of nuclear component development, such as the complexity and strict safety requirements of nuclear components, long design periods, changeable design specifications and requirements, high capital investment, and the need to satisfy engineering codes/standards, the development life-cycle model of a nuclear component is presented. The development life-cycle model is classified into three levels, namely, the component-level development life-cycle, the sub-component development life-cycle and the component-level verification/certification life-cycle. The purposes and outcomes of the development processes are stated in detail. A process framework for nuclear components based on systems engineering and a development environment for nuclear components are discussed as future research work. (authors)

  6. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as the prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. A new approach to estimating model errors based on EM is therefore proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it realizes a combination of statistics and dynamics to a certain extent.

  7. Blood component therapy in anesthesia and intensive care: Adoption of evidence based approaches

    Directory of Open Access Journals (Sweden)

    Sukhminder Jit Singh Bajwa

    2014-01-01

    Full Text Available Transfusion of blood and its components has undergone technological advancement, and its use is increasing both perioperatively as well as in the Intensive Care Unit. The separation of blood into its various components has made it very economical as blood donated from a single donor can be utilized for many recipients at the same time. However, the transfusion of blood and its components does carry the inherent risk of various transfusion reactions as well as transmission of infections. The indications for transfusion should be strictly adhered to for preventing nonjudicious use. Health care personnel involved in transfusion should be well aware of the implications of a mismatched transfusion and should be able to provide treatment if such mishaps do occur. A health care professional should carefully weigh the benefits of blood transfusion against the risks involved before subjecting a patient to transfusion. This manuscript aims to comprehensively review the current evidence-based approaches in blood and component transfusion which are being followed in anesthesiology and intensive care practice.

  8. Microservices as an Evolutionary Architecture of Component-Based Development: A Think-aloud Study

    OpenAIRE

    Parizi, Reza M.

    2018-01-01

    Microservices have become a fast-growing and popular architectural style based on service-oriented development. One of the major advantages of using component-based approaches is support for reuse. In this paper, we present a study of microservices and how these systems are related to the traditional abstract models of component-based systems. This research focuses on the core properties of microservices, including their scalability, availability and resilience, consistency, coupling and cohesion, and ...

  9. Surface inspection system for industrial components based on shape from shading minimization approach

    Science.gov (United States)

    Kotan, Muhammed; Öz, Cemil

    2017-12-01

    An inspection system is proposed that uses estimated three-dimensional (3-D) surface information to detect and classify faults, increasing quality control of frequently used industrial components. Shape from shading (SFS) is one of the basic and classic 3-D shape recovery problems in computer vision. In our application, we developed a system using the Frankot and Chellappa SFS method, based on the minimization of the selected basis function. First, the specialized image acquisition system captured images of the component. To eliminate noise, a wavelet transform was applied to the captured images. Then, the estimated gradients were used to obtain depth and surface profiles. The depth information was used to determine and classify the surface defects. A comparison with some linearization-based SFS algorithms is also discussed. The developed system was applied to real products, and the results indicate that using SFS approaches is useful and that various types of defects can easily be detected in a short period of time.
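
    The sketch below shows the Frankot-Chellappa integration step in isolation: given gradient fields p = dZ/dx and q = dZ/dy, the depth map is recovered by projecting onto integrable Fourier basis functions. The synthetic dome surface stands in for gradients that, in the inspection system above, would come from the SFS stage after wavelet denoising.

```python
import numpy as np

def frankot_chellappa(p, q):
    """Recover a depth map from gradient fields p = dZ/dx, q = dZ/dy (per pixel)."""
    rows, cols = p.shape
    wx = np.fft.fftfreq(cols) * 2.0 * np.pi          # angular frequencies per pixel
    wy = np.fft.fftfreq(rows) * 2.0 * np.pi
    wx, wy = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = wx ** 2 + wy ** 2
    denom[0, 0] = 1.0                                 # avoid division by zero at DC
    Z = (-1j * wx * P - 1j * wy * Q) / denom          # least-squares integrable surface
    Z[0, 0] = 0.0                                     # absolute depth offset is unknown
    return np.real(np.fft.ifft2(Z))

# Synthetic test surface: a dome whose gradients are known numerically.
y, x = np.mgrid[-1:1:128j, -1:1:128j]
z_true = np.exp(-(x ** 2 + y ** 2) * 4)
q_true, p_true = np.gradient(z_true)                  # np.gradient returns (d/drow, d/dcol)
z_rec = frankot_chellappa(p_true, q_true)
print("RMS difference to (zero-mean) true surface:",
      np.sqrt(np.mean((z_rec - (z_true - z_true.mean())) ** 2)))
```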

  10. Unblockable Compositions of Software Components

    DEFF Research Database (Denmark)

    Dong, Ruzhen; Faber, Johannes; Liu, Zhiming

    2012-01-01

    We present a new automata-based interface model describing the interaction behavior of software components. Contrary to earlier component- or interface-based approaches, the interface model we propose specifies all the non-blockable interaction behaviors of a component with any environment...... composition of interface models preserves unblockable sequences of provided services....

  11. Aircraft operational reliability—A model-based approach and a case study

    International Nuclear Information System (INIS)

    Tiassou, Kossi; Kanoun, Karama; Kaâniche, Mohamed; Seguin, Christel; Papadopoulos, Chris

    2013-01-01

    The success of an aircraft mission is subject to the fulfillment of some operational requirements before and during each flight. As these requirements depend essentially on the aircraft system components and the mission profile, the effects of failures can be very severe if they are not anticipated. Hence, one should be able to assess the aircraft operational reliability with regard to its missions in order to be able to cope with failures. We address aircraft operational reliability modeling to support maintenance planning during the mission achievement. We develop a modeling approach, based on a meta-model that is used as a basis: (i) to structure the information needed to assess aircraft operational reliability and (ii) to build a stochastic model that can be tuned dynamically, in order to take into account the aircraft system operational state, a mission profile and the maintenance facilities available at the flight stop locations involved in the mission. The aim is to enable operational reliability assessment online. A case study, based on an aircraft subsystem, is considered for illustration using the Stochastic Activity Networks (SANs) formalism

  12. Principal components analysis of an evaluation of the hemiplegic subject based on the Bobath approach.

    Science.gov (United States)

    Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y

    1992-01-01

    An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation with the use of a statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each time this produced three factors that contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of such exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework substantiating the Bobath approach to treatment.

  13. Model based estimation for multi-modal user interface component selection

    CSIR Research Space (South Africa)

    Coetzee, L

    2009-12-01

    Full Text Available and literacy level of the user should be taken into account. This paper presents one approach to develop a cost-based model which can be used to derive appropriate mappings for specific user profiles. The model is explained through a number of small examples...

  14. COTS-based OO-component approach for software inter-operability and reuse (software systems engineering methodology)

    Science.gov (United States)

    Yin, J.; Oyaki, A.; Hwang, C.; Hung, C.

    2000-01-01

    The purpose of this research and study paper is to provide a summary description and results of rapid development accomplishments at NASA/JPL in the area of advanced distributed computing technology, using a Commercial-Off-The-Shelf (COTS)-based, object-oriented component approach to open, inter-operable software development and software reuse.

  15. Spatial pattern evaluation of a calibrated national hydrological model - a remote-sensing-based diagnostic approach

    Science.gov (United States)

    Mendiguren, Gorka; Koch, Julian; Stisen, Simon

    2017-11-01

    Distributed hydrological models are traditionally evaluated against discharge stations, emphasizing the temporal and neglecting the spatial component of a model. The present study widens the traditional paradigm by highlighting spatial patterns of evapotranspiration (ET), a key variable at the land-atmosphere interface, obtained from two different approaches at the national scale of Denmark. The first approach is based on a national water resources model (DK-model), using the MIKE-SHE model code, and the second approach utilizes a two-source energy balance model (TSEB) driven mainly by satellite remote sensing data. Ideally, the hydrological model simulation and remote-sensing-based approach should present similar spatial patterns and driving mechanisms of ET. However, the spatial comparison showed that the differences are significant and indicate insufficient spatial pattern performance of the hydrological model. The differences in spatial patterns can partly be explained by the fact that the hydrological model is configured to run in six domains that are calibrated independently from each other, as is often the case for large-scale multi-basin calibrations. Furthermore, the model incorporates predefined temporal dynamics of leaf area index (LAI), root depth (RD) and crop coefficient (Kc) for each land cover type. This zonal approach of model parameterization ignores the spatiotemporal complexity of the natural system. To overcome this limitation, this study features a modified version of the DK-model in which LAI, RD and Kc are empirically derived using remote sensing data and detailed soil property maps in order to generate a higher degree of spatiotemporal variability and spatial consistency between the six domains. The effects of these changes are analyzed by using empirical orthogonal function (EOF) analysis to evaluate spatial patterns. The EOF analysis shows that including remote-sensing-derived LAI, RD and Kc in the distributed hydrological model adds

  16. Modelling raster-based monthly water balance components for Europe

    Energy Technology Data Exchange (ETDEWEB)

    Ulmen, C.

    2000-11-01

    The terrestrial runoff component is a comparatively small but sensitive and thus significant quantity in the global energy and water cycle at the interface between landmass and atmosphere. As opposed to soil moisture and evapotranspiration, which critically determine water vapour fluxes and thus water and energy transport, it can be measured as an integrated quantity over a large area, i.e. the river basin. This peculiarity makes terrestrial runoff ideally suited for the calibration, verification and validation of general circulation models (GCMs). Gauging stations are not homogeneously distributed in space. Moreover, time series are not necessarily continuously measured nor do they in general have overlapping time periods. To overcome these problems with regard to the regular grid spacing used in GCMs, different methods can be applied to transform irregular data into regular, so-called gridded runoff fields. The present work aims to directly compute the gridded components of the monthly water balance (including gridded runoff fields) for Europe by application of the well-established raster-based macro-scale water balance model WABIMON used at the Federal Institute of Hydrology, Germany. Model calibration and validation are performed by separate examination of 29 representative European catchments. Results indicate a general applicability of the model, delivering reliable overall patterns and integrated quantities on a monthly basis. For time steps of less than two weeks, further research and structural improvements of the model are suggested. (orig.)
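
    As a toy illustration of the raster-based idea, the sketch below runs a single-cell monthly bucket water balance: precipitation fills a soil store, evapotranspiration is limited by available water, and the overflow becomes runoff. WABIMON itself is far more detailed; the bucket capacity, initial storage and the monthly forcing series are invented for the example.

```python
import numpy as np

def monthly_water_balance(precip, pet, capacity=150.0, s0=75.0):
    """Simple bucket model for one grid cell; precip and pet are monthly series in mm."""
    s = s0
    aet_list, runoff_list, storage_list = [], [], []
    for p, e in zip(precip, pet):
        water = s + p
        aet = min(e, water)                     # actual ET limited by available water
        water -= aet
        runoff = max(0.0, water - capacity)     # bucket overflow becomes runoff
        s = water - runoff                      # remaining water is carried over
        aet_list.append(aet); runoff_list.append(runoff); storage_list.append(s)
    return np.array(aet_list), np.array(runoff_list), np.array(storage_list)

# Hypothetical monthly forcing for one cell (mm/month), January to December
precip = np.array([90, 70, 60, 45, 40, 30, 25, 30, 45, 60, 80, 95], dtype=float)
pet    = np.array([10, 15, 30, 50, 80, 100, 110, 95, 60, 35, 15, 10], dtype=float)
aet, runoff, storage = monthly_water_balance(precip, pet)
print("annual actual ET [mm]:", round(aet.sum(), 1))
print("annual runoff   [mm]:", round(runoff.sum(), 1))
```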

  17. Blind Separation of Acoustic Signals Combining SIMO-Model-Based Independent Component Analysis and Binary Masking

    Directory of Open Access Journals (Sweden)

    Hiekata Takashi

    2006-01-01

    Full Text Available A new two-stage blind source separation (BSS) method for convolutive mixtures of speech is proposed, in which a single-input multiple-output (SIMO)-model-based independent component analysis (ICA) and a new SIMO-model-based binary masking are combined. SIMO-model-based ICA enables us to separate the mixed signals, not into monaural source signals but into SIMO-model-based signals from independent sources in their original form at the microphones. Thus, the separated signals of SIMO-model-based ICA can maintain the spatial qualities of each sound source. Owing to this attractive property, our novel SIMO-model-based binary masking can be applied to efficiently remove the residual interference components after SIMO-model-based ICA. The experimental results reveal that the separation performance can be considerably improved by the proposed method compared with that achieved by conventional BSS methods. In addition, the real-time implementation of the proposed BSS is illustrated.
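
    The sketch below illustrates only the binary-masking refinement stage on synthetic signals (the SIMO-model-based ICA front end is not reproduced): the two ICA outputs are compared bin by bin in the time-frequency domain and each output keeps only the bins where it dominates, which suppresses residual interference.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# Stand-ins for two ICA outputs, each still containing a residual of the other source
y1 = np.sin(2 * np.pi * 300 * t) + 0.2 * np.sin(2 * np.pi * 1200 * t)
y2 = np.sin(2 * np.pi * 1200 * t) + 0.2 * np.sin(2 * np.pi * 300 * t)

f, tt, Y1 = stft(y1, fs=fs, nperseg=256)
_, _, Y2 = stft(y2, fs=fs, nperseg=256)

mask1 = (np.abs(Y1) > np.abs(Y2)).astype(float)       # keep bins where output 1 dominates
_, y1_clean = istft(Y1 * mask1, fs=fs, nperseg=256)
_, y2_clean = istft(Y2 * (1.0 - mask1), fs=fs, nperseg=256)

print("energy removed from output 1: %.1f %%"
      % (100 * (1 - np.sum(y1_clean ** 2) / np.sum(y1 ** 2))))
```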

  18. Probabilistic reasoning for assembly-based 3D modeling

    KAUST Repository

    Chaudhuri, Siddhartha; Kalogerakis, Evangelos; Guibas, Leonidas; Koltun, Vladlen

    2011-01-01

    Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling

  19. Empirical projection-based basis-component decomposition method

    Science.gov (United States)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.

  20. Process-based distributed modeling approach for analysis of sediment dynamics in a river basin

    Directory of Open Access Journals (Sweden)

    M. A. Kabir

    2011-04-01

    Full Text Available Modeling of sediment dynamics for developing best management practices of reducing soil erosion and of sediment control has become essential for sustainable management of watersheds. Precise estimation of sediment dynamics is very important since soils are a major component of enormous environmental processes and sediment transport controls lake and river pollution extensively. Different hydrological processes govern sediment dynamics in a river basin, and these are highly variable at spatial and temporal scales. This paper presents a process-based distributed modeling approach for analysis of sediment dynamics at river basin scale by integrating sediment processes (soil erosion, sediment transport and deposition) with an existing process-based distributed hydrological model. In this modeling approach, the watershed is divided into an array of homogeneous grids to capture the catchment spatial heterogeneity. Hillslope and river sediment dynamic processes have been modeled separately and linked to each other consistently. Water flow and sediment transport at different land grids and river nodes are modeled using a one-dimensional kinematic wave approximation of the Saint-Venant equations. The mechanics of sediment dynamics are integrated into the model using representative physical equations after a comprehensive review. The model has been tested on river basins in two different hydro-climatic areas, the Abukuma River Basin, Japan and the Latrobe River Basin, Australia. Sediment transport and deposition are modeled using the Govers transport capacity equation. All spatial datasets, such as Digital Elevation Model (DEM), land use and soil classification data, etc., have been prepared using raster "Geographic Information System" (GIS) tools. The results of relevant statistical checks (Nash-Sutcliffe efficiency and R-squared value) indicate that the model simulates basin hydrology and its associated sediment dynamics reasonably well. This paper presents the

  1. A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines

    Science.gov (United States)

    Wang, Bin; Zhao, Haocen; Ye, Zhifeng

    2017-08-01

    Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretic model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into the synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model for the FMU has high accuracy and clear superiority over a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.

  2. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach for parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical evaluation of the posterior, which can then be approximated with an appropriate probability distribution (in this paper a lognormal distribution), proved to be the most appropriate uncertainty analysis. (author)
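
    A minimal sketch of the kind of Bayesian update mentioned above for a time-related component: a Poisson likelihood for k failures in T operating hours, a lognormal prior on the failure rate, a numerically evaluated posterior on a grid, and a lognormal approximation of that posterior by matching the moments of the log failure rate. The prior parameters and the plant data are made up for the example.

```python
import numpy as np

k, T = 2, 1.5e5                       # observed failures and operating time [h] (illustrative)
prior_median, prior_ef = 1e-5, 10.0   # lognormal prior: median and error factor (illustrative)
mu0 = np.log(prior_median)
sigma0 = np.log(prior_ef) / 1.645     # error factor = ratio of 95th to 50th percentile

lam = np.logspace(-8, -2, 2000)       # grid of failure rates [1/h]
prior = np.exp(-(np.log(lam) - mu0) ** 2 / (2 * sigma0 ** 2)) / lam   # lognormal kernel
likelihood = lam ** k * np.exp(-lam * T)                              # Poisson kernel
post = prior * likelihood
post /= np.trapz(post, lam)                                           # normalize numerically

# Lognormal approximation of the posterior by matching E[ln lambda] and Var[ln lambda]
mu_post = np.trapz(np.log(lam) * post, lam)
var_post = np.trapz((np.log(lam) - mu_post) ** 2 * post, lam)
print("posterior mean failure rate [1/h]:", np.trapz(lam * post, lam))
print("lognormal approximation: median = %.3e, EF = %.2f"
      % (np.exp(mu_post), np.exp(1.645 * np.sqrt(var_post))))
```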

  3. Parallel PDE-Based Simulations Using the Common Component Architecture

    International Nuclear Information System (INIS)

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-01-01

    The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component-based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general-purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications.

  4. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    . Their developments, however, are largely due to experiment-based trial-and-error approaches and, while they do not require validation, they can be time-consuming and resource-intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply...... a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model-based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved...... for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based...

  5. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  6. Redundancy allocation problem of a system with increasing failure rates of components based on Weibull distribution: A simulation-based optimization approach

    International Nuclear Information System (INIS)

    Guilani, Pedram Pourkarim; Azimi, Parham; Niaki, S.T.A.; Niaki, Seyed Armin Akhavan

    2016-01-01

    The redundancy allocation problem (RAP) is a useful method to enhance system reliability. In most works involving RAP, failure rates of the system components are assumed to follow either exponential or k-Erlang distributions. In real-world problems, however, many systems have components with increasing failure rates. This indicates that as time passes by, the failure rates of the system components increase in comparison to their initial failure rates. In this paper, the redundancy allocation problem of a series–parallel system with components having an increasing failure rate based on the Weibull distribution is investigated. An optimization method via simulation is proposed for modeling, and a genetic algorithm is developed to solve the problem. - Highlights: • The redundancy allocation problem of a series–parallel system is aimed. • Components possess an increasing failure rate based on Weibull distribution. • An optimization method via simulation is proposed for modeling. • A genetic algorithm is developed to solve the problem.
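
    The sketch below illustrates the simulation side of such an approach: the reliability of a small series-parallel system with Weibull (increasing-failure-rate) component lifetimes is estimated by Monte Carlo sampling for a few candidate redundancy allocations. The system structure and Weibull parameters are invented; in the paper a genetic algorithm searches over allocations using this kind of evaluation.

```python
import numpy as np

rng = np.random.default_rng(42)

def system_reliability(t, allocation, shapes, scales, n_sim=20000):
    """allocation[i] = number of parallel components in subsystem i of a series system."""
    ok = np.ones(n_sim, dtype=bool)
    for n_i, beta, eta in zip(allocation, shapes, scales):
        # Weibull lifetimes of the redundant components of subsystem i
        lifetimes = eta * rng.weibull(beta, size=(n_sim, n_i))
        ok &= lifetimes.max(axis=1) > t      # subsystem works if any redundant component survives
    return ok.mean()                          # all subsystems in series must work

shapes = [1.8, 2.2, 1.5]            # shape > 1 -> increasing failure rate
scales = [900.0, 1200.0, 700.0]     # characteristic lives [h]
for alloc in ([1, 1, 1], [2, 2, 2], [3, 2, 2]):
    print(alloc, "R(500 h) ~ %.4f" % system_reliability(500.0, alloc, shapes, scales))
```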

  7. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    Science.gov (United States)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composites (CMCs) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of the CMCs components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.

  8. Software Atom: An approach towards software components structuring to improve reusability

    Directory of Open Access Journals (Sweden)

    Muhammad Hussain Mughal

    2017-12-01

    Full Text Available The diversity of application domains calls for a sustainable classification scheme for the ever-growing software repository. Atomic reusable software components are articulated to improve software component reusability in a volatile industry. Numerous approaches to software classification have been proposed over past decades, each with limitations related to coupling and cohesion. In this paper, we propose a novel approach that constitutes software from radical functionalities to improve software reusability. We analyze the semantics of the elements in the Periodic Table used in chemistry to design our classification approach, present the approach as a tree-based classification to curtail the search-space complexity of the software repository, and refine it further with semantic search techniques. We developed a Global Unique Identifier (GUID) for indexing the functions and related components, and exploited the correlation between chemical elements and software elements to establish a one-to-one mapping between them. Inspired by the chemical periodic table, we propose a software periodic table (SPT) representing atomic software components extracted from real application software. Parsing and extraction over the SPT-classified repository tree enable users to compose their software by customizing the ingredients of their software requirements. The classified repository of software ingredients helps users convey their requirements to software engineers and enables requirements engineers to develop large-scale prototypes rapidly. Furthermore, the usability of the categorized repository can be predicted based on user feedback. The proposed repository will continuously evolve and be fine-tuned based on utilization, and the SPT will be gradually optimized with ant colony optimization techniques, ultimately helping to automate the software development process.

  9. Model-based Prognostics with Concurrent Damage Progression Processes

    Data.gov (United States)

    National Aeronautics and Space Administration — Model-based prognostics approaches rely on physics-based models that describe the behavior of systems and their components. These models must account for the several...

  10. Damage Detection of Refractory Based on Principal Component Analysis and Gaussian Mixture Model

    Directory of Open Access Journals (Sweden)

    Changming Liu

    2018-01-01

    Full Text Available The acoustic emission (AE) technique is a common approach to identifying the damage of refractories; however, this is a complex problem since as many as fifteen parameters are involved, which calls for effective data processing and classification algorithms to reduce the level of complexity. In this paper, experiments involving three-point bending tests of refractories were conducted and AE signals were collected. A new data processing method was developed that merges the parameters describing similar aspects of the damage and reduces the dimension. By means of principal component analysis (PCA) for dimension reduction, the fifteen related parameters can be reduced to two parameters. These parameters are linear combinations of the fifteen original parameters and were taken as the indexes for damage classification. Based on the proposed approach, the Gaussian mixture model was integrated with the Bayesian information criterion to group the AE signals into two damage categories, which accounted for 99% of all damage. Electronic microscope scanning of the refractories verified the two types of damage.
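
    The two statistical steps can be sketched with scikit-learn as below: the fifteen AE parameters are reduced to two principal components and a Gaussian mixture model is selected with the Bayesian information criterion. The random data stand in for real acoustic-emission hits, and the two synthetic clusters are only placeholders for the two damage categories.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Placeholder AE hits: 15 parameters per hit, two synthetic populations
X = np.vstack([rng.normal(0.0, 1.0, size=(300, 15)),
               rng.normal(3.0, 1.5, size=(120, 15))])

scores = PCA(n_components=2).fit_transform(X)          # 15 parameters -> 2 principal components

best_model, best_bic = None, np.inf
for k in range(1, 6):                                  # choose number of components by BIC
    gmm = GaussianMixture(n_components=k, random_state=0).fit(scores)
    bic = gmm.bic(scores)
    if bic < best_bic:
        best_model, best_bic = gmm, bic

labels = best_model.predict(scores)
print("selected number of damage categories:", best_model.n_components)
print("hits per category:", np.bincount(labels))
```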

  11. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  13. A Component Approach to Collaborative Scientific Software Development: Tools and Techniques Utilized by the Quantum Chemistry Science Application Partnership

    Directory of Open Access Journals (Sweden)

    Joseph P. Kenny

    2008-01-01

    Full Text Available Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of programming model is just one building block in a comprehensive approach to large-scale collaborative development which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges and highlighting the advantages which the CCA approach offers for collaborative development.

  14. MODELING OF INVESTMENT STRATEGIES IN STOCKS MARKETS: AN APPROACH FROM MULTI AGENT BASED SIMULATION AND FUZZY LOGIC

    Directory of Open Access Journals (Sweden)

    ALEJANDRO ESCOBAR

    2010-01-01

    Full Text Available This paper presents a simulation model of a complex system, in this case a financial market, using a Multi-Agent Based Simulation approach. Such a model takes into account micro-level aspects like the Continuous Double Auction mechanism, which is widely used within stock markets, as well as the reasoning of investor agents who participate looking for profits. To model such reasoning, several variables were considered, including general stock information such as profitability and volatility, but also agent-specific aspects such as risk tendency. All these variables are incorporated through a fuzzy logic approach that tries to faithfully represent the kind of reasoning of non-expert investors, including a stochastic component to model human factors.

  15. Secure wireless embedded systems via component-based design

    DEFF Research Database (Denmark)

    Hjorth, T.; Torbensen, R.

    2010-01-01

    This paper introduces the method secure-by-design as a way of constructing wireless embedded systems using component-based modeling frameworks. This facilitates design of secure applications through verified, reusable software. Following this method we propose a security framework with a secure c......, with full support for confidentiality, authentication, and integrity using keypairs. The approach has been demonstrated in a multi-platform home automation prototype that can remotely unlock a door using a PDA over the Internet....

  16. The Methodical Approach to Assessment of Enterprise Activity on the Basis of its Models, Based on the Balanced Scorecard

    Directory of Open Access Journals (Sweden)

    Minenkova Olena V.

    2017-12-01

    Full Text Available The article proposes a methodical approach to assessing the activity of an enterprise on the basis of its models, based on the balanced scorecard. The content is presented and the following components of the methodical approach are formed: tasks, input information, list of methods and models, as well as results. Implementation of this methodical approach improves management and increases the results of enterprise activity. The place of assessment models in the management of enterprise activity and the formation of managerial decisions has been defined. Recommendations on the operations of decision-making procedures to increase the efficiency of the enterprise have been provided.

  17. Rapid Screening of Active Components with an Osteoclastic Inhibitory Effect in Herba epimedii Using Quantitative Pattern–Activity Relationships Based on Joint-Action Models

    Directory of Open Access Journals (Sweden)

    Xiao-Yan Yuan

    2017-10-01

    Full Text Available Screening of bioactive components is important for modernization and quality control of herbal medicines, while the traditional bioassay-guided phytochemical approach is time-consuming and laborious. The presented study proposes a strategy for rapid screening of active components from herbal medicines. As a case study, the quantitative pattern–activity relationship (QPAR) between compounds and the osteoclastic inhibitory effect of Herba epimedii, a widely used herbal medicine in China, was investigated based on joint models. For model construction, standard mixtures data showed that the joint-action models are better than the partial least-squares (PLS) model. Then, the Good2bad value, which could reflect components’ importance based on Monte Carlo sampling, was coupled with the joint-action models for screening of active components. A compound (baohuoside I) and a component composed of compounds with retention times in the 6.9–7.9 min range were selected by our method. Their inhibition rates were higher than that of icariin, the key bioactive compound in Herba epimedii, which could inhibit osteoclast differentiation and bone resorption in a previous study. Meanwhile, the half-maximal effective concentration, namely the EC50 value, of the selected component was 7.54 μg/mL, much smaller than that of baohuoside I (77 μg/mL), which indicated that there is synergistic action between compounds in the selected component. The results clearly show our proposed method is simple and effective in screening the most-bioactive components and compounds, as well as drug-lead components, from herbal medicines.

  18. Two-component multistep direct reactions: A microscopic approach

    International Nuclear Information System (INIS)

    Koning, A.J.; Chadwick, M.B.

    1998-03-01

    The authors present two principal advances in multistep direct theory: (1) A two-component formulation of multistep direct reactions, where neutron and proton excitations are explicitly accounted for in the evolution of the reaction, for all orders of scattering. While this may at first seem to be a formidable task, especially for multistep processes where the number of possible reaction pathways becomes large in a two-component formalism, the authors show that this is not so -- a rather simple generalization of the FKK convolution expression 1 automatically generates these pathways. Such considerations are particularly relevant when simultaneously analyzing both neutron and proton emission spectra, which is always important since these processes represent competing decay channels. (2) A new, and fully microscopic, method for calculating MSD cross sections which does not make use of particle-hole state densities but instead directly calculates cross sections for all possible particle-hole excitations (again including an exact book-keeping of the neutron/proton type of the particle and hole at all stages of the reaction) determined from a simple non-interacting shell model. This is in contrast to all previous numerical approaches, which sample only a small number of such states to estimate the DWBA strength and utilize simple analytical formulae for the partial state density based on the equidistant spacing model. The new approach has been applied, along with theories for multistep compound, compound, and collective reactions, to analyze experimental emission spectra for a range of targets and energies. The authors show that the theory correctly accounts for double-differential nucleon spectra

  19. A Model-Driven Approach to e-Course Management

    Science.gov (United States)

    Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana

    2018-01-01

    This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…

  20. In silico model-based inference: a contemporary approach for hypothesis testing in network biology.

    Science.gov (United States)

    Klinke, David J

    2014-01-01

    Inductive inference plays a central role in the study of biological systems where one aims to increase their understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among components of the system. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims that integrates ideas drawn from high performance computing, Bayesian statistics, and chemical kinetics. © 2014 American Institute of Chemical Engineers.

  1. Constructing a justice model based on Sen's capability approach

    OpenAIRE

    Yüksel, Sevgi; Yuksel, Sevgi

    2008-01-01

    The thesis provides a possible justice model based on Sen's capability approach. For this goal, we first analyze the general structure of a theory of justice, identifying the main variables and issues. Furthermore, based on Sen (2006) and Kolm (1998), we look at 'transcendental' and 'comparative' approaches to justice and concentrate on the sufficiency condition for the comparative approach. Then, taking Rawls' theory of justice as a starting point, we present how Sen's capability approach em...

  2. A Novel Haptic Interactive Approach to Simulation of Surgery Cutting Based on Mesh and Meshless Models

    Science.gov (United States)

    Liu, Peter X.; Lai, Pinhua; Xu, Shaoping; Zou, Yanni

    2018-01-01

    At present, the majority of implemented virtual surgery simulation systems are based on either a mesh or a meshless strategy for soft tissue modelling. To take full advantage of both mesh and meshless models, a novel coupled soft tissue cutting model is proposed. Specifically, the reconstructed virtual soft tissue consists of two essential components: one is associated with a surface mesh that is convenient for surface rendering, and the other with internal meshless point elements that are used to calculate the force feedback during cutting. To combine the two components in a seamless way, virtual points are introduced. During the simulation of cutting, a Bezier curve is used to characterize a smooth and vivid incision on the surface mesh. At the same time, the deformation of internal soft tissue caused by the cutting operation can be treated as displacements of the internal point elements. Furthermore, we discuss and theoretically prove the stability and convergence of the proposed approach. Real biomechanical tests verified the validity of the introduced model, and the simulation experiments show that the proposed approach offers high computational efficiency and good visual effect, enabling cutting of soft tissue with high stability. PMID:29850006
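
    A small sketch of the Bezier-curve ingredient mentioned above: a cubic Bezier evaluated with de Casteljau's algorithm can describe a smooth incision path on the surface mesh. The mesh/meshless coupling itself is not reproduced, and the control points are arbitrary.

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve of any degree at parameter t in [0, 1]."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]   # repeated linear interpolation
    return pts[0]

# Control points of a hypothetical incision on the tissue surface (x, y, z)
ctrl = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.1), (3.0, 2.0, 0.2), (4.0, 0.0, 0.0)]
incision = np.array([de_casteljau(ctrl, t) for t in np.linspace(0.0, 1.0, 20)])
print(incision[:3])   # first few sample points along the smooth incision path
```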

  3. A Gaussian mixture copula model based localized Gaussian process regression approach for long-term wind speed prediction

    International Nuclear Information System (INIS)

    Yu, Jie; Chen, Kuilin; Mori, Junichi; Rashid, Mudassir M.

    2013-01-01

    Optimizing wind power generation and controlling the operation of wind turbines to efficiently harness the renewable wind energy is a challenging task due to the intermittency and unpredictable nature of wind speed, which has significant influence on wind power production. A new approach for long-term wind speed forecasting is developed in this study by integrating GMCM (Gaussian mixture copula model) and localized GPR (Gaussian process regression). The time series of wind speed is first classified into multiple non-Gaussian components through the Gaussian mixture copula model and then Bayesian inference strategy is employed to incorporate the various non-Gaussian components using the posterior probabilities. Further, the localized Gaussian process regression models corresponding to different non-Gaussian components are built to characterize the stochastic uncertainty and non-stationary seasonality of the wind speed data. The various localized GPR models are integrated through the posterior probabilities as the weightings so that a global predictive model is developed for the prediction of wind speed. The proposed GMCM–GPR approach is demonstrated using wind speed data from various wind farm locations and compared against the GMCM-based ARIMA (auto-regressive integrated moving average) and SVR (support vector regression) methods. In contrast to GMCM–ARIMA and GMCM–SVR methods, the proposed GMCM–GPR model is able to well characterize the multi-seasonality and uncertainty of wind speed series for accurate long-term prediction. - Highlights: • A novel predictive modeling method is proposed for long-term wind speed forecasting. • Gaussian mixture copula model is estimated to characterize the multi-seasonality. • Localized Gaussian process regression models can deal with the random uncertainty. • Multiple GPR models are integrated through Bayesian inference strategy. • The proposed approach shows higher prediction accuracy and reliability
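
    The sketch below mimics the modelling idea with an ordinary Gaussian mixture in place of the Gaussian mixture copula of the paper: wind speed samples are soft-assigned to mixture components, one Gaussian process regressor is trained per component, and the localized predictions are blended with the posterior component probabilities. The data, kernel and number of components are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 300).reshape(-1, 1)                     # time [days]
wind = 6 + 2 * np.sin(2 * np.pi * t.ravel() / 30) + rng.gamma(2.0, 0.5, t.shape[0])

# Soft assignment of wind speed samples to two non-Gaussian-looking regimes
gmm = GaussianMixture(n_components=2, random_state=0).fit(wind.reshape(-1, 1))
resp = gmm.predict_proba(wind.reshape(-1, 1))                       # posterior probabilities
labels = resp.argmax(axis=1)

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.5)
preds = np.zeros((len(t), 2))
for k in range(2):
    # localized GPR trained on the samples assigned to component k
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(t[labels == k], wind[labels == k])
    preds[:, k] = gpr.predict(t)

blended = (resp * preds).sum(axis=1)        # integrate localized models via posterior weights
print("RMSE of blended fit: %.3f m/s" % np.sqrt(np.mean((blended - wind) ** 2)))
```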

  4. Ontology-driven approach for describing industrial socio-cyberphysical systems’ components

    Directory of Open Access Journals (Sweden)

    Teslya Nikolay

    2018-01-01

    Full Text Available Nowadays, the concept of the industrial Internet of things is considered by researchers as the basis of Industry 4.0. Its use is aimed at creating a single information space that allows to unite all the components of production, starting from the processed raw materials to the interaction with suppliers and users of completed goods. Such a union will allow to change the established business processes of production to increase the customization of end products for the consumer and to reduce the costs for its producers. Each of the components is described using a digital twin, showing their main characteristics, important for production. The heterogeneity of these characteristics for each of the production levels makes it very difficult to exchange information between them. To solve the problem of interaction between individual components this paper proposes to use the ontological approach to model the components of industrial socio-cyberphysical systems. The paper considers four scenarios of interaction in the industrial Internet of things, based on which the upper-level ontology is formed, which describes the main components of industrial socio-cyberphysical systems and the connections between them.

  5. Modeling the degradation of nuclear components

    International Nuclear Information System (INIS)

    Stock, D.; Samanta, P.; Vesely, W.

    1993-01-01

    This paper describes component-level reliability models that use information on degradation to predict component reliability and which have been used to evaluate different maintenance and testing policies. The models are based on continuous-time Markov processes and are a generalization of reliability models currently used in Probabilistic Risk Assessment. An explanation of the models, the model parameters, and an example of how these models can be used to evaluate maintenance policies are discussed
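
    A sketch of the kind of continuous-time Markov degradation model referred to above: a component moves through good, degraded and failed states, with a repair transition representing maintenance, and the state probabilities at time t follow from the matrix exponential of the generator. All transition rates are invented for the example.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = good, 1 = degraded, 2 = failed
lam_d, lam_f, mu_r = 1e-3, 5e-4, 1e-2      # degradation, failure and repair rates [1/h]
Q = np.array([[-lam_d,            lam_d,    0.0],
              [  mu_r, -(mu_r + lam_f),   lam_f],
              [   0.0,              0.0,    0.0]])   # failed state is absorbing here

p0 = np.array([1.0, 0.0, 0.0])              # component starts in the good state
for t in (100.0, 1000.0, 5000.0):
    p_t = p0 @ expm(Q * t)                  # state probability vector at time t
    print(f"t = {t:6.0f} h  P(good, degraded, failed) = {np.round(p_t, 4)}")
```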

  6. An ontology-based approach for modelling architectural styles

    OpenAIRE

    Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm

    2007-01-01

    The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument. Architect...

  7. Model-Based Design Tools for Extending COTS Components To Extreme Environments, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation in this project is model-based design (MBD) tools for predicting the performance and useful life of commercial-off-the-shelf (COTS) components and...

  8. Stochastic Modeling Of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard

    2014-01-01

    reliable components are needed for wind turbines. In this paper, the focus is on the reliability of critical components in the drivetrain, such as bearings and shafts. High failure rates of these components imply a need for more reliable components. To estimate the reliability of these components, stochastic models...... are needed for initial defects and damage accumulation. In this paper, stochastic models are formulated considering some of the failure modes observed in these components. The models are based on theoretical considerations, manufacturing uncertainties, and size effects at different scales. It is illustrated how...

  9. A finite element method based microwave heat transfer modeling of frozen multi-component foods

    Science.gov (United States)

    Pitchai, Krishnamoorthy

    Microwave heating is fast and convenient, but is highly non-uniform. Non-uniform heating in microwave cooking affects not only food quality but also food safety. Most food industries develop microwavable food products based on a "cook-and-look" approach. This approach is time-consuming, labor-intensive and expensive and may not result in optimal food product design that assures food safety and quality. Design of microwavable food can be realized through a simulation model which describes the physical mechanisms of microwave heating in mathematical expressions. The objective of this study was to develop a microwave heat transfer model to predict spatial and temporal profiles of various heterogeneous foods such as multi-component meal (chicken nuggets and mashed potato), multi-component and multi-layered meal (lasagna), and multi-layered food with active packages (pizza) during microwave heating. A microwave heat transfer model was developed by solving electromagnetic and heat transfer equations using the finite element method in commercially available COMSOL Multiphysics v4.4 software. The microwave heat transfer model included detailed geometry of the cavity, phase change, and rotation of the food on the turntable. The predicted spatial surface temperature patterns and temporal profiles were validated against the experimental temperature profiles obtained using a thermal imaging camera and fiber-optic sensors. The predicted spatial surface temperature profile of different multi-component foods was in good agreement with the corresponding experimental profiles in terms of hot and cold spot patterns. The root mean square error values of temporal profiles ranged from 5.8 °C to 26.2 °C in chicken nuggets as compared to 4.3 °C to 4.7 °C in mashed potatoes. In frozen lasagna, root mean square error values at six locations ranged from 6.6 °C to 20.0 °C for 6 min of heating. A microwave heat transfer model was developed to include susceptor-assisted microwave heating of a
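
    A heavily simplified one-dimensional stand-in for the coupled model described above: microwave power deposition is approximated by Lambert's law (exponential decay from the exposed surface) and heat conduction is advanced with an explicit finite-difference scheme; phase change, Maxwell's equations and turntable rotation are all ignored. The material properties are rough food-like values, not the paper's.

```python
import numpy as np

L, nx = 0.03, 61                        # slab thickness [m], grid points
dx = L / (nx - 1)
k, rho, cp = 0.5, 1050.0, 3600.0        # conductivity, density, heat capacity (assumed)
alpha = k / (rho * cp)
P0, dp = 3.0e5, 0.008                   # surface power density [W/m3], penetration depth [m]
dt = 0.4 * dx ** 2 / alpha              # time step below the explicit stability limit

x = np.linspace(0.0, L, nx)
q = P0 * np.exp(-x / dp)                # Lambert-law volumetric heat source

T = np.full(nx, -18.0)                  # frozen initial temperature [degC]
t_end, t = 120.0, 0.0
while t < t_end:
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
    T += dt * (alpha * lap + q / (rho * cp))
    T[0], T[-1] = T[1], T[-2]           # crude insulated boundaries (no surface loss)
    t += dt
print("temperature range after 2 min: %.1f to %.1f degC" % (T.min(), T.max()))
```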

  10. Model-integrating software components engineering flexible software systems

    CERN Document Server

    Derakhshanmanesh, Mahdi

    2015-01-01

    In his study, Mahdi Derakhshanmanesh builds on the state of the art in modeling by proposing to integrate models into running software at the component level without translating them to code. Such so-called model-integrating software exploits all the advantages of models: models implicitly support a good separation of concerns, they are self-documenting and thus improve understandability and maintainability, and, in contrast to model-driven approaches, there is no longer a synchronization problem between the models and the code generated from them. Using model-integrating components, software will be

  11. Modified multiblock partial least squares path modeling algorithm with backpropagation neural networks approach

    Science.gov (United States)

    Yuniarto, Budi; Kurniawan, Robert

    2017-03-01

    PLS Path Modeling (PLS-PM) differs from covariance-based SEM in that PLS-PM uses an approach based on variance or components; therefore, PLS-PM is also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a method in PLS regression which can be used in PLS Path Modeling, known as Multiblock PLS Path Modeling (MBPLS-PM). This method uses an iterative procedure in its algorithm. This research aims to modify MBPLS-PM with a Back Propagation Neural Network approach. The result is that the MBPLS-PM algorithm can be modified using the Back Propagation Neural Network approach to replace the iterative process in the backward and forward steps used to obtain the matrix t and the matrix u in the algorithm. By modifying the MBPLS-PM algorithm using the Back Propagation Neural Network approach, the model parameters obtained are not significantly different from the model parameters obtained by the original MBPLS-PM algorithm.
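
    For reference, the sketch below shows the iterative step that the modified algorithm replaces: one NIPALS round of a two-block PLS, producing the score vectors t (for X) and u (for Y) that the paper instead estimates with a backpropagation neural network. The data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
Y = 0.5 * X[:, :2] @ rng.standard_normal((2, 3)) + 0.1 * rng.standard_normal((100, 3))

u = Y[:, 0].copy()                                   # initialize u with a column of Y
for _ in range(100):                                 # iterative forward/backward step (NIPALS)
    w = X.T @ u / (u @ u); w /= np.linalg.norm(w)    # X block weights
    t = X @ w                                        # X block scores
    c = Y.T @ t / (t @ t); c /= np.linalg.norm(c)    # Y block weights
    u_new = Y @ c                                    # Y block scores
    if np.linalg.norm(u_new - u) < 1e-10:            # convergence of the score vector u
        u = u_new
        break
    u = u_new
print("correlation between scores t and u: %.3f" % np.corrcoef(t, u)[0, 1])
```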

  12. Event-based model diagnosis of rainfall-runoff model structures

    International Nuclear Information System (INIS)

    Stanzel, P.

    2012-01-01

    The objective of this research is a comparative evaluation of different rainfall-runoff model structures. Comparative model diagnostics facilitate the assessment of strengths and weaknesses of each model. The application of multiple models allows an analysis of simulation uncertainties arising from the selection of model structure, as compared with effects of uncertain parameters and precipitation input. Four different model structures, including conceptual and physically based approaches, are compared. In addition to runoff simulations, results for soil moisture and the runoff components of overland flow, interflow and base flow are analysed. Catchment runoff is simulated satisfactorily by all four model structures and shows only minor differences. Systematic deviations from runoff observations provide insight into model structural deficiencies. While physically based model structures capture some single runoff events better, they do not generally outperform conceptual model structures. Contributions to uncertainty in runoff simulations stemming from the choice of model structure show similar dimensions to those arising from parameter selection and the representation of precipitation input. Variations in precipitation mainly affect the general level and peaks of runoff, while different model structures lead to different simulated runoff dynamics. Large differences between the four analysed models are detected for simulations of soil moisture and, even more pronounced, runoff components. Soil moisture changes are more dynamical in the physically based model structures, which is in better agreement with observations. Streamflow contributions of overland flow are considerably lower in these models than in the more conceptual approaches. Observations of runoff components are rarely made and are not available in this study, but are shown to have high potential for an effective selection of appropriate model structures (author) [de

  13. The artifacts of component-based development

    International Nuclear Information System (INIS)

    Rizwan, M.; Qureshi, J.; Hayat, S.A.

    2007-01-01

    Component based development idea was floated in a conference name Mass Produced Software Components in 1968 (1). Since then engineering and scientific libraries are developed to reuse the previously developed functions. This concept is now widely used in SW development as component based development (CBD). Component-based software engineering (CBSE) is used to develop/ assemble software from existing components (2). Software developed using components is called component where (3). This paper presents different architectures of CBD such as Active X, common object request broker architecture (CORBA), remote method invocation (RMI) and simple object access protocol (SOAP). The overall objective of this paper is to support the practice of CBD by comparing its advantages and disadvantages. This paper also evaluates object oriented process model to adapt it for CBD. (author)

  14. Structural health monitoring of power plant components based on a local temperature measurement concept

    International Nuclear Information System (INIS)

    Rudolph, Juergen; Bergholz, S.; Hilpert, R.; Jouan, B.; Goetz, A.

    2012-01-01

    The fatigue assessment of power plant components based on fatigue monitoring approaches is an essential part of the integrity concept and modern lifetime management. It is comparable to structural health monitoring approaches in other engineering fields. The methods of fatigue evaluation of nuclear power plant components based on realistic thermal load data measured on the plant are addressed. In this context, the Fast Fatigue Evaluation (FFE) and Detailed Fatigue Calculation (DFC) of nuclear power plant components are parts of the three-staged approach to lifetime assessment and lifetime management of the AREVA Fatigue Concept (AFC). The three stages Simplified Fatigue Estimation (SFE), Fast Fatigue Evaluation (FFE) and Detailed Fatigue Calculation (DFC) are characterized by increasing calculation effort and decreasing degree of conservatism. Their application is case-dependent. The quality of the fatigue lifetime assessment essentially depends on the fatigue model assumptions on one hand and on the load data as the basic input on the other. In the case of nuclear power plant components, thermal transient loading is the most fatigue-relevant. Usual global fatigue monitoring approaches rely on measured data from plant instrumentation. As an extension, the application of a local fatigue monitoring strategy (to be described in detail within the scope of this paper) paves the way for continuously delivering (nowadays at a frequency of 1 Hz) realistic load data at the fatigue-relevant locations. Methods of qualified processing of these data are discussed in detail. In particular, the processing of arbitrary operational load sequences and the derivation of representative model transients are discussed. This approach related to realistic load-time histories is principally applicable to all fatigue-relevant components and ensures a realistic fatigue evaluation. (orig.)

  15. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, the full Markov chains are used to achieve optimal solutions. Secondly, the myopic method together with the particle filter (PF) and the challenge match algorithm are used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  16. Experimental oligopolies modeling: A dynamic approach based on heterogeneous behaviors

    Science.gov (United States)

    Cerboni Baiardi, Lorenzo; Naimzada, Ahmad K.

    2018-05-01

    Among behavioral rules, imitation-based heuristics have received special attention in economics (see [14] and [12]). In particular, imitative behavior is considered in order to understand the evidence arising from experimental oligopolies, which reveals that the Cournot-Nash equilibrium does not emerge as the unique outcome and that a substantial component of production at the competitive level is observed (see e.g. [1,3,9] or [7,10]). Building on the pioneering approach of [2], we construct a dynamical model of linear oligopolies in which the heterogeneous decision mechanisms of the players are made explicit. In particular, we consider two different types of quantity-setting players, characterized by different decision mechanisms, that coexist and operate simultaneously: agents that adaptively adjust their choices in the direction that increases their profit interact with imitator agents. The latter use a particular form of the proportional imitation rule that takes the awareness of strategic interactions into account. It is noteworthy that the Cournot-Nash outcome is a stationary state of our model. Our thesis is that the chaotic dynamics arising from a dynamical model with heterogeneous players can qualitatively reproduce the outcomes of experimental oligopolies.
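
    As a purely illustrative sketch of such heterogeneous quantity dynamics, the following Python fragment iterates a linear Cournot duopoly in which one agent follows a profit-gradient adjustment and the other a simple profit-driven imitation rule. This is a stand-in specification with hypothetical parameters, not the decision rules of the cited model.

```python
import numpy as np

# Illustrative only: linear Cournot duopoly with one gradient-adjusting agent
# and one imitator. Parameters a, b, c, k, lam are hypothetical.
a, b, c = 10.0, 1.0, 1.0        # inverse demand p = a - b*(q1 + q2), unit cost c
k, lam = 0.2, 0.5               # adjustment speed and imitation intensity

def profit(qi, qj):
    return (a - b * (qi + qj) - c) * qi

q1, q2 = 1.0, 3.0
for _ in range(200):
    grad1 = a - c - 2 * b * q1 - b * q2          # marginal profit of agent 1
    q1_new = max(q1 + k * q1 * grad1, 0.0)       # adaptive (gradient) adjustment
    if profit(q2, q1) < profit(q1, q2):          # imitate only a more successful rival
        q2_new = q2 + lam * (q1 - q2)
    else:
        q2_new = q2
    q1, q2 = q1_new, q2_new

print(q1, q2, "Cournot-Nash quantity:", (a - c) / (3 * b))
```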

  17. Towards a 3d Spatial Urban Energy Modelling Approach

    Science.gov (United States)

    Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.

    2013-09-01

    Today's needs to reduce the environmental impact of energy use impose dramatic changes on energy infrastructure and existing demand patterns (e.g. buildings) corresponding to their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and equally a high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments thus require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess solar potential and heat energy demand of residential buildings, which enables cities to target building refurbishment potentials. Distributed energy systems require innovative modelling techniques where individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated, where heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs holds different advantages for both approaches. On one hand, energy system models can be enhanced with high resolution data from 3D city models and their semantic relations. Furthermore, they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones) to provide an integrated approach. On the other hand, 3D models can benefit from more detailed system description of energy infrastructure, representing dynamic phenomena and high resolution models for energy use at component level. The proposed modelling strategies

  18. Structured Performance Analysis for Component Based Systems

    OpenAIRE

    Salmi , N.; Moreaux , Patrice; Ioualalen , M.

    2012-01-01

    International audience; The Component Based System (CBS) paradigm is now largely used to design software systems. In addition, performance and behavioural analysis remains a required step for the design and the construction of efficient systems. This is especially the case of CBS, which involve interconnected components running concurrent processes. % This paper proposes a compositional method for modeling and structured performance analysis of CBS. Modeling is based on Stochastic Well-formed...

  19. Bridging process-based and empirical approaches to modeling tree growth

    Science.gov (United States)

    Harry T. Valentine; Annikki Makela; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  20. Research on a component-based software development approach in the RFID field

    Institute of Scientific and Technical Information of China (English)

    徐孟娟; 杨威

    2012-01-01

    A component-based software development approach is applied to the RFID field. Based on the analysis methods of domain engineering, the variable requirements within the RFID domain are encapsulated, isolated and abstracted; the RFID architecture is analysed and an RFID software component model is extracted. For component management, the classification of RFID components is studied and a faceted classification method is proposed; the facets of the RFID software component classification and the term space of each facet are described in detail.

  1. Raster-Based Approach to Solar Pressure Modeling

    Science.gov (United States)

    Wright, Theodore W. II

    2013-01-01

    An algorithm has been developed to take advantage of the graphics processing hardware in modern computers to efficiently compute high-fidelity solar pressure forces and torques on spacecraft, taking into account the possibility of self-shading due to the articulation of spacecraft components such as solar arrays. The process is easily extended to compute other results that depend on three-dimensional attitude analysis, such as solar array power generation or free molecular flow drag. The impact of photons upon a spacecraft introduces small forces and moments. The magnitude and direction of the forces depend on the material properties of the spacecraft components being illuminated. The parts of the components being lit depends on the orientation of the craft with respect to the Sun, as well as the gimbal angles for any significant moving external parts (solar arrays, typically). Some components may shield others from the Sun. The purpose of this innovation is to enable high-fidelity computation of solar pressure and power generation effects of illuminated portions of spacecraft, taking self-shading from spacecraft attitude and movable components into account. The key idea in this innovation is to compute results dependent upon complicated geometry by using an image to break the problem into thousands or millions of sub-problems with simple geometry, and then the results from the simpler problems are combined to give high-fidelity results for the full geometry. This process is performed by constructing a 3D model of a spacecraft using an appropriate computer language (OpenGL), and running that model on a modern computer's 3D accelerated video processor. This quickly and accurately generates a view of the model (as shown on a computer screen) that takes rotation and articulation of spacecraft components into account. When this view is interpreted as the spacecraft as seen by the Sun, then only the portions of the craft visible in the view are illuminated. The view as
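
    As a loose illustration of the per-pixel accumulation idea (not the flight code), the following Python sketch assumes that a view of the spacecraft as seen from the Sun has already been rendered, e.g. with OpenGL, into a visibility mask and a buffer of unit surface normals; the optical constants and pixel area are hypothetical, and the reflection model is deliberately simplified.

```python
import numpy as np

# Sketch of combining per-pixel sub-problems into a net solar pressure force.
# Assumes a rendered view from the Sun: 'visible' marks pixels showing lit
# surface, 'normals' holds unit surface normals per pixel. Values are synthetic.
P = 4.56e-6                           # solar radiation pressure at 1 AU [N/m^2]
pixel_area = 1e-4                     # projected area represented by one pixel [m^2]
sun_dir = np.array([0.0, 0.0, -1.0])  # direction of photon travel
spec = 0.3                            # assumed specular reflectivity

rng = np.random.default_rng(0)
visible = rng.random((64, 64)) > 0.5
normals = rng.normal(size=(64, 64, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

cos_t = np.clip(-(normals @ sun_dir), 0.0, 1.0)   # incidence cosine per pixel
# absorbed photons push along sun_dir; specular reflection pushes along -normal
dF = (pixel_area * P) * ((1.0 - spec) * sun_dir[None, None, :]
                         + 2.0 * spec * cos_t[..., None] * (-normals))
force = dF[visible].sum(axis=0)
print("net solar pressure force [N]:", force)
```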

  2. Hypercompetitive Environments: An Agent-based model approach

    Science.gov (United States)

    Dias, Manuel; Araújo, Tanya

    Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both for individual firms and for management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows business strategies to be modeled from a bottom-up perspective (understood as resulting from repeated and local interaction of economic agents) without disregarding the consequences of the business strategies themselves for the individual behavior of enterprises and the emergence of interaction patterns between firms and management environments. Agent-based models are the leading approach in this attempt.

  3. Model of the fine-grain component of martian soil based on Viking lander data

    International Nuclear Information System (INIS)

    Nussinov, M.D.; Chernyak, Y.B.; Ettinger, J.L.

    1978-01-01

    A model of the fine-grain component of the Martian soil is proposed. The model is based on well-known physical phenomena, and enables an explanation of the evolution of the gases released in the GEX (gas exchange experiments) and GCMS (gas chromatography-mass spectrometer experiments) of the Viking landers. (author)

  4. Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.

    Science.gov (United States)

    Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J

    2018-03-01

    Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
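
    As a rough illustration of the serial PCA scheme summarized above, the following Python sketch (synthetic data, arbitrary component counts, scikit-learn assumed available) extracts linear PCs first and then kernel PCs from the residual subspace; the monitoring statistics and fault-identification step of the paper are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

# Minimal sketch of serial PCA (SPCA): linear PCA first, then kernel PCA on the
# residual subspace. Data, dimensions and kernel settings are hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
X[:, 3] = X[:, 0] ** 2 + 0.1 * rng.normal(size=500)   # inject a nonlinear relation

# Step 1: linear PCA extracts linear PCs and defines the residual subspace (RS)
pca = PCA(n_components=4).fit(X)
linear_scores = pca.transform(X)
residuals = X - pca.inverse_transform(linear_scores)

# Step 2: kernel PCA extracts nonlinear PCs from the residuals
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1).fit(residuals)
nonlinear_scores = kpca.transform(residuals)

# both feature sets would feed the T^2/SPE-type monitoring statistics
features = np.hstack([linear_scores, nonlinear_scores])
print(features.shape)
```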

  5. An Intuitionistic Fuzzy Methodology for Component-Based Software Reliability Optimization

    DEFF Research Database (Denmark)

    Madsen, Henrik; Grigore, Albeanu; Popenţiuvlǎdicescu, Florin

    2012-01-01

    Component-based software development is the current methodology facilitating agility in project management, software reuse in design and implementation, promoting quality and productivity, and increasing the reliability and performability. This paper illustrates the usage of intuitionistic fuzzy...... degree approach in modelling the quality of entities in imprecise software reliability computing in order to optimize management results. Intuitionistic fuzzy optimization algorithms are proposed to be used for complex software systems reliability optimization under various constraints....

  6. Scientific data base for safeguards components

    International Nuclear Information System (INIS)

    Hall, R.C.; Jones, R.D.

    1978-01-01

    The need to store and maintain vast amounts of data and the desire to avoid nonfunctional redundancy have provided an impetus for modern data base technology. Large-scale data base management systems (DBMS) have emerged during the past two decades evolving from earlier generalized file processing systems. This evolution has primarily involved certain business applications (e.g., production control, payroll, order entry) because of their high volume data processing characterization. Current data base technology, however, is becoming increasingly concerned with generality. Many diverse applications, including scientific ones, are benefiting from the generalized data base management software which has resulted. The concept of a data base management system is examined. The three common models which have been proposed for organizing data and relationships are identified: the network model, the hierarchical model, and the relational model. A specific implementation using a hierarchical data base management system is described. This is the data base for safeguards components which has been developed at Sandia Laboratories using the System 2000 developed by MRI Systems Corporation. Its organization, components, and functions are presented. The various interfaces it permits to user programs (e.g., safeguards automated facility evaluation software) and interactive terminal users are described

  7. A model for determining condition-based maintenance policies for deteriorating multi-component systems

    NARCIS (Netherlands)

    Hontelez, J.A.M.; Wijnmalen, D.J.D.

    1993-01-01

    We discuss a method to determine strategies for preventive maintenance of systems consisting of gradually deteriorating components. A model has been developed to compute not only the range of conditions inducing a repair action, but also inspection moments based on the last known condition value so

  8. Fault Diagnosis approach based on a model-based reasoner and a functional designer for a wind turbine. An approach towards self-maintenance

    Science.gov (United States)

    Echavarria, E.; Tomiyama, T.; van Bussel, G. J. W.

    2007-07-01

    The objective of this on-going research is to develop a design methodology to increase the availability for offshore wind farms, by means of an intelligent maintenance system capable of responding to faults by reconfiguring the system or subsystems, without increasing service visits, complexity, or costs. The idea is to make use of the existing functional redundancies within the system and sub-systems to keep the wind turbine operational, even at a reduced capacity if necessary. Re-configuration is intended to be a built-in capability to be used as a repair strategy, based on these existing functionalities provided by the components. The possible solutions can range from using information from adjacent wind turbines, such as wind speed and direction, to setting up different operational modes, for instance re-wiring, re-connecting, changing parameters or control strategy. The methodology described in this paper is based on qualitative physics and consists of a fault diagnosis system based on a model-based reasoner (MBR), and on a functional redundancy designer (FRD). Both design tools make use of a function-behaviour-state (FBS) model. A design methodology based on the re-configuration concept to achieve self-maintained wind turbines is an interesting and promising approach to reduce stoppage rate, failure events, maintenance visits, and to maintain energy output possibly at reduced rate until the next scheduled maintenance.

  9. Fault Diagnosis approach based on a model-based reasoner and a functional designer for a wind turbine. An approach towards self-maintenance

    International Nuclear Information System (INIS)

    Echavarria, E; Tomiyama, T; Bussel, G J W van

    2007-01-01

    The objective of this on-going research is to develop a design methodology to increase the availability for offshore wind farms, by means of an intelligent maintenance system capable of responding to faults by reconfiguring the system or subsystems, without increasing service visits, complexity, or costs. The idea is to make use of the existing functional redundancies within the system and sub-systems to keep the wind turbine operational, even at a reduced capacity if necessary. Re-configuration is intended to be a built-in capability to be used as a repair strategy, based on these existing functionalities provided by the components. The possible solutions can range from using information from adjacent wind turbines, such as wind speed and direction, to setting up different operational modes, for instance re-wiring, re-connecting, changing parameters or control strategy. The methodology described in this paper is based on qualitative physics and consists of a fault diagnosis system based on a model-based reasoner (MBR), and on a functional redundancy designer (FRD). Both design tools make use of a function-behaviour-state (FBS) model. A design methodology based on the re-configuration concept to achieve self-maintained wind turbines is an interesting and promising approach to reduce stoppage rate, failure events, maintenance visits, and to maintain energy output possibly at reduced rate until the next scheduled maintenance

  10. A model-based prognostic approach to predict interconnect failure using impedance analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Dae Il; Yoon, Jeong Ah [Dept. of System Design and Control Engineering. Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2016-10-15

    The reliability of electronic assemblies is largely affected by the health of interconnects, such as solder joints, which provide mechanical, electrical and thermal connections between circuit components. During field lifecycle conditions, interconnects are often subjected to a DC open circuit, one of the most common interconnect failure modes, due to cracking. An interconnect damaged by cracking is sometimes extremely hard to detect when it is a part of a daisy-chain structure, neighbored by other healthy interconnects that have not yet cracked. This cracked interconnect may seem to provide a good electrical contact due to the compressive load applied by the neighboring healthy interconnects, but it can cause the occasional loss of electrical continuity under operational and environmental loading conditions in field applications. Thus, cracked interconnects can lead to the intermittent failure of electronic assemblies and eventually to permanent failure of the product or the system. This paper introduces a model-based prognostic approach to quantitatively detect and predict interconnect failure using impedance analysis and particle filtering. Impedance analysis was previously reported as a sensitive means of detecting incipient changes at the surface of interconnects, such as cracking, based on the continuous monitoring of RF impedance. To predict the time to failure, particle filtering was used as a prognostic approach using the Paris model to address the fatigue crack growth. To validate this approach, mechanical fatigue tests were conducted with continuous monitoring of RF impedance while degrading the solder joints under test due to fatigue cracking. The test results showed the RF impedance consistently increased as the solder joints were degraded due to the growth of cracks, and particle filtering predicted times to failure of the interconnects close to their actual times to failure, based on the early sensitivity of RF impedance.
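
    The following Python sketch illustrates the general prognostic pattern described above: Paris-law crack growth tracked by a particle filter, with an impedance-like measurement assumed proportional to crack size. All constants, noise levels and the measurement model are hypothetical, not those of the cited study.

```python
import numpy as np

# Sketch only: Paris-law crack growth + particle filter + RUL prediction.
# Constants and the impedance-to-crack-size measurement model are assumed.
rng = np.random.default_rng(1)

def paris_step(a, C, m=3.0, dN=1.0):
    # da/dN = C * (dK)^m with a simple stress-intensity proxy dK ~ sqrt(a)
    return a + C * np.sqrt(a) ** m * dN

# synthetic degradation and noisy "impedance" measurements (assumed ~ crack size)
true_a, meas = 0.1, []
for _ in range(100):
    true_a = paris_step(true_a, C=2e-2)
    meas.append(true_a + rng.normal(scale=0.02))

# particle filter over crack size and the uncertain Paris coefficient C
n = 2000
part_a = np.full(n, 0.1)
part_C = rng.uniform(5e-3, 5e-2, size=n)
for z in meas:
    part_a = paris_step(part_a, part_C)
    w = np.exp(-0.5 * ((z - part_a) / 0.02) ** 2)
    idx = rng.choice(n, size=n, p=w / w.sum())       # resample by likelihood
    part_a, part_C = part_a[idx], part_C[idx]

# remaining useful life: propagate particles until a failure threshold is reached
threshold, rul, a_pred = 1.0, np.zeros(n), part_a.copy()
while np.any(a_pred < threshold) and rul.max() < 1000:
    grow = a_pred < threshold
    a_pred[grow] = paris_step(a_pred[grow], part_C[grow])
    rul[grow] += 1
print("median predicted cycles to failure:", np.median(rul))
```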

  11. Optimal condition-based maintenance decisions for systems with dependent stochastic degradation of components

    International Nuclear Information System (INIS)

    Hong, H.P.; Zhou, W.; Zhang, S.; Ye, W.

    2014-01-01

    Components in engineered systems are subjected to stochastic deterioration due to the operating environmental conditions, and the uncertainty in material properties. The components need to be inspected and possibly replaced based on preventive or failure replacement criteria to provide the intended and safe operation of the system. In the present study, we investigate the influence of dependent stochastic degradation of multiple components on the optimal maintenance decisions. We use copula to model the dependent stochastic degradation of components, and formulate the optimal decision problem based on the minimum expected cost rule and the stochastic dominance rules. The latter is used to cope with decision maker's risk attitude. We illustrate the developed probabilistic analysis approach and the influence of the dependency of the stochastic degradation on the preferred decisions through numerical examples
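
    A minimal Python sketch of the copula idea follows: a Gaussian copula couples the stochastic degradation increments of two components, and Monte Carlo simulation of the resulting dependent failure times is the kind of ingredient that feeds the expected-cost comparison of maintenance policies. The marginal increment distributions, correlation, thresholds and horizon are hypothetical.

```python
import numpy as np
from scipy import stats

# Gaussian copula coupling the degradation increments of two components,
# each with gamma-distributed increments. All parameters are assumed.
rng = np.random.default_rng(2)
rho, steps, n_sim = 0.7, 50, 10_000
cov = np.array([[1.0, rho], [rho, 1.0]])

failure_times = []
for _ in range(n_sim):
    deg = np.zeros(2)
    t_fail = steps
    for t in range(steps):
        z = rng.multivariate_normal(np.zeros(2), cov)   # dependent normals
        u = stats.norm.cdf(z)                           # copula step: to uniforms
        inc = stats.gamma.ppf(u, a=0.5, scale=0.2)      # to gamma increments
        deg += inc
        if np.any(deg > 3.0):                           # degradation threshold
            t_fail = t
            break
    failure_times.append(t_fail)

# such simulations feed the expected-cost comparison of inspection/replacement rules
print("mean time to first component failure:", np.mean(failure_times))
```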

  12. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performances of the proposed approach are analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  13. Model-based internal wave processing

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J.V.; Chambers, D.H.

    1995-06-09

    A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Vaisala frequency profile, etc.) developed from the solution of the associated boundary value problem, as well as the horizontal velocity components. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves.

  14. Mobile Agent-Based Software Systems Modeling Approaches: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Aissam Belghiat

    2016-06-01

    Full Text Available Mobile agent-based applications are a special type of software system which takes advantage of mobile agents in order to provide a new beneficial paradigm for solving multiple complex problems in several fields and areas such as network management, e-commerce, e-learning, etc. Nevertheless, we notice a lack of real applications based on this paradigm and a lack of serious evaluations of their modeling approaches. Hence, this paper provides a comparative study of modeling approaches for mobile agent-based software systems. The objective is to give the reader an overview and a thorough understanding of the work that has been done and where the gaps in the research are.

  15. Models for integrated components coupled with their EM environment

    NARCIS (Netherlands)

    Ioan, D.; Schilders, W.H.A.; Ciuprina, G.; Meijs, van der N.P.; Schoenmaker, W.

    2008-01-01

    Abstract: Purpose – The main aim of this study is the modelling of the interaction of on-chip components with their electromagnetic environment. Design/methodology/approach – The integrated circuit is decomposed in passive and active components interconnected by means of terminals and connectors

  16. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims...... analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...

  17. Ranking and selection of commercial off-the-shelf using fuzzy distance based approach

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2015-06-01

    Full Text Available There has been tremendous growth in the use of the component-based software engineering (CBSE) approach for the development of software systems. The selection of the best-suited COTS components that fulfil the necessary requirements for the development of software has become a major challenge for software developers. The complexity of the optimal selection problem increases with an increase in alternative potential COTS components and the corresponding selection criteria. In this research paper, the problem of ranking and selection of Data Base Management System (DBMS) components is modeled as a multi-criteria decision-making problem. A ‘Fuzzy Distance Based Approach (FDBA)’ method is proposed for the optimal ranking and selection of DBMS COTS components of an e-payment system based on 14 selection criteria grouped under three major categories, i.e. ‘Vendor Capabilities’, ‘Business Issues’ and ‘Cost’. The results of this method are compared with those of the Analytical Hierarchy Process (AHP), a typical multi-criteria decision-making approach. The proposed methodology is explained with an illustrative example.
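
    The paper's exact FDBA aggregation is not reproduced here; as a stand-in illustration of distance-based ranking over weighted, normalized criteria, the following Python sketch scores alternatives by closeness to an ideal solution (a TOPSIS-style computation). The score matrix, weights and criterion types are hypothetical.

```python
import numpy as np

# Illustrative stand-in for distance-based COTS ranking (not the paper's FDBA):
# alternatives are ranked by closeness to an ideal solution over weighted,
# vector-normalized criteria. Scores, weights and criterion types are assumed.
scores = np.array([            # rows: DBMS alternatives, cols: criteria
    [7, 8, 6, 5],
    [9, 6, 7, 8],
    [6, 7, 9, 6],
], dtype=float)
weights = np.array([0.4, 0.2, 0.2, 0.2])
benefit = np.array([True, True, True, False])   # last criterion is cost-type

norm = scores / np.linalg.norm(scores, axis=0)  # normalize each criterion column
v = norm * weights
ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
d_plus = np.linalg.norm(v - ideal, axis=1)      # distance to the ideal solution
d_minus = np.linalg.norm(v - anti, axis=1)      # distance to the anti-ideal
closeness = d_minus / (d_plus + d_minus)
print("ranking (best first):", np.argsort(-closeness))
```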

  18. A security modeling approach for web-service-based business processes

    DEFF Research Database (Denmark)

    Jensen, Meiko; Feja, Sven

    2009-01-01

    The rising need for security in SOA applications requires better support for management of non-functional properties in web-based business processes. Here, the model-driven approach may provide valuable benefits in terms of maintainability and deployment. Apart from modeling the pure functionality of a process, the consideration of security properties at the level of a process model is a promising approach. In this work-in-progress paper we present an extension to the ARIS SOA Architect that is capable of modeling security requirements as a separate security model view. Further we provide a transformation that automatically derives WS-SecurityPolicy-conformant security policies from the process model, which in conjunction with the generated WS-BPEL processes and WSDL documents provides the ability to deploy and run the complete security-enhanced process based on Web Service technology.

  19. Query Language for Location-Based Services: A Model Checking Approach

    Science.gov (United States)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach is unique among existing research because it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and will be discussed.

  20. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    Science.gov (United States)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits the definition of the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.

  1. Dual-component model of respiratory motion based on the periodic autoregressive moving average (periodic ARMA) method

    International Nuclear Information System (INIS)

    McCall, K C; Jeraj, R

    2007-01-01

    A new approach to the problem of modelling and predicting respiration motion has been implemented. This is a dual-component model, which describes the respiration motion as a non-periodic time series superimposed onto a periodic waveform. A periodic autoregressive moving average algorithm has been used to define a mathematical model of the periodic and non-periodic components of the respiration motion. The periodic components of the motion were found by projecting multiple inhale-exhale cycles onto a common subspace. The component of the respiration signal that is left after removing this periodicity is a partially autocorrelated time series and was modelled as an autoregressive moving average (ARMA) process. The accuracy of the periodic ARMA model with respect to fluctuation in amplitude and variation in length of cycles has been assessed. A respiration phantom was developed to simulate the inter-cycle variations seen in free-breathing and coached respiration patterns. At ±14% variability in cycle length and maximum amplitude of motion, the prediction errors were 4.8% of the total motion extent for a 0.5 s ahead prediction, and 9.4% at 1.0 s lag. The prediction errors increased to 11.6% at 0.5 s and 21.6% at 1.0 s when the respiration pattern had ±34% variations in both these parameters. Our results have shown that the accuracy of the periodic ARMA model is more strongly dependent on the variations in cycle length than the amplitude of the respiration cycles
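
    As a simplified illustration of the dual-component idea (not the authors' exact common-subspace projection), the Python sketch below removes an average cycle from a synthetic respiration-like signal, fits an ARMA model to the non-periodic residual with statsmodels, and predicts a short horizon ahead. The signal, cycle length and ARMA orders are hypothetical.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Dual-component sketch: mean periodic waveform + ARMA residual (illustrative).
rng = np.random.default_rng(3)
period, n_cycles = 40, 30
t = np.arange(period * n_cycles)
signal = np.sin(2 * np.pi * t / period) + 0.05 * np.cumsum(rng.normal(size=t.size))

# periodic component: average over aligned cycles (a simple proxy for the
# common-subspace projection used in the paper)
cycles = signal.reshape(n_cycles, period)
mean_cycle = cycles.mean(axis=0)
residual = signal - np.tile(mean_cycle, n_cycles)

# non-periodic component modelled as ARMA(2,1) on the residual series
arma = ARIMA(residual, order=(2, 0, 1)).fit()
ahead = 5                                    # e.g. ~0.5 s at a given sampling rate
pred_residual = arma.forecast(steps=ahead)

# combined prediction: continuation of the periodic waveform + predicted residual
pred = mean_cycle[:ahead] + pred_residual    # the next cycle starts at phase 0 here
print(pred)
```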

  2. Research on The Construction of Flexible Multi-body Dynamics Model based on Virtual Components

    Science.gov (United States)

    Dong, Z. H.; Ye, X.; Yang, F.

    2018-05-01

    Focusing on the harsh operating conditions of space manipulators, which cannot tolerate relatively large collision momentum, this paper proposes a new concept and technology called soft-contact technology. In order to solve the problem of collision dynamics of the flexible multi-body system introduced by this technology, this paper also proposes the concepts of virtual components and virtual hinges, constructs a flexible dynamic model based on virtual components, and studies its solution. On this basis, NX is used to model and simulate the space manipulator in three different modes for comparison. The results show that a model combining multiple rigid bodies, flexible-body hinges and controllable damping can effectively limit the amplitude of the force and torque caused by collision with the target satellite.

  3. Imprecise system reliability and component importance based on survival signature

    International Nuclear Information System (INIS)

    Feng, Geng; Patelli, Edoardo; Beer, Michael; Coolen, Frank P.A.

    2016-01-01

    The concept of the survival signature has recently attracted increasing attention for performing reliability analysis on systems with multiple types of components. It opens a new pathway for a structured approach with high computational efficiency based on a complete probabilistic description of the system. In practical applications, however, some of the parameters of the system might not be defined completely due to limited data, which implies the need to take imprecision of component specifications into account. This paper presents a methodology to include the imprecision explicitly, which leads to upper and lower bounds on the survival function of the system. In addition, the approach introduces novel and efficient component importance measures. By implementing a relative importance index for each component, with or without imprecision, the most critical component in the system can be identified depending on the service time of the system. A simulation method based on the survival signature is introduced to deal with imprecision within components; it is precise and efficient. A numerical example is presented to show the applicability of the approach. - Highlights: • The survival signature is a novel route to system reliability and component importance. • High computational efficiency based on a complete description of the system. • Imprecision is included explicitly, leading to bounds on the survival function. • A novel relative importance index is proposed as an importance measure. • Critical components can be identified depending on the service time of the system.
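
    To make the survival-signature computation concrete, the following Python sketch evaluates the system survival function of a small, assumed series-parallel system with two component types; the structure function, component counts and exponential lifetime models are hypothetical. Imprecision could be handled by evaluating the same expression with lower and upper bounds on the component survival functions.

```python
import numpy as np
from math import comb
from itertools import product

# Example (assumed) system: type-1 components {1,2} in parallel, in series with
# type-2 components {3,4} in parallel.
def structure(x1, x2, x3, x4):
    return int(max(x1, x2) and max(x3, x4))

m1, m2 = 2, 2
# survival signature Phi[l1, l2]: probability the system works given that exactly
# l1 type-1 and l2 type-2 components work (averaged over which ones work)
Phi = np.zeros((m1 + 1, m2 + 1))
for states in product([0, 1], repeat=4):
    l1, l2 = states[0] + states[1], states[2] + states[3]
    Phi[l1, l2] += structure(*states) / (comb(m1, l1) * comb(m2, l2))

def system_survival(t, R1, R2):
    """P(system survives beyond t) from the signature and per-type reliabilities."""
    p1, p2 = R1(t), R2(t)
    return sum(Phi[l1, l2]
               * comb(m1, l1) * p1**l1 * (1 - p1)**(m1 - l1)
               * comb(m2, l2) * p2**l2 * (1 - p2)**(m2 - l2)
               for l1 in range(m1 + 1) for l2 in range(m2 + 1))

R1 = lambda t: np.exp(-0.10 * t)     # assumed exponential component lifetimes
R2 = lambda t: np.exp(-0.05 * t)
print(system_survival(5.0, R1, R2))
```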

  4. New approaches in agent-based modeling of complex financial systems

    Science.gov (United States)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2017-12-01

    Agent-based modeling is a powerful simulation technique to understand the collective behavior and microscopic interaction in complex financial systems. Recently, the concept for determining the key parameters of agent-based models from empirical data instead of setting them artificially was suggested. We first review several agent-based models and the new approaches to determine the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origination of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.

  5. Reliability-based sensitivity of mechanical components with arbitrary distribution parameters

    International Nuclear Information System (INIS)

    Zhang, Yi Min; Yang, Zhou; Wen, Bang Chun; He, Xiang Dong; Liu, Qiaoling

    2010-01-01

    This paper presents a reliability-based sensitivity method for mechanical components with arbitrary distribution parameters. Techniques from the perturbation method, the Edgeworth series, the reliability-based design theory, and the sensitivity analysis approach were employed directly to calculate the reliability-based sensitivity of mechanical components on the condition that the first four moments of the original random variables are known. The reliability-based sensitivity information of the mechanical components can be accurately and quickly obtained using a practical computer program. The effects of the design parameters on the reliability of mechanical components were studied. The method presented in this paper provides the theoretic basis for the reliability-based design of mechanical components

  6. A Nonlinear Model for Gene-Based Gene-Environment Interaction

    Directory of Open Access Journals (Sweden)

    Jian Sa

    2016-06-01

    Full Text Available A vast amount of literature has confirmed the role of gene-environment (G×E) interaction in the etiology of complex human diseases. Traditional methods are predominantly focused on the analysis of interaction between a single nucleotide polymorphism (SNP) and an environmental variable. Given that genes are the functional units, it is crucial to understand how gene effects (rather than single SNP effects) are influenced by an environmental variable to affect disease risk. Motivated by the increasing awareness of the power of gene-based association analysis over single-variant-based approaches, in this work we proposed a sparse principal component regression (sPCR) model to understand the gene-based G×E interaction effect on complex disease. We first extracted the sparse principal components for SNPs in a gene, then the effect of each principal component was modeled by a varying-coefficient (VC) model. The model can jointly model variants in a gene whose effects are nonlinearly influenced by an environmental variable. In addition, the varying-coefficient sPCR (VC-sPCR) model has a nice interpretation property, since the sparsity of the principal component loadings indicates the relative importance of the corresponding SNPs in each component. We applied our method to a human birth weight dataset in a Thai population. We analyzed 12,005 genes across 22 chromosomes and found one significant interaction effect using the Bonferroni correction method and one suggestive interaction. The model performance was further evaluated through simulation studies. Our model provides a systematic approach to evaluating gene-based G×E interaction.

  7. Statistical intercomparison of global climate models: A common principal component approach with application to GCM data

    International Nuclear Information System (INIS)

    Sengupta, S.K.; Boyle, J.S.

    1993-05-01

    Variables describing atmospheric circulation and other climate parameters derived from various GCMs and obtained from observations can be represented on a spatio-temporal grid (lattice) structure. The primary objective of this paper is to explore existing as well as some new statistical methods to analyze such data structures for the purpose of model diagnostics and intercomparison from a statistical perspective. Among the several statistical methods considered here, a new method based on common principal components appears most promising for the purpose of intercomparison of spatio-temporal data structures arising in the task of model/model and model/data intercomparison. A complete strategy for such an intercomparison is outlined. The strategy includes two steps. First, the commonality of spatial structures in two (or more) fields is captured in the common principal vectors. Second, the corresponding principal components obtained as time series are then compared on the basis of similarities in their temporal evolution

  8. An EMD–SARIMA-Based Modeling Approach for Air Traffic Forecasting

    Directory of Open Access Journals (Sweden)

    Wei Nai

    2017-12-01

    Full Text Available The ever-increasing air traffic demand in China has brought huge pressure on the planning and management of, and investment in, air terminals as well as airline companies. In this context, accurate and adequate short-term air traffic forecasting is essential for the operations of those entities. In consideration of this problem, a hybrid air traffic forecasting model based on empirical mode decomposition (EMD) and the seasonal autoregressive integrated moving average (SARIMA) is proposed in this paper. The proposed model first decomposes the original time series into components, models each component with a SARIMA forecasting model, and then aggregates all the component forecasts to form the final combined result. Using the monthly air cargo and passenger flow data from 2006 to 2014 available at the official website of the Civil Aviation Administration of China (CAAC), the forecasting effectiveness of the proposed model is demonstrated, and a performance comparison against several other widely used forecasting models confirms its advantage.
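
    A minimal Python sketch of this hybrid scheme follows, using the PyEMD package (EMD-signal) for the decomposition and statsmodels SARIMAX for the per-component models; the synthetic monthly series, the SARIMA orders and the treatment of the final trend/residue row are assumptions, not the paper's settings.

```python
import numpy as np
from PyEMD import EMD                         # from the EMD-signal package (assumed available)
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hybrid EMD-SARIMA sketch: decompose, model each component, sum the forecasts.
rng = np.random.default_rng(4)
months = np.arange(108)                       # e.g. monthly data for 2006-2014
traffic = (100 + 0.8 * months
           + 10 * np.sin(2 * np.pi * months / 12)
           + rng.normal(scale=3, size=108))

imfs = EMD().emd(traffic)                     # intrinsic mode functions (last row ~ trend/residue)

horizon = 12
forecast = np.zeros(horizon)
for component in imfs:
    model = SARIMAX(component, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
    forecast += model.forecast(steps=horizon)

print(forecast)                               # combined 12-month-ahead forecast
```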

  9. Model-based design approaches for plug-in hybrid vehicle design

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, C.J. [CrossChasm Technologies, Cambridge, ON (Canada); Stevens, M.B.; Fowler, M.W. [Waterloo Univ., ON (Canada). Dept. of Chemical Engineering; Fraser, R.A. [Waterloo Univ., ON (Canada). Dept. of Mechanical Engineering; Wilhelm, E.J. [Paul Scherrer Inst., Villigen (Switzerland). Energy Systems Analysis

    2007-07-01

    A model-based design process for plug-in hybrid vehicles (PHEVs) was presented. The paper discussed steps between the initial design concept and a working vehicle prototype, and focused on an investigation of the software-in-the-loop (SIL), hardware-in-the-loop (HIL), and component-in-the-loop (CIL) design phases. The role and benefits of using simulation were also reviewed. A method for mapping and identifying components was provided along with a hybrid control strategy and component-level control optimization process. The role of simulation in component evaluation, architecture design, and de-bugging procedures was discussed, as well as the role simulation networks can play in speeding deployment times. The simulations focused on work performed on a 2005 Chevrolet Equinox converted to a fuel cell hybrid electric vehicle (FCHEV). Components were aggregated to create a complete virtual vehicle. A simplified vehicle model was implemented onto the on-board vehicle control hardware. Optimization metrics were estimated at 10 alpha values during each control loop iteration. The simulation was then used to tune the control system under a variety of drive cycles and conditions. A CIL technique was used to place a physical hybrid electric vehicle (HEV) component under the control of a real time HEV/PHEV simulation. It was concluded that controllers should have a standardized component description that supports integration into advanced testing procedures. 4 refs., 9 figs.

  10. Isocyanide based multi component reactions in combinatorial chemistry.

    NARCIS (Netherlands)

    Dömling, A.

    1998-01-01

    Although usually regarded as a recent development, the combinatorial approach to the synthesis of libraries of new drug candidates was first described as early as 1961 using the isocyanide-based one-pot multicomponent Ugi reaction. Isocyanide-based multi component reactions (MCR's) markedly differ

  11. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  12. A fuzzy-logic-based approach to qualitative safety modelling for marine systems

    International Nuclear Information System (INIS)

    Sii, H.S.; Ruxton, Tom; Wang Jin

    2001-01-01

    Safety assessment based on conventional tools (e.g. probabilistic risk assessment (PRA)) may not be well suited for dealing with systems having a high level of uncertainty, particularly in the feasibility and concept design stages of a maritime or offshore system. By contrast, a safety model using a fuzzy logic approach employing fuzzy IF-THEN rules can model the qualitative aspects of human knowledge and reasoning processes without employing precise quantitative analyses. A fuzzy-logic-based approach may therefore be more appropriately used to carry out risk analysis in the initial design stages. This provides a tool for working directly with the linguistic terms commonly used in carrying out safety assessment. This research focuses on the development and representation of linguistic variables to model risk levels subjectively. These variables are then quantified using fuzzy sets. In this paper, the development of a safety model using a fuzzy logic approach for modelling various design variables for maritime and offshore safety-based decision making in the concept design stage is presented. An example is used to illustrate the proposed approach.
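
    As a toy illustration of fuzzy IF-THEN risk rules (the linguistic terms, membership functions and rule base below are hypothetical, not those of the paper), the following Python sketch fuzzifies a likelihood and a severity rating, fires four rules and defuzzifies by a weighted average of the rule consequents.

```python
import numpy as np

# Minimal fuzzy IF-THEN risk assessment sketch with triangular memberships.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def assess_risk(likelihood, severity):        # both rated on a 0-10 scale
    L = {"low": tri(likelihood, 0, 0, 5), "high": tri(likelihood, 3, 10, 10)}
    S = {"minor": tri(severity, 0, 0, 5), "major": tri(severity, 3, 10, 10)}
    # rule base: IF likelihood is X AND severity is Y THEN risk is Z
    rules = [
        (min(L["low"], S["minor"]), 2.0),     # -> low risk (output set centre)
        (min(L["low"], S["major"]), 5.0),     # -> moderate risk
        (min(L["high"], S["minor"]), 5.0),    # -> moderate risk
        (min(L["high"], S["major"]), 9.0),    # -> high risk
    ]
    # defuzzify by the weighted average of the rule consequents
    weights = np.array([w for w, _ in rules])
    outputs = np.array([z for _, z in rules])
    return float((weights * outputs).sum() / (weights.sum() + 1e-12))

print(assess_risk(likelihood=7.0, severity=8.0))
```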

  13. Models for describing the thermal characteristics of building components

    DEFF Research Database (Denmark)

    Jimenez, M.J.; Madsen, Henrik

    2008-01-01

    This paper presents an overview of models that can be applied for modelling the thermal characteristics of buildings and building components using data from outdoor testing. The choice of approach depends ..., for example. For the analysis of these tests, dynamic analysis models and methods are required. However, a wide variety of models and methods exists, and the problem of choosing the most appropriate approach for each particular case is a non-trivial and interdisciplinary task. Knowledge of a large family of these approaches may therefore be very useful for selecting a suitable approach for each particular case. The characteristics of each type of model are highlighted. Some available software tools for each of the methods described will be mentioned. A case study also demonstrating the difference between linear and nonlinear models is considered.

  14. Around power law for PageRank components in Buckley-Osthus model of web graph

    OpenAIRE

    Gasnikov, Alexander; Zhukovskii, Maxim; Kim, Sergey; Noskov, Fedor; Plaunov, Stepan; Smirnov, Daniil

    2017-01-01

    In this paper we investigate the power law for PageRank components in the Buckley-Osthus model of the web graph. We compare different numerical methods for PageRank calculation and carry out extensive numerical experiments with the best of them. These experiments confirm the power-law hypothesis. At the end we discuss a realistic web-ranking model based on the classical PageRank approach.
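
    For reference, the quantity whose component distribution is examined is the standard PageRank vector; a minimal power-iteration sketch in Python over an arbitrary small directed graph (not the Buckley-Osthus random graph itself) is given below.

```python
import numpy as np

# Minimal PageRank by power iteration; the adjacency matrix here is arbitrary.
def pagerank(adj, alpha=0.85, tol=1e-10, max_iter=1000):
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # column-stochastic transition matrix; dangling nodes jump uniformly
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg[:, None], 1),
                 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = alpha * P @ r + (1 - alpha) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r

adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj))          # PageRank components, summing to 1
```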

  15. Roadmap for Lean implementation in Indian automotive component manufacturing industry: comparative study of UNIDO Model and ISM Model

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2015-06-01

    The demand for automobiles in India has increased drastically over the last two and a half decades. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to fulfill the demand of these customers. The United Nations Industrial Development Organization (UNIDO) has taken a proactive approach, in association with the Automotive Component Manufacturers Association of India (ACMA) and the Government of India, to assist Indian SMEs in various clusters since 1999 to make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as the ISM Model of Lean implementation and to validate the ISM Model by comparing it with the UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in the Indian automotive component industry. This paper is based on secondary data, which include research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the field of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages, whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through its high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework offers greater insight into the implementation process at a more micro level than the UNIDO-ACMA Model.

  16. Modelling of acid-base titration curves of mineral assemblages

    Directory of Open Access Journals (Sweden)

    Stamberg Karel

    2016-01-01

    Full Text Available The modelling of acid-base titration curves of mineral assemblages was studied with respect to the basic parameters of their surface sites to be obtained. The known modelling approaches, component additivity (CA) and generalized composite (GC), and three different types of assemblages (fucoidic sandstones, sedimentary rock-clay and bentonite-magnetite samples) were used. In contrast to the GC approach, whose application posed no difficulties, the problem with the CA approach lay in the credibility and accessibility of the parameters characterizing the individual mineralogical components.

  17. Formal Model-Driven Engineering: Generating Data and Behavioural Components

    Directory of Open Access Journals (Sweden)

    Chen-Wei Wang

    2012-12-01

    Full Text Available Model-driven engineering is the automatic production of software artefacts from abstract models of structure and functionality. By targeting a specific class of system, it is possible to automate aspects of the development process, using model transformations and code generators that encode domain knowledge and implementation strategies. Using this approach, questions of correctness for a complex, software system may be answered through analysis of abstract models of lower complexity, under the assumption that the transformations and generators employed are themselves correct. This paper shows how formal techniques can be used to establish the correctness of model transformations used in the generation of software components from precise object models. The source language is based upon existing, formal techniques; the target language is the widely-used SQL notation for database programming. Correctness is established by giving comparable, relational semantics to both languages, and checking that the transformations are semantics-preserving.

  18. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible

  19. Component-Based Software Engineering and Runtime Type Definition

    OpenAIRE

    A. R. Shakurov

    2011-01-01

    The component-based approach to software engineering, its current implementations and their limitations are discussed. A new extended architecture for such systems is presented. Its main architectural concepts and principles are considered.

  20. Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2016-08-01

    Full Text Available The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants. These authors constructed early warning models to anticipate their occurrence. It is in this same vein that our study takes its inspiration. In particular, we have developed a warning model of banking crises based on a Bayesian model averaging approach. The results of this approach have allowed us to identify the involvement of the decline in bank profitability, the deterioration of the competitiveness of traditional intermediation, banking concentration, and higher real interest rates in triggering banking crises.
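As a rough illustration of the record above, the sketch below averages simple logit early-warning models using BIC-approximated posterior weights. The predictor names, the synthetic data, and the use of scikit-learn are assumptions made for illustration, not details taken from the study.

```python
# Minimal sketch: BIC-weighted Bayesian model averaging over logit
# early-warning models. Data and predictor names are illustrative.
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))          # e.g. profitability, concentration, real rate
true_logit = -1.0 + 1.5 * X[:, 2] - 1.0 * X[:, 0]
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))
names = ["bank_profitability", "bank_concentration", "real_interest_rate"]

def fit_bic(cols):
    """Fit a logit on a subset of predictors and return (model, BIC)."""
    Xs = X[:, cols]
    m = LogisticRegression(C=1e6, max_iter=1000).fit(Xs, y)  # ~unpenalised MLE
    p = m.predict_proba(Xs)[:, 1]
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    k = len(cols) + 1                                  # coefficients + intercept
    return m, -2 * loglik + k * np.log(n)

models = []
for r in range(1, len(names) + 1):
    for cols in itertools.combinations(range(len(names)), r):
        m, bic = fit_bic(list(cols))
        models.append((cols, m, bic))

# Posterior model probabilities ~ exp(-BIC/2), normalised over the model set
bics = np.array([b for _, _, b in models])
w = np.exp(-(bics - bics.min()) / 2)
w /= w.sum()

# BMA inclusion probability of each predictor
for j, name in enumerate(names):
    incl = sum(wi for (cols, _, _), wi in zip(models, w) if j in cols)
    print(f"P(include {name}) = {incl:.2f}")
```

Predictors that genuinely drive the synthetic crises receive inclusion probabilities close to one, which is the kind of evidence the early-warning literature reports for profitability and real interest rates.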

  1. Activity Recognition Using A Combination of Category Components And Local Models for Video Surveillance

    OpenAIRE

    Lin, Weiyao; Sun, Ming-Ting; Poovendran, Radha; Zhang, Zhengyou

    2015-01-01

    This paper presents a novel approach for automatic recognition of human activities for video surveillance applications. We propose to represent an activity by a combination of category components, and demonstrate that this approach offers flexibility to add new activities to the system and an ability to deal with the problem of building models for activities lacking training data. For improving the recognition accuracy, a Confident-Frame-based Recognition algorithm is also proposed, where th...

  2. An anisotropic brane world cosmological model with the bulk-based approach

    International Nuclear Information System (INIS)

    Uluyazi, G.

    2010-01-01

    To investigate brane world models there are two approaches: brane-based or bulk-based. In the brane-based approach, the brane is chosen to be fixed in a coordinate system, whereas in the bulk-based approach it is no longer static, as it moves along the extra dimension. As a first attempt, the aim is to solve the five-dimensional field equations in the bulk and then to analyze the limitation of the Weyl curvature describing geometrical anisotropy.

  3. Mixed model approaches for diallel analysis based on a bio-model.

    Science.gov (United States)

    Zhu, J; Weir, B S

    1996-12-01

    A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses the parameter values as the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimating the sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for the estimation of variance and covariance components and for the prediction of genetic merits.

  4. An approach for activity-based DEVS model specification

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2016-01-01

    Creation of DEVS models has been advanced through Model Driven Architecture and its frameworks. The overarching role of the frameworks has been to help develop model specifications in a disciplined fashion. Frameworks can provide intermediary layers between the higher level mathematical models...... and their corresponding software specifications from both structural and behavioral aspects. Unlike structural modeling, developing models to specify behavior of systems is known to be harder and more complex, particularly when operations with non-trivial control schemes are required. In this paper, we propose specifying...... activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...

  5. Integrated Systems-Based Approach for Reaching Acceptable End Points for Groundwater - 13629

    International Nuclear Information System (INIS)

    Lee, M. Hope; Wellman, Dawn; Truex, Mike; Freshley, Mark D.; Sorenson, Kent S. Jr.; Wymore, Ryan

    2013-01-01

    The sheer mass and nature of contaminated materials at DOE and DoD sites makes it impractical to completely restore these sites to pre-disposal conditions. DOE faces long-term challenges, particularly with developing monitoring and end-state approaches for clean-up that are protective of the environment, technically based and documented, sustainable, and, most importantly, cost effective. Integrated systems-based monitoring approaches (e.g., tools for characterization and monitoring, multi-component strategies, geophysical modeling) could provide novel approaches and a framework to (a) define risk-informed endpoints and/or conditions that constitute completion of cleanup and (b) provide the understanding for implementation of advanced scientific approaches to meet cleanup goals. Multi-component strategies which combine site conceptual models; biological, chemical, and physical remediation strategies; and iterative review and optimization have proven successful at several DOE sites. Novel tools such as enzyme probes and quantitative PCR for DNA and RNA, and innovative modeling approaches for complex subsurface environments, have been successful at facilitating the reduced operation or shutdown of pump-and-treat facilities and the transition of clean-up activities into monitored natural attenuation remedies. Integrating novel tools with site conceptual models and other lines of evidence to characterize, optimize, and monitor long-term remedial approaches for complex contaminant plumes is critical for transitioning active remediation into cost effective, yet technically defensible, endpoint strategies. (authors)

  6. Dynamic analysis of large structures with uncertain parameters based on coupling component mode synthesis and perturbation method

    Directory of Open Access Journals (Sweden)

    D. Sarsri

    2016-03-01

    Full Text Available This paper presents a methodological approach to computing the stochastic eigenmodes of large FE models with parameter uncertainties, based on coupling the second-order perturbation method with component mode synthesis methods. Various component mode synthesis methods are used to optimally reduce the size of the model. The first two statistical moments of the dynamic response of the reduced system are obtained by the second-order perturbation method. Numerical results illustrating the accuracy and efficiency of the proposed coupled methodological procedures for large FE models with uncertain parameters are presented.
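The sketch below illustrates, on a toy spring-mass chain, the two ingredients mentioned in this record: a modal (CMS-style) reduction of the model and a perturbation estimate of how a random stiffness parameter shifts the first eigenvalue. It uses a first-order expansion on a single component for brevity, whereas the paper couples several substructures with a second-order expansion; all numbers are illustrative.

```python
# Minimal sketch: modal reduction of a spring-mass chain plus first-order
# perturbation of the first eigenvalue under a random stiffness parameter.
import numpy as np

n = 20
k_mean, k_std = 1.0e4, 1.0e3          # uncertain spring stiffness (assumed)
m = 1.0

def kmat(k):
    """Stiffness matrix of a fixed-fixed spring-mass chain."""
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += 2 * k
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -k
    return K

M = m * np.eye(n)
K0 = kmat(k_mean)

# Reduced basis: keep the lowest r mass-normalised eigenmodes
lam, phi = np.linalg.eigh(np.linalg.solve(M, K0))
r = 4
Phi = phi[:, np.argsort(lam)[:r]]
Phi /= np.sqrt(np.diag(Phi.T @ M @ Phi))       # mass-normalise the modes

lam1 = np.sort(lam)[0]
dK = K0 / k_mean                               # dK/dk for this linear parameterisation
dlam1 = Phi[:, 0] @ dK @ Phi[:, 0]             # first-order eigenvalue sensitivity
print("perturbation:", lam1, abs(dlam1) * k_std)

# Monte Carlo reference on the reduced model, fixed basis
rng = np.random.default_rng(1)
samples = []
for k in rng.normal(k_mean, k_std, 2000):
    Kr = Phi.T @ kmat(k) @ Phi
    Mr = Phi.T @ M @ Phi
    samples.append(np.min(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real))
print("Monte Carlo  :", np.mean(samples), np.std(samples))
```

Because the stiffness enters linearly here, the first-order moments agree with the Monte Carlo reference; the second-order terms used in the paper matter when the parameter dependence is nonlinear.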

  7. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    Energy Technology Data Exchange (ETDEWEB)

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high- temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.
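One building block of the Monte Carlo analysis described above can be sketched as a stress-strength calculation: sample a Weibull strength distribution (with an uncertain irradiation knock-down) against a computed peak stress and count failures. The parameter values below are assumptions for illustration, not NGNP material data.

```python
# Minimal sketch of the Monte Carlo ingredient: Weibull strength sampling
# against a deterministic peak stress to estimate a failure probability.
import numpy as np

rng = np.random.default_rng(42)

weibull_modulus = 10.0        # shape parameter (assumed)
char_strength = 25.0          # MPa, characteristic strength (assumed)
peak_stress = 18.0            # MPa, e.g. taken from a finite-element solution

n = 100_000
strength = char_strength * rng.weibull(weibull_modulus, size=n)

# Irradiation effects treated as an uncertain knock-down multiplier (assumed)
knockdown = rng.normal(loc=0.9, scale=0.05, size=n).clip(0.7, 1.0)
p_fail = np.mean(strength * knockdown < peak_stress)

print(f"estimated failure probability: {p_fail:.4f}")
```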

  8. An Approach to Enforcing Clark-Wilson Model in Role-based Access Control Model

    Institute of Scientific and Technical Information of China (English)

    LIANGBin; SHIWenchang; SUNYufang; SUNBo

    2004-01-01

    Using one security model to enforce another is a prospective solution to multi-policy support. In this paper, an approach to enforcing the Clark-Wilson data integrity model within the Role-based access control (RBAC) model is proposed. An enforcement construction with great feasibility is presented. In this construction, a direct way to enforce the Clark-Wilson model is provided, and the corresponding relations among users, transformation procedures, and constrained data items are strengthened; the concepts of task and subtask are introduced to enhance the support for least privilege. The proposed approach widens the applicability of RBAC. The theoretical foundation for adopting the Clark-Wilson model in an RBAC system at small cost is offered to meet the requirements of multi-policy support and policy flexibility.
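A minimal sketch of the enforcement idea follows: constrained data items may only be changed through certified transformation procedures, which users reach only through roles. The class names, the role/TP/CDI triples, and the banking example are hypothetical illustrations, not the paper's formal construction.

```python
# Minimal sketch: constrained data items (CDIs) change only through certified
# transformation procedures (TPs), and users reach TPs only through roles.
class CDI:
    def __init__(self, name, value):
        self.name, self.value = name, value

# Certified relations: role -> TP -> set of CDIs the TP may touch
CERTIFIED = {
    "teller":  {"post_deposit": {"account_balance"}},
    "auditor": {"run_audit":    {"account_balance", "audit_log"}},
}

def invoke(user_roles, tp, cdis, action):
    """Run `action` on `cdis` only if some role certifies (tp, cdis)."""
    allowed = any(
        tp in CERTIFIED.get(role, {}) and
        {c.name for c in cdis} <= CERTIFIED[role][tp]
        for role in user_roles
    )
    if not allowed:
        raise PermissionError(f"{tp} not certified for roles {user_roles}")
    action(*cdis)          # the integrity-preserving change happens only here

balance = CDI("account_balance", 100)
invoke({"teller"}, "post_deposit", [balance],
       lambda b: setattr(b, "value", b.value + 50))
print(balance.value)       # 150; a direct write would bypass certification
```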

  9. Physiology-based modelling approaches to characterize fish habitat suitability: Their usefulness and limitations

    Science.gov (United States)

    Teal, Lorna R.; Marras, Stefano; Peck, Myron A.; Domenici, Paolo

    2018-02-01

    Models are useful tools for predicting the impact of global change on species distribution and abundance. As ectotherms, fish are being challenged to adapt or track changes in their environment, either in time through a phenological shift or in space by a biogeographic shift. Past modelling efforts have largely been based on correlative Species Distribution Models, which use known occurrences of species across landscapes of interest to define sets of conditions under which species are likely to maintain populations. The practical advantages of this correlative approach are its simplicity and the flexibility in terms of data requirements. However, effective conservation management requires models that make projections beyond the range of available data. One way to deal with such an extrapolation is to use a mechanistic approach based on physiological processes underlying climate change effects on organisms. Here we illustrate two approaches for developing physiology-based models to characterize fish habitat suitability. (i) Aerobic Scope Models (ASM) are based on the relationship between environmental factors and aerobic scope (defined as the difference between maximum and standard (basal) metabolism). This approach is based on experimental data collected by using a number of treatments that allow a function to be derived to predict aerobic metabolic scope from the stressor/environmental factor(s). This function is then integrated with environmental (oceanographic) data of current and future scenarios. For any given species, this approach allows habitat suitability maps to be generated at various spatiotemporal scales. The strength of the ASM approach relies on the estimate of relative performance when comparing, for example, different locations or different species. (ii) Dynamic Energy Budget (DEB) models are based on first principles including the idea that metabolism is organised in the same way within all animals. The (standard) DEB model aims to describe
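The aerobic-scope idea in approach (i) can be sketched numerically: compute scope as the difference between a dome-shaped maximum metabolic rate and an exponentially rising standard metabolic rate, then rescale it to a habitat suitability index over a temperature field. The rate curves and parameter values below are illustrative assumptions, not measured data for any species.

```python
# Minimal sketch of an Aerobic Scope Model mapped to habitat suitability.
import numpy as np

def smr(T):                          # standard metabolic rate, rises with T
    return 0.2 * np.exp(0.08 * T)

def mmr(T, T_opt=18.0, width=8.0):   # maximum metabolic rate, dome-shaped
    return 3.0 * np.exp(-((T - T_opt) / width) ** 2)

def aerobic_scope(T):
    return np.clip(mmr(T) - smr(T), 0.0, None)

# Map suitability over a (toy) sea-surface-temperature field
sst = np.linspace(5, 30, 200)
scope = aerobic_scope(sst)
suitability = scope / scope.max()    # 0-1 habitat suitability index

best = sst[np.argmax(suitability)]
print(f"most suitable temperature in this sketch: {best:.1f} °C")
```

Applied to gridded temperature projections, the same rescaling step is what turns an experimentally derived scope curve into the habitat suitability maps described in the record.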

  10. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.

    Science.gov (United States)

    Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik

    2015-01-01

    Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digit (distractors). Graphemes were either congruently or incongruently colored with the synesthetes' reported grapheme-color association. A mathematical model, based on Bundesen's (1990) Theory of Visual Attention (TVA), was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.
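A much-simplified version of the TVA fitting step can be sketched as estimating a processing rate v and a perceptual threshold t0 from accuracy as a function of exposure duration for a single target, p(t) = 1 - exp(-v*(t - t0)). The full whole-report model used in the study also estimates VSTM capacity (K) and attentional weights; the data points below are made up.

```python
# Minimal sketch of TVA-style parameter estimation for a single target.
import numpy as np
from scipy.optimize import curve_fit

def p_report(t, v, t0):
    """Probability of encoding a target shown for t seconds."""
    return np.where(t > t0, 1.0 - np.exp(-v * (t - t0)), 0.0)

# Exposure durations (s) and observed proportion correct (illustrative)
t_obs = np.array([0.010, 0.020, 0.040, 0.080, 0.150])
p_obs = np.array([0.05, 0.30, 0.60, 0.85, 0.97])

(v_hat, t0_hat), _ = curve_fit(p_report, t_obs, p_obs,
                               p0=[30.0, 0.005], bounds=([0, 0], [200, 0.05]))
print(f"processing rate v ≈ {v_hat:.1f} items/s, threshold t0 ≈ {t0_hat*1000:.1f} ms")
```

In the study, a higher fitted v for congruently coloured letters is what supports the claim that synesthetic congruence speeds perceptual processing.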

  11. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach

    Science.gov (United States)

    Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik

    2015-01-01

    Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10–150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digit (distractors). Graphemes were either congruently or incongruently colored with the synesthetes’ reported grapheme-color association. A mathematical model, based on Bundesen’s (1990) Theory of Visual Attention (TVA), was fitted to each observer’s data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group’s model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes’ expertise regarding their specific grapheme-color associations. PMID:26252019

  12. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.

    Directory of Open Access Journals (Sweden)

    Árni Gunnar Ásgeirsson

    Full Text Available Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets as possible, while ignoring digit (distractors. Graphemes were either congruently or incongruently colored with the synesthetes' reported grapheme-color association. A mathematical model, based on Bundesen's (1990 Theory of Visual Attention (TVA, was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.

  13. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
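The workflow in this record can be sketched end-to-end with a stand-in model: fit a cheap polynomial surrogate to a handful of "expensive" runs over an uncertain drag coefficient, then run a Metropolis sampler against the surrogate to update that coefficient from a noisy observation. The stand-in response function, the observation, and the prior range are all assumptions for illustration.

```python
# Minimal sketch: polynomial surrogate of a model response + Metropolis MCMC.
import numpy as np

def expensive_model(c_drag):               # stand-in for an ocean model run
    return 2.0 * np.sqrt(c_drag) + 0.1 * c_drag

# 1) Design runs and a degree-3 polynomial surrogate
train_x = np.linspace(0.5, 2.0, 8)
train_y = np.array([expensive_model(x) for x in train_x])
coeffs = np.polyfit(train_x, train_y, deg=3)
surrogate = lambda x: np.polyval(coeffs, x)

# 2) Metropolis MCMC using the surrogate inside the likelihood
obs, sigma = 2.9, 0.05                      # observation and noise level (assumed)
def log_post(x):
    if not (0.5 <= x <= 2.0):               # flat prior on [0.5, 2.0]
        return -np.inf
    return -0.5 * ((surrogate(x) - obs) / sigma) ** 2

rng = np.random.default_rng(3)
x, chain = 1.0, []
for _ in range(20_000):
    prop = x + rng.normal(0, 0.05)
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop
    chain.append(x)

burned = np.array(chain[5_000:])
print(f"posterior mean drag ≈ {burned.mean():.3f} ± {burned.std():.3f}")
```

Replacing the Metropolis loop with an adjoint-driven optimiser gives the second update strategy mentioned in the abstract; the surrogate construction step stays the same.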

  14. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-01

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.

  15. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
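The cross-validation recommendation can be illustrated in a simplified, PCA-only analogue: score held-out data under probabilistic PCA for each candidate number of retained components and keep the dimensionality with the best held-out likelihood. The mixed ICA/PCA model itself separates Gaussian from non-Gaussian sources, which this sketch does not attempt; the data are synthetic.

```python
# Minimal sketch: cross-validated choice of the number of retained components
# using the probabilistic-PCA likelihood as the held-out score.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, d, true_k = 300, 10, 3
latent = rng.normal(size=(n, true_k))
mixing = rng.normal(size=(true_k, d))
X = latent @ mixing + 0.5 * rng.normal(size=(n, d))   # 3 sources plus noise

scores = {}
for k in range(1, 8):
    fold_scores = []
    for train, test in KFold(5, shuffle=True, random_state=0).split(X):
        pca = PCA(n_components=k).fit(X[train])
        fold_scores.append(pca.score(X[test]))        # mean held-out log-likelihood
    scores[k] = np.mean(fold_scores)

best = max(scores, key=scores.get)
print("cross-validated choice:", best, "components")
```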

  16. Statistical analysis tolerance using jacobian torsor model based on uncertainty propagation method

    Directory of Open Access Journals (Sweden)

    W Ghie

    2016-04-01

    Full Text Available One risk inherent in the use of assembly components is that the behaviour of these components is discovered only at the moment an assembly is being carried out. The objective of our work is to enable designers to use known component tolerances as parameters in models that can be used to predict properties at the assembly level. In this paper we present a statistical approach to assemblability evaluation, based on tolerance and clearance propagations. This new statistical analysis method for tolerance is based on the Jacobian-Torsor model and the uncertainty measurement approach. We show how this can be accomplished by modeling the distribution of manufactured dimensions through applying a probability density function. By presenting an example we show how statistical tolerance analysis should be used in the Jacobian-Torsor model. This work is supported by previous efforts aimed at developing a new generation of computational tools for tolerance analysis and synthesis, using the Jacobian-Torsor approach. This approach is illustrated on a simple three-part assembly, demonstrating the method’s capability in handling three-dimensional geometry.
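The uncertainty-propagation idea can be sketched with a plain Monte Carlo stack-up: sample each part dimension from its tolerance band and push the samples through a linear assembly response standing in for the Jacobian-Torsor mapping. The dimensions, tolerances, and the "tolerance equals three standard deviations" assumption below are illustrative.

```python
# Minimal sketch: statistical tolerance analysis by Monte Carlo propagation
# of part-dimension distributions through a linear assembly stack-up.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Three parts: nominal dimension (mm) and +/- tolerance, assumed normal with
# the tolerance treated as 3 standard deviations
parts = {"part_A": (20.0, 0.05), "part_B": (35.0, 0.08), "part_C": (15.0, 0.05)}
samples = {k: rng.normal(nom, tol / 3, n) for k, (nom, tol) in parts.items()}

housing = 70.30                      # nominal housing length (mm)
gap = housing - (samples["part_A"] + samples["part_B"] + samples["part_C"])

print(f"gap mean = {gap.mean():.3f} mm, std = {gap.std():.3f} mm")
print(f"P(gap < 0.1 mm) = {np.mean(gap < 0.1):.4f}")   # assemblability risk
```

In the Jacobian-Torsor formulation the linear stack-up is replaced by the torsor-based mapping between functional elements, but the propagation and the resulting assemblability statistics are obtained in the same way.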

  17. Agent-based modeling: a new approach for theory building in social psychology.

    Science.gov (United States)

    Smith, Eliot R; Conrey, Frederica R

    2007-02-01

    Most social and psychological phenomena occur not as the result of isolated decisions by individuals but rather as the result of repeated interactions between multiple individuals over time. Yet the theory-building and modeling techniques most commonly used in social psychology are less than ideal for understanding such dynamic and interactive processes. This article describes an alternative approach to theory building, agent-based modeling (ABM), which involves simulation of large numbers of autonomous agents that interact with each other and with a simulated environment and the observation of emergent patterns from their interactions. The authors believe that the ABM approach is better able than prevailing approaches in the field, variable-based modeling (VBM) techniques such as causal modeling, to capture types of complex, dynamic, interactive processes so important in the social world. The article elaborates several important contrasts between ABM and VBM and offers specific recommendations for learning more and applying the ABM approach.

  18. Polynomial fuzzy model-based approach for underactuated surface vessels

    DEFF Research Database (Denmark)

    Khooban, Mohammad Hassan; Vafamand, Navid; Dragicevic, Tomislav

    2018-01-01

    The main goal of this study is to introduce a new polynomial fuzzy model-based structure for a class of marine systems with non-linear and polynomial dynamics. The suggested technique relies on a polynomial Takagi–Sugeno (T–S) fuzzy modelling, a polynomial dynamic parallel distributed compensation...... surface vessel (USV). Additionally, in order to overcome the USV control challenges, including the USV un-modelled dynamics, complex nonlinear dynamics, external disturbances and parameter uncertainties, the polynomial fuzzy model representation is adopted. Moreover, the USV-based control structure...... and a sum-of-squares (SOS) decomposition. The new proposed approach is a generalisation of the standard T–S fuzzy models and linear matrix inequality which indicated its effectiveness in decreasing the tracking time and increasing the efficiency of the robust tracking control problem for an underactuated...

  19. Spatially-Distributed Stream Flow and Nutrient Dynamics Simulations Using the Component-Based AgroEcoSystem-Watershed (AgES-W) Model

    Science.gov (United States)

    Ascough, J. C.; David, O.; Heathman, G. C.; Smith, D. R.; Green, T. R.; Krause, P.; Kipka, H.; Fink, M.

    2010-12-01

    The Object Modeling System 3 (OMS3), currently being developed by the USDA-ARS Agricultural Systems Research Unit and Colorado State University (Fort Collins, CO), provides a component-based environmental modeling framework which allows the implementation of single- or multi-process modules that can be developed and applied as custom-tailored model configurations. OMS3 as a “lightweight” modeling framework contains four primary foundations: modeling resources (e.g., components) annotated with modeling metadata; domain specific knowledge bases and ontologies; tools for calibration, sensitivity analysis, and model optimization; and methods for model integration and performance scalability. The core is able to manage modeling resources and development tools for model and simulation creation, execution, evaluation, and documentation. OMS3 is based on the Java platform but is highly interoperable with C, C++, and FORTRAN on all major operating systems and architectures. The ARS Conservation Effects Assessment Project (CEAP) Watershed Assessment Study (WAS) Project Plan provides detailed descriptions of ongoing research studies at 14 benchmark watersheds in the United States. In order to satisfy the requirements of CEAP WAS Objective 5 (“develop and verify regional watershed models that quantify environmental outcomes of conservation practices in major agricultural regions”), a new watershed model development approach was initiated to take advantage of OMS3 modeling framework capabilities. Specific objectives of this study were to: 1) disaggregate and refactor various agroecosystem models (e.g., J2K-S, SWAT, WEPP) and implement hydrological, N dynamics, and crop growth science components under OMS3, 2) assemble a new modular watershed scale model for fully-distributed transfer of water and N loading between land units and stream channels, and 3) evaluate the accuracy and applicability of the modular watershed model for estimating stream flow and N dynamics. The

  20. Robustness of Component Models in Energy System Simulators

    DEFF Research Database (Denmark)

    Elmegaard, Brian

    2003-01-01

    During the development of the component-based energy system simulator DNA (Dynamic Network Analysis), several obstacles to easy use of the program have been observed. Some of these have to do with the nature of the program being based on a modelling language, not a graphical user interface (GUI......). Others have to do with the interaction between models of the nature of the substances in an energy system (e.g., fuels, air, flue gas), models of the components in a system (e.g., heat exchangers, turbines, pumps), and the solver for the system of equations. This paper proposes that the interaction...

  1. NASA JPL Distributed Systems Technology (DST) Object-Oriented Component Approach for Software Inter-Operability and Reuse

    Science.gov (United States)

    Hall, Laverne; Hung, Chaw-Kwei; Lin, Imin

    2000-01-01

    The purpose of this paper is to provide a description of NASA JPL Distributed Systems Technology (DST) Section's object-oriented component approach to open inter-operable systems software development and software reuse. It will address what is meant by the terminology object component software, give an overview of the component-based development approach and how it relates to infrastructure support of software architectures and promotes reuse, enumerate on the benefits of this approach, and give examples of application prototypes demonstrating its usage and advantages. Utilization of the object-oriented component technology approach for system development and software reuse will apply to several areas within JPL, and possibly across other NASA Centers.

  2. Neural networks and principal component analysis approaches to predict pile capacity in sand

    Directory of Open Access Journals (Sweden)

    Benali A

    2018-01-01

    Full Text Available Determination of pile bearing capacity from in-situ tests has developed considerably due to the significant development of their technology. The project presented in this paper is a combination of two approaches, artificial neural networks and principal component analysis, that allows the development of a neural network model providing a more accurate prediction of axial load bearing capacity based on SPT test data. A backpropagation multi-layer perceptron with Bayesian regularization (RB) was used in this model. It was established by incorporating about 260 data points, obtained from the published literature, from experimental programs for large-displacement driven piles. The PCA method is proposed for compression of the data and suppression of the correlation between them. This improves the generalization performance of the model.
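A compact sketch of this combination is a pipeline that decorrelates and compresses the inputs with PCA and then regresses pile capacity with a small MLP. The synthetic features, the toy target relation, and the use of scikit-learn's MLP (rather than the Bayesian-regularization backpropagation network used in the paper) are assumptions for illustration.

```python
# Minimal sketch: PCA feature compression followed by an MLP regressor
# for pile capacity prediction on synthetic SPT-style data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 260
spt_blow_count = rng.uniform(5, 60, n)
pile_length = rng.uniform(6, 30, n)          # m
pile_diameter = rng.uniform(0.3, 1.2, n)     # m
X = np.column_stack([spt_blow_count, pile_length, pile_diameter,
                     spt_blow_count * pile_length])       # correlated feature
y = 40 * spt_blow_count + 25 * pile_length * pile_diameter + rng.normal(0, 50, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=3),                # decorrelate inputs
                      MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs",
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print(f"R^2 on held-out piles: {model.score(X_te, y_te):.2f}")
```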

  3. Nuclear fuel cycle system simulation tool based on high-fidelity component modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ames, David E.,

    2014-02-01

    The DOE is currently directing extensive research into developing fuel cycle technologies that will enable the safe, secure, economic, and sustainable expansion of nuclear energy. The task is formidable considering the numerous fuel cycle options, the large dynamic systems that each represent, and the necessity to accurately predict their behavior. The path to successfully develop and implement an advanced fuel cycle is highly dependent on the modeling capabilities and simulation tools available for performing useful relevant analysis to assist stakeholders in decision making. Therefore a high-fidelity fuel cycle simulation tool that performs system analysis, including uncertainty quantification and optimization was developed. The resulting simulator also includes the capability to calculate environmental impact measures for individual components and the system. An integrated system method and analysis approach that provides consistent and comprehensive evaluations of advanced fuel cycles was developed. A general approach was utilized allowing for the system to be modified in order to provide analysis for other systems with similar attributes. By utilizing this approach, the framework for simulating many different fuel cycle options is provided. Two example fuel cycle configurations were developed to take advantage of used fuel recycling and transmutation capabilities in waste management scenarios leading to minimized waste inventories.

  4. A review of single-sample-based models and other approaches for radiocarbon dating of dissolved inorganic carbon in groundwater

    Science.gov (United States)

    Han, L. F; Plummer, Niel

    2016-01-01

    13C values. In contrast to the single-sample-based models, the extended Gonfiantini & Zuppi model (Gonfiantini and Zuppi, 2003; Han et al., 2014) is a statistical approach. This approach can be used to estimate 14C ages when a curved relationship between the 14C and 13C values of the DIC data is observed. In addition to the estimation of groundwater ages, the relationship between 14C and δ13C data can be used to interpret hydrogeological characteristics of the aquifer, e.g. estimating apparent rates of geochemical reactions, revealing the complexity of the geochemical environment, and identifying samples that are not affected by the same set of reactions/processes as the rest of the dataset. The investigated water samples may have a wide range of ages, and for waters with very low values of 14C, the model based on statistics may give more reliable age estimates than those obtained from single-sample-based models. In the extended Gonfiantini & Zuppi model, a representative system-wide value of the initial 14C content is derived from the 14C and δ13C data of DIC and can differ from that used in single-sample-based models. Therefore, the extended Gonfiantini & Zuppi model usually avoids the effect of modern water components which might retain ‘bomb’ pulse signatures. The geochemical mass-balance approach constructs an adjustment model that accounts for all the geochemical reactions known to occur along an aquifer flow path (Plummer et al., 1983; Wigley et al., 1978; Plummer et al., 1994; Plummer and Glynn, 2013), and includes, in addition to DIC, dissolved organic carbon (DOC) and methane (CH4). If sufficient chemical, mineralogical and isotopic data are available, the geochemical mass-balance method can yield the most accurate estimates of the adjusted radiocarbon age. The main limitation of this approach is that complete information on chemical, mineralogical and isotopic data is necessary, and these data are often limited. Failure to recognize the limitations and

  5. A model-based approach to on-line process disturbance management

    International Nuclear Information System (INIS)

    Kim, I.S.

    1988-01-01

    The methodology developed can be applied to the design of a real-time expert system to aid control-room operators in coping with process abnormalities. The approach encompasses diverse functional aspects required for an effective on-line process disturbance management: (1) intelligent process monitoring and alarming, (2) on-line sensor data validation, (3) on-line sensor and hardware (except sensors) fault diagnosis, and (4) real-time corrective measure synthesis. Accomplishment of these functions is made possible through the application of various models, goal-tree success-tree, process monitor-tree, sensor failure diagnosis, and hardware failure diagnosis models. The models used in the methodology facilitate not only the knowledge-acquisition process - a bottleneck in the development of an expert system - but also the reasoning process of the knowledge-based system. These transparent models and model-based reasoning significantly enhance the maintainability of the real-time expert systems. The proposed approach was applied to the feedwater control system of a nuclear power plant, and implemented into a real-time expert system, MOAS II, using the expert system shell, PICON, on the LMI machine

  6. A mixture model-based approach to the clustering of microarray expression data.

    Science.gov (United States)

    McLachlan, G J; Bean, R W; Peel, D

    2002-03-01

    This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes are able to be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/
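The two-step idea (gene selection by a mixture likelihood-ratio, then clustering tissues on the reduced gene set) can be sketched with ordinary Gaussian mixtures standing in for the t mixtures and mixtures of factor analyzers used by EMMIX-GENE; the thresholds and the synthetic expression matrix below are illustrative.

```python
# Minimal sketch: rank genes by a one- vs two-component mixture likelihood
# ratio, keep the most "clusterable" genes, then cluster the tissues on them.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_tissues, n_genes = 60, 500
X = rng.normal(size=(n_tissues, n_genes))
X[:30, :20] += 2.5                        # 20 informative genes, 2 tissue groups

def lr_statistic(values):
    """-2 log likelihood ratio for one vs two mixture components on one gene."""
    v = values.reshape(-1, 1)
    ll1 = GaussianMixture(1, random_state=0).fit(v).score(v) * len(v)
    ll2 = GaussianMixture(2, random_state=0).fit(v).score(v) * len(v)
    return 2 * (ll2 - ll1)

stats = np.array([lr_statistic(X[:, g]) for g in range(n_genes)])
selected = np.argsort(stats)[-30:]        # keep the top-ranked genes (threshold)

labels = GaussianMixture(2, covariance_type="diag",
                         random_state=0).fit_predict(X[:, selected])
print("tissues per cluster:", np.bincount(labels))
```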

  7. A rule-based approach to model checking of UML state machines

    Science.gov (United States)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.

  8. Evaluating fugacity models for trace components in landfill gas

    Energy Technology Data Exchange (ETDEWEB)

    Shafi, Sophie [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Sweetman, Andrew [Department of Environmental Science, Lancaster University, Lancaster LA1 4YQ (United Kingdom); Hough, Rupert L. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Smith, Richard [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Rosevear, Alan [Science Group - Waste and Remediation, Environment Agency, Reading RG1 8DQ (United Kingdom); Pollard, Simon J.T. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom)]. E-mail: s.pollard@cranfield.ac.uk

    2006-12-15

    A fugacity approach was evaluated to reconcile loadings of vinyl chloride (chloroethene), benzene, 1,3-butadiene and trichloroethylene in waste with concentrations observed in landfill gas monitoring studies. An evaluative environment derived from fictitious but realistic properties such as volume, composition, and temperature, constructed with data from the Brogborough landfill (UK) test cells, was used to test a fugacity approach to generating the source term for use in landfill gas risk assessment models (e.g. GasSim). SOILVE, a dynamic Level II model adapted here for landfills, showed greatest utility for benzene and 1,3-butadiene, modelled under anaerobic conditions over a 10 year simulation. Modelled concentrations of these components (95 300 μg m⁻³; 43 μg m⁻³) fell within measured ranges observed in gas from landfills (24 300-180 000 μg m⁻³; 20-70 μg m⁻³). This study highlights the need (i) for representative and time-referenced biotransformation data; (ii) to evaluate the partitioning characteristics of organic matter within waste systems and (iii) for a better understanding of the role that gas extraction rate (flux) plays in producing trace component concentrations in landfill gas. - Fugacity for trace component in landfill gas.

  9. Consequence Based Design. An approach for integrating computational collaborative models (Integrated Dynamic Models) in the building design phase

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer

    relies on various advancements in the area of integrated dynamic models. It also relies on the application and test of the approach in practice to evaluate the Consequence based design and the use of integrated dynamic models. As a result, the Consequence based design approach has been applied in five...... and define new ways to implement integrated dynamic models for the following project. In parallel, seven different developments of new methods, tools and algorithms have been performed to support the application of the approach. The developments concern: Decision diagrams – to clarify goals and the ability...... affect the design process and collaboration between building designers and simulationists. Within the limits of applying the approach of Consequence based design to five case studies, followed by documentation based on interviews, surveys and project related documentations derived from internal reports...

  10. Fetal ECG extraction using independent component analysis by Jade approach

    Science.gov (United States)

    Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian

    2017-11-01

    Fetal ECG monitoring is a useful method to assess fetal health and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdominal and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimension and computational costs. Signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method shows fast and good performance.
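The separation step can be sketched on synthetic mixtures: high-pass filter the "abdominal" channels, unmix them with ICA, and identify the fetal source by its dominant rate. scikit-learn's FastICA stands in here for the JADE algorithm, and the signals, mixing matrix, and filter cutoff are assumptions for illustration.

```python
# Minimal sketch: high-pass filtering plus ICA unmixing of synthetic
# maternal/fetal ECG-like signals, with the fetal source found by its rate.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs = 500
t = np.arange(0, 10, 1 / fs)
maternal = np.sin(2 * np.pi * 1.2 * t) ** 15          # ~72 bpm, spiky waveform
fetal = 0.3 * np.sin(2 * np.pi * 2.3 * t) ** 15       # ~138 bpm, weaker
noise = 0.05 * np.random.default_rng(0).normal(size=(3, t.size))

mixing = np.array([[1.0, 0.2], [0.8, 0.5], [0.3, 0.9]])
recordings = mixing @ np.vstack([maternal, fetal]) + noise   # 3 "electrodes"

# High-pass filter to remove baseline wander before unmixing
b, a = butter(3, 0.5 / (fs / 2), btype="highpass")
filtered = filtfilt(b, a, recordings, axis=1)

sources = FastICA(n_components=2, random_state=0).fit_transform(filtered.T).T
for i, s in enumerate(sources):
    spec = np.abs(np.fft.rfft(s - s.mean()))
    freqs = np.fft.rfftfreq(s.size, 1 / fs)
    f0 = freqs[np.argmax(spec)]
    print(f"source {i}: dominant rate ≈ {f0 * 60:.0f} beats/min")
```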

  11. Numeric, Agent-based or System dynamics model? Which modeling approach is the best for vast population simulation?

    Science.gov (United States)

    Cimler, Richard; Tomaskova, Hana; Kuhnova, Jitka; Dolezal, Ondrej; Pscheidl, Pavel; Kuca, Kamil

    2018-02-01

    Alzheimer's disease is one of the most common mental illnesses. It is posited that more than 25% of the population is affected by some mental disease during their lifetime. Treatment of each patient draws resources from the economy concerned. Therefore, it is important to quantify the potential economic impact. Agent-based, system dynamics and numerical approaches to dynamic modeling of the population of the European Union and its patients with Alzheimer's disease are presented in this article. Simulations, their characteristics, and the results from different modeling tools are compared. The results of these approaches are compared with EU population growth predictions from the statistical office of the EU, Eurostat. The methodology used to create the models is described and all three modeling approaches are compared. The suitability of each modeling approach for population modeling is discussed. In this case study, all three approaches gave us results corresponding with the EU population prediction. Moreover, we were able to predict the number of patients with AD and, based on the modeling method, we were also able to monitor different characteristics of the population.

  12. A physical based equivalent circuit modeling approach for ballasted InP DHBT multi-finger devices at millimeter-wave frequencies

    DEFF Research Database (Denmark)

    Midili, Virginio; Squartecchia, Michele; Johansen, Tom Keinicke

    2016-01-01

    equivalent circuit description. In the first approach, the EM simulations of contact pads and ballasting network are combined with the small-signal model of the intrinsic device. In the second approach, the ballasting network is modeled with lumped components derived from physical analysis of the layout...

  13. From scores to face templates: a model-based approach.

    Science.gov (United States)

    Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar

    2007-12-01

    Regeneration of templates from match scores has security and privacy implications related to any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three, fundamentally different, face recognition algorithms: Principal Component Analysis (PCA) with Mahalanobis cosine distance measure, Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set with the gallery set, we select face templates from two different databases: Face Recognition Grand Challenge (FRGC) and Facial Recognition Technology (FERET) Database (FERET). With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With similar operational set up, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA based face recognition systems, respectively. With

  14. Multirule Based Diagnostic Approach for the Fog Predictions Using WRF Modelling Tool

    Directory of Open Access Journals (Sweden)

    Swagata Payra

    2014-01-01

    Full Text Available The prediction of fog onset remains difficult despite the progress in numerical weather prediction. It is a complex process and requires adequate representation of the local perturbations in weather prediction models. It mainly depends upon microphysical and mesoscale processes that act within the boundary layer. This study utilizes a multirule based diagnostic (MRD) approach using postprocessing of the model simulations for fog predictions. The empiricism involved in this approach is mainly to bridge the gap between mesoscale and microscale variables, which are related to the mechanism of fog formation. Fog occurrence is a common phenomenon during the winter season over Delhi, India, with the passage of western disturbances across the northwestern part of the country accompanied by significant amounts of moisture. This study implements the above-cited approach for the prediction of the occurrence of fog and its onset time over Delhi. For this purpose, a high resolution weather research and forecasting (WRF) model is used for fog simulations. The study involves model validation and postprocessing of the model simulations for the MRD approach and its subsequent application to fog predictions. Through this approach the model identified foggy and nonfoggy days successfully 94% of the time. Further, the onset of fog events is well captured within an accuracy of 30–90 minutes. This study demonstrates that the multirule based postprocessing approach is a useful and highly promising tool for improving fog predictions.
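The rule-based postprocessing idea can be sketched as a handful of threshold checks on model-diagnosed near-surface fields, combined by majority vote into an hourly fog flag. The specific variables, thresholds, and the example night of output below are illustrative assumptions, not the calibrated rules of the study.

```python
# Minimal sketch: multirule diagnostic post-processing of model output
# to flag fog onset hour by hour. Thresholds are illustrative.
def fog_likely(rh2m, wind10m, t2m, td2m, cloud_base_m):
    """Combine simple rules on WRF-style diagnostics into a fog flag."""
    rules = [
        rh2m >= 95.0,                 # near-saturated surface layer (%)
        wind10m <= 3.0,               # weak mixing (m/s)
        (t2m - td2m) <= 1.0,          # small dew-point depression (K)
        cloud_base_m <= 50.0,         # saturated layer touching the ground
    ]
    return sum(rules) >= 3            # require most rules to agree

# Hourly model output for one night (illustrative values)
hours = [
    ("22:00", 88, 2.5, 285.2, 283.0, 400),
    ("00:00", 94, 2.0, 284.1, 283.4, 120),
    ("02:00", 97, 1.2, 283.3, 283.0, 30),
    ("04:00", 98, 0.8, 282.9, 282.8, 10),
]
for hh, *fields in hours:
    print(hh, "fog" if fog_likely(*fields) else "no fog")
```

Scanning the flags in time gives the onset hour, which is the quantity the study verifies to within 30–90 minutes.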

  15. Service oriented architecture assessment based on software components

    Directory of Open Access Journals (Sweden)

    Mahnaz Amirpour

    2016-01-01

    Full Text Available Enterprise architecture, with its detailed descriptions of the functions of information technology in the organization, tries to reduce the complexity of technology applications, resulting in tools with greater efficiency in achieving the objectives of the organization. Enterprise architecture consists of a set of models describing this technology in terms of the performance of its different components as well as various aspects of the applications in any organization. Therefore, information technology development and maintenance management can perform well within organizations. This study aims to suggest a method for identifying different types of services in the service-oriented architecture analysis step, a method that applies some previous approaches in an integrated form and, based on the principles of software engineering, provides a simpler and more transparent approach through the expression of analysis details. Advantages and disadvantages of proposals should be evaluated before implementation and cost allocation. Evaluation methods can better identify the strengths and weaknesses of the current situation, apart from selecting the appropriate model out of several suggestions, and clarify this technology development solution for organizations in the future. By converting the output of the model to colored Petri nets, we will be able to simulate data and process flows within the organization and to evaluate and test the architecture, in terms of reliability and response time, by examining various inputs before it is implemented. An application of the proposed model has been studied, and the results can be used to describe and design an architecture for data.

  16. Component-based development process and component lifecycle

    NARCIS (Netherlands)

    Crnkovic, I.; Chaudron, M.R.V.; Larsson, S.

    2006-01-01

    The process of component- and component-based system development differs in many significant ways from the "classical" development process of software systems. The main difference is in the separation of the development process of components from the development process of systems. This fact has a

  17. Real Time Engineering Analysis Based on a Generative Component Implementation

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Klitgaard, Jens

    2007-01-01

    The present paper outlines the idea of a conceptual design tool with real time engineering analysis which can be used in the early conceptual design phase. The tool is based on a parametric approach using Generative Components with embedded structural analysis. Each of these components uses the g...

  18. Non-frontal Model Based Approach to Forensic Face Recognition

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance

  19. A 3D model retrieval approach based on Bayesian networks lightfield descriptor

    Science.gov (United States)

    Xiao, Qinhan; Li, Yanjun

    2009-12-01

    A new 3D model retrieval methodology is proposed by exploiting a novel Bayesian networks lightfield descriptor (BNLD). There are two key novelties in our approach: (1) a BN-based method for building the lightfield descriptor; and (2) a 3D model retrieval scheme based on the proposed BNLD. To overcome the disadvantages of existing 3D model retrieval methods, we explore Bayesian networks for building a new lightfield descriptor. Firstly, the 3D model is placed in a lightfield, and about 300 binary views can be obtained along a sphere; Fourier descriptors and Zernike moments descriptors can then be computed from the binary views. The shape feature sequence is then learned into a BN model using a BN learning algorithm. Secondly, we propose a new 3D model retrieval method by calculating the Kullback-Leibler Divergence (KLD) between BNLDs. Benefiting from statistical learning, our BNLD is robust to noise compared to existing methods. A comparison between our method and the lightfield descriptor-based approach is conducted to demonstrate the effectiveness of our proposed methodology.

  20. SLS Model Based Design: A Navigation Perspective

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin

    2018-01-01

    The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.

  1. Approach and development strategy for an agent-based model of economic confidence.

    Energy Technology Data Exchange (ETDEWEB)

    Sprigg, James A.; Pryor, Richard J.; Jorgensen, Craig Reed

    2004-08-01

    We are extending the existing features of Aspen, a powerful economic modeling tool, and introducing new features to simulate the role of confidence in economic activity. The new model is built from a collection of autonomous agents that represent households, firms, and other relevant entities like financial exchanges and governmental authorities. We simultaneously model several interrelated markets, including those for labor, products, stocks, and bonds. We also model economic tradeoffs, such as decisions of households and firms regarding spending, savings, and investment. In this paper, we review some of the basic principles and model components and describe our approach and development strategy for emulating consumer, investor, and business confidence. The model of confidence is explored within the context of economic disruptions, such as those resulting from disasters or terrorist events.

  2. Parts and Components Reliability Assessment: A Cost Effective Approach

    Science.gov (United States)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and therefore ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data are not yet available or manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.
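A minimal sketch of a handbook-style prediction follows: sum constant part failure rates in series to obtain a unit-level failure rate, MTBF, and mission reliability under the exponential model. The part list and the failure-rate values are illustrative assumptions, not figures from any particular standard.

```python
# Minimal sketch: series-system reliability prediction from constant part
# failure rates (exponential model). Values are illustrative.
import math

parts_fpmh = {           # failure rate in failures per 1e6 hours (assumed)
    "microcontroller": 0.35,
    "dc_dc_converter": 1.20,
    "connector":       0.05,
    "relay":           2.10,
}

lambda_total = sum(parts_fpmh.values()) / 1e6       # failures per hour, series
mtbf_hours = 1.0 / lambda_total
mission_hours = 5 * 365 * 24                        # e.g. a 5-year mission

reliability = math.exp(-lambda_total * mission_hours)
print(f"MTBF ≈ {mtbf_hours:,.0f} h, R(5 years) ≈ {reliability:.3f}")
```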

  3. Effective modelling of percolation at the landscape scale using data-based approaches

    Science.gov (United States)

    Selle, Benny; Lischeid, Gunnar; Huwe, Bernd

    2008-06-01

    Process-based models have been extensively applied to assess the impact of land-use change on water quantity and quality at landscape scales. However, the routine application of those models suffers from large computational efforts, lack of transparency and the requirement of many input parameters. Data-based models such as Feed-Forward Multilayer Perceptrons (MLP) and Classification and Regression Trees (CART) may be used as effective models, i.e. simple approximations of complex process-based models. These data-based approaches can subsequently be applied for scenario analysis and as a transparent management tool, provided climatic boundary conditions and the basic model assumptions of the process-based models do not change dramatically. In this study, we apply MLP, CART and Multiple Linear Regression (LR) to model the spatially distributed and spatially aggregated percolation in soils using weather, groundwater and soil data. The percolation data are obtained via numerical experiments with Hydrus1D. Thus, the complex process-based model is approximated using simpler data-based approaches. The MLP model explains most of the percolation variance in time and space without using any soil information. This reflects the effective dimensionality of the process-based model and suggests that percolation in the study area may be modelled much more simply than with Hydrus1D. The CART model shows that soil properties play a negligible role for percolation under wet climatic conditions. However, they become more important if the conditions turn drier. The LR method does not yield satisfactory predictions for the spatially distributed percolation; however, the spatially aggregated percolation is well approximated. This may indicate that the soils behave more simply (i.e. more linearly) when percolation dynamics are upscaled.
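
    A minimal sketch of the surrogate ("effective model") idea, assuming scikit-learn is available: an MLP and a regression tree are fitted to synthetic data standing in for the Hydrus1D experiments; the input variables and response surface are invented for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the Hydrus1D numerical experiments: inputs are
# [precipitation, potential ET, groundwater depth], target is percolation.
X = rng.uniform([0.0, 0.0, 0.5], [50.0, 8.0, 5.0], size=(2000, 3))
y = 0.6 * X[:, 0] - 2.0 * X[:, 1] - 1.5 * X[:, 2] ** 2 + rng.normal(0.0, 1.0, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

surrogates = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000,
                                      random_state=0)),
    "CART": DecisionTreeRegressor(max_depth=6, random_state=0),
}
for name, model in surrogates.items():
    model.fit(X_tr, y_tr)
    print(name, "R^2 =", round(r2_score(y_te, model.predict(X_te)), 3))
```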

  4. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though …; … might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  5. Maximum Power Point Tracking Control of Photovoltaic Systems: A Polynomial Fuzzy Model-Based Approach

    DEFF Research Database (Denmark)

    Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan

    2018-01-01

    This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of solar photovoltaic (PV) electricity generation. The proposed method relies on a polynomial fuzzy modeling, a polynomial parallel …, a direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike the conventional approaches, the proposed approach does not require exploring the maximum power operational point …

  6. Modelling of robotic work cells using agent based-approach

    Science.gov (United States)

    Sękala, A.; Banaś, W.; Gwiazda, A.; Monica, Z.; Kost, G.; Hryniewicz, P.

    2016-08-01

    In the case of modern manufacturing systems, the requirements, both regarding the scope and regarding the characteristics of technical procedures, are changing dynamically. This results in the inability of the production system organization to keep up with changes in market demand. Accordingly, there is a need for new design methods characterized, on the one hand, by high efficiency and, on the other, by an adequate level of the generated organizational solutions. One of the tools that could be used for this purpose is the concept of agent systems. These systems are tools of artificial intelligence. They allow assigning to agents the proper domains of procedures and knowledge so that, in a self-organizing agent environment, they represent the components of a real system. The agent-based system for modelling a robotic work cell should be designed taking into consideration the many limitations associated with the characteristics of this production unit. It is possible to distinguish several groups of structural components that constitute such a system. This confirms the structural complexity of a work cell as a specific production system. It is therefore necessary to develop agents depicting various aspects of the work cell structure. The main groups of agents used to model a robotic work cell should at least include the following representatives: machine tool agents, auxiliary equipment agents, robot agents, transport equipment agents, organizational agents, as well as data and knowledge base agents. In this way it is possible to create the holarchy of the agent-based system.

  7. Component Reification in Systems Modelling

    DEFF Research Database (Denmark)

    Bendisposto, Jens; Hallerstede, Stefan

    When modelling concurrent or distributed systems in Event-B, we often obtain models where the structure of the connected components is specified by constants. Their behaviour is specified by the non-deterministic choice of event parameters for events that operate on shared variables. From a certain …? These components may still refer to shared variables. Events of these components should not refer to the constants specifying the structure. The non-deterministic choice between these components should not be via parameters. We say the components are reified. We need to address how the reified components get reflected into the original model. This reflection should indicate the constraints on how to connect the components.

  8. A single grain approach applied to modelling recrystallization kinetics in a single-phase metal

    NARCIS (Netherlands)

    Chen, S.P.; Zwaag, van der S.

    2004-01-01

    A comprehensive model for the recrystallization kinetics is proposed which incorporates both microstructure and the textural components in the deformed state. The model is based on the single-grain approach proposed previously. The influence of the as-deformed grain orientation, which affects the

  9. A molecular systems approach to modelling human skin pigmentation: identifying underlying pathways and critical components.

    Science.gov (United States)

    Raghunath, Arathi; Sambarey, Awanti; Sharma, Neha; Mahadevan, Usha; Chandra, Nagasuma

    2015-04-29

    Ultraviolet radiation (UV) serves as an environmental stress for human skin and results in melanogenesis, with the pigment melanin having protective effects against UV-induced damage. This involves a dynamic and complex regulation of various biological processes that results in the expression of melanin in the outermost layers of the epidermis, where it can exert its protective effect. A comprehensive understanding of the underlying cross-talk among different signalling molecules and cell types is only possible through a systems perspective. Increasing incidences of both melanoma and non-melanoma skin cancers underscore the need to better comprehend UV-mediated effects on skin pigmentation at a systems level, so as to ultimately evolve knowledge-based strategies for efficient protection and prevention of skin diseases. A network model for UV-mediated skin pigmentation in the epidermis was constructed and subjected to shortest path analysis. Virtual knock-outs were carried out to identify essential signalling components. We describe a network model for UV-mediated skin pigmentation in the epidermis. The model consists of 265 components (nodes) and 429 directed interactions among them, capturing the manner in which one component influences the other and channels information. Through shortest path analysis, we identify novel signalling pathways relevant to pigmentation. Virtual knock-outs or perturbations of specific nodes in the network have led to the identification of alternate modes of signalling as well as enabled determining essential nodes in the process. The model presented provides a comprehensive picture of UV-mediated signalling manifesting in human skin pigmentation. A systems perspective helps provide a holistic purview of interconnections and complexity in the processes leading to pigmentation. The model described here is extensive yet amenable to expansion as new data are gathered. Through this study, we provide a list of important proteins essential
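
    As a rough illustration of the shortest-path and virtual knock-out analyses, the sketch below applies them to a toy directed signalling graph with networkx; the node names and edges are illustrative stand-ins and do not reproduce the published 265-node network.

```python
import networkx as nx

# Toy stand-in for the signalling network: edges point from an upstream regulator to
# its target (node names are illustrative only).
g = nx.DiGraph([
    ("UV", "p53"), ("p53", "POMC"), ("POMC", "alpha-MSH"),
    ("alpha-MSH", "MC1R"), ("MC1R", "cAMP"), ("cAMP", "MITF"),
    ("MITF", "TYR"), ("TYR", "melanin"), ("UV", "ET-1"), ("ET-1", "MITF"),
])

# Shortest signalling path from the stimulus to the pigmentation output.
print(nx.shortest_path(g, "UV", "melanin"))

# Virtual knock-out: remove a node and test whether the signal can still reach melanin.
for node in ["MITF", "ET-1"]:
    knocked = g.copy()
    knocked.remove_node(node)
    print(f"knock-out {node}: UV still reaches melanin -> {nx.has_path(knocked, 'UV', 'melanin')}")
```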

  10. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Meier, Horst; Laurischkat, Roman; Zhu Junhong

    2011-01-01

    One main influence on the dimensional accuracy in robot-based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate these deviations offline, a model-based approach has been developed, consisting of a finite element approach to simulate the sheet forming and a multi-body system modeling the compliant robot structure. This paper describes the implementation and experimental verification of the multi-body system model and its included compensation method.

  11. A two component model describing nucleon structure functions in the low-x region

    Energy Technology Data Exchange (ETDEWEB)

    Bugaev, E.V. [Institute for Nuclear Research of the Russian Academy of Sciences, 7a, 60th October Anniversary prospect, Moscow 117312 (Russian Federation); Mangazeev, B.V. [Irkutsk State University, 1, Karl Marx Street, Irkutsk 664003 (Russian Federation)

    2009-12-15

    A two-component model describing the electromagnetic nucleon structure functions in the low-x region, based on generalized vector dominance and color dipole approaches, is briefly described. The model operates with the mesons of the rho family, having a mass spectrum of the form m_n^2 = m_rho^2 (1 + 2n), and takes into account the nondiagonal transitions in meson-nucleon scattering. Special cut-off factors are introduced in the model to exclude the gamma-qq-bar-V transitions in the case of narrow qq-bar pairs. For the color dipole part of the model the well-known FKS parameterization is used.

  12. A physics based method for combining multiple anatomy models with application to medical simulation.

    Science.gov (United States)

    Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David

    2009-01-01

    We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.

  13. Alternative approaches to risk-based technical specifications

    International Nuclear Information System (INIS)

    Atefi, B.; Gallagher, D.W.; Liner, R.T.; Lofgren, E.V.

    1987-01-01

    Four alternative risk-based approaches to Technical Specifications are identified. These are: a Probabilistic Risk Assessment (PRA) oriented approach; a reliability goal-oriented approach; an approach based on configuration control; and a data-oriented approach. Based on preliminary results, the PRA-oriented approach, which has been developed further than the other approaches, seems to offer a logical, quantitative basis for setting Allowed Outage Times (AOTs) and Surveillance Test Intervals (STIs) for some plant components and systems. The most attractive feature of this approach is that it directly links the AOTs and STIs with the risk associated with the operation of the plant. This would focus the plant operator's and the regulatory agency's attention on the most risk-significant components of the plant. A series of practical issues related to the level of detail and content of the plant PRAs, requirements for the review of these PRAs, and monitoring of the plant's performance by the regulatory agency must be resolved before the approach could be implemented. Future efforts will examine the other three approaches and their practicality before firm conclusions are drawn regarding the viability of any of these approaches.

  14. Radiology information system: a workflow-based approach

    International Nuclear Information System (INIS)

    Zhang, Jinyan; Lu, Xudong; Nie, Hongchao; Huang, Zhengxing; Aalst, W.M.P. van der

    2009-01-01

    Introducing workflow management technology in healthcare seems promising for dealing with the problem that current healthcare information systems cannot provide sufficient support for process management, although several challenges still exist. The purpose of this paper is to study the method of developing a workflow-based information system in a radiology department as a use case. First, a workflow model of a typical radiology process was established. Second, based on the model, the system could be designed and implemented as a group of loosely coupled components. Each component corresponded to one task in the process and could be assembled by the workflow management system. The legacy systems could be taken as special components, which also corresponded to the tasks and were integrated by transferring non-workflow-aware interfaces to the standard ones. Finally, a workflow dashboard was designed and implemented to provide an integral view of radiology processes. The workflow-based Radiology Information System was deployed in the radiology department of Zhejiang Chinese Medicine Hospital in China. The results showed that it could be adjusted flexibly in response to the needs of a changing process and enhance the process management in the department. It can also provide a more workflow-aware integration method, compared with other methods such as IHE-based ones. The workflow-based approach is a new method of developing a radiology information system with more flexibility, more process management functionality and more workflow-aware integration. The work of this paper is an initial endeavor to introduce workflow management technology in healthcare. (orig.)

  15. CM5: A pre-Swarm magnetic field model based upon the comprehensive modeling approach

    DEFF Research Database (Denmark)

    Sabaka, T.; Olsen, Nils; Tyler, Robert

    2014-01-01

    We have developed a model based upon the very successful Comprehensive Modeling (CM) approach using recent CHAMP, Ørsted, SAC-C and observatory hourly-means data from September 2000 to the end of 2013. This CM, called CM5, was derived from the algorithm that will provide a consistent line of Leve...

  16. Multiple flood vulnerability assessment approach based on fuzzy comprehensive evaluation method and coordinated development degree model.

    Science.gov (United States)

    Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao

    2018-05-01

    Flooding is a serious challenge that increasingly affects residents as well as policymakers. Flood vulnerability assessment is becoming increasingly relevant around the world. The purpose of this study is to develop an approach to reveal the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of the index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed for the establishment of the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship of the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial distribution of flood vulnerability in Hainan's eastern area, China. Based on the results of the multiple flood vulnerability assessment, a decision-making process for the rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely and help decision makers make better-informed decisions in a more comprehensive way. In summary, this study provides a new way for flood vulnerability assessment and disaster prevention decision-making.

  17. Extracting business vocabularies from business process models: SBVR and BPMN standards-based approach

    Science.gov (United States)

    Skersys, Tomas; Butleris, Rimantas; Kapocius, Kestutis

    2013-10-01

    Approaches for the analysis and specification of business vocabularies and rules are highly relevant topics in both the Business Process Management and Information Systems Development disciplines. However, in common Information Systems Development practice, business modeling activities are still mostly empirical in nature. In this paper, the basic aspects of an approach for the semi-automated extraction of business vocabularies from business process models are presented. The approach is based on the novel business modeling-level OMG standards "Business Process Model and Notation" (BPMN) and "Semantics of Business Vocabulary and Business Rules" (SBVR), thus contributing to OMG's vision of Model-Driven Architecture (MDA) and to model-driven development in general.

  18. Hybrid modelling framework by using mathematics-based and information-based methods

    International Nuclear Information System (INIS)

    Ghaboussi, J; Kim, J; Elnashai, A

    2010-01-01

    Mathematics-based computational mechanics involves idealization in going from the observed behaviour of a system to mathematical equations representing the underlying mechanics of that behaviour. Idealization may lead to mathematical models that exclude certain aspects of the complex behaviour that may be significant. An alternative approach is data-centric modelling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. However, purely data-centric methods often fail for infrequent events and large state changes. In this article, a new hybrid modelling framework is proposed to improve accuracy in the simulation of real-world systems. In the hybrid framework, a mathematical model is complemented by information-based components. The role of the informational components is to model aspects which the mathematical model leaves out. The missing aspects are extracted and identified through Autoprogressive Algorithms. The proposed hybrid modelling framework has a wide range of potential applications for natural and engineered systems. The potential of the hybrid methodology is illustrated through modelling the highly pinched hysteretic behaviour of beam-to-column connections in steel frames.

  19. Designing evidence-based medicine training to optimize the transfer of skills from the classroom to clinical practice: applying the four component instructional design model.

    Science.gov (United States)

    Maggio, Lauren A; Cate, Olle Ten; Irby, David M; O'Brien, Bridget C

    2015-11-01

    Evidence-based medicine (EBM) skills, although taught in medical schools around the world, are not optimally practiced in clinical environments because of multiple barriers, including learners' difficulty transferring EBM skills learned in the classroom to clinical practice. This lack of skill transfer may be partially due to the design of EBM training. To facilitate the transfer of EBM skills from the classroom to clinical practice, the authors explore one instructional approach, called the Four Component Instructional Design (4C/ID) model, to guide the design of EBM training. On the basis of current cognitive psychology, including cognitive load theory, the premise of the 4C/ID model is that complex skills training, such as EBM training, should include four components: learning tasks, supportive information, procedural information, and part-task practice. The combination of these four components can inform the creation of complex skills training that is designed to avoid overloading learners' cognitive abilities; to facilitate the integration of the knowledge, skills, and attitudes needed to execute a complex task; and to increase the transfer of knowledge to new situations. The authors begin by introducing the 4C/ID model and describing the benefits of its four components to guide the design of EBM training. They include illustrative examples of educational practices that are consistent with each component and that can be applied to teaching EBM. They conclude by suggesting that medical educators consider adopting the 4C/ID model to design, modify, and/or implement EBM training in classroom and clinical settings.

  20. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, in which the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to the brute force-type methods, which require full evaluation of all sensitivities for all responses calculated by each component in the overall model and prove computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods, which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
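
    The core idea, accumulating sensitivities of the final responses by propagating adjoint (transposed-Jacobian) information backwards through the component chain with one adjoint evaluation per final response, can be shown with a minimal NumPy sketch; the two linear black-box components below are assumptions made only for this demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two black-box components in series: y = f1(x), r = f2(y). For this sketch each is
# linear, so its adjoint action is simply multiplication by the transposed Jacobian.
A = rng.normal(size=(40, 10))   # component 1: 10 inputs -> 40 intermediate outputs
B = rng.normal(size=(3, 40))    # component 2: 40 inputs -> 3 final responses

def f1_adjoint(w):              # gradient of w.T @ y with respect to x
    return A.T @ w

def f2_adjoint(v):              # gradient of v.T @ r with respect to y
    return B.T @ v

# One adjoint evaluation per final response (3), not per intermediate output (40).
rows = []
for k in range(3):
    v = np.zeros(3)
    v[k] = 1.0                               # select response k
    rows.append(f1_adjoint(f2_adjoint(v)))   # back-propagate through f2, then f1
dR_dx = np.vstack(rows)                      # 3 x 10 sensitivity matrix

print("matches brute-force chain rule:", np.allclose(dR_dx, B @ A))
```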

  1. Polarized BRDF for coatings based on three-component assumption

    Science.gov (United States)

    Liu, Hong; Zhu, Jingping; Wang, Kai; Xu, Rong

    2017-02-01

    A pBRDF (polarized bidirectional reflectance distribution function) model for coatings is given based on a three-component reflection assumption in order to improve the polarized scattering simulation capability for space objects. In this model, the specular reflection is given based on microfacet theory, while the multiple reflection and volume scattering are given separately according to experimental results. The polarization of the specular reflection is derived from Fresnel's law, and both the multiple reflection and the volume scattering are assumed to be depolarized. Simulation and measurement results for two satellite coating samples, SR107 and S781, are given to validate that the pBRDF modeling accuracy can be significantly improved by the three-component model given in this paper.

  2. A Raster Based Approach To Solar Pressure Modeling

    Science.gov (United States)

    Wright, Theodore

    2014-01-01

    The impact of photons upon a spacecraft introduces small forces and moments. The magnitude and direction of the forces depend on the material properties of the spacecraft components being illuminated. Which components are lit depends on the orientation of the craft with respect to the Sun as well as the gimbal angles of any significant moving external parts (typically solar arrays). Some components may shield others from the Sun. To determine solar pressure in the presence of overlapping components, a 3D model can be used to determine which components are illuminated. A view (image) of the model as seen from the Sun shows the only contributors to solar pressure. This image can be decomposed into pixels, each of which can be treated as a non-overlapping flat plate as far as solar pressure calculations are concerned. The sums of the pressures and moments on these plates approximate the solar pressure and moments on the entire vehicle. The image rasterization technique can also be used to compute other spacecraft attributes that depend on attitude and geometry, including solar array power generation capability and free molecular flow drag.
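
    A minimal sketch of the per-pixel bookkeeping described above: a tiny array stands in for the rendered Sun-facing view, each non-background pixel is treated as a small flat plate at normal incidence, and its contribution is summed. The material map, reflectivities and pixel area are illustrative assumptions.

```python
import numpy as np

# A tiny rendered view of the spacecraft as seen from the Sun: each pixel holds a
# material ID (0 = background/not illuminated).
view = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 2],
    [0, 1, 1, 2],
])
pixel_area = 0.01                 # projected area represented by one pixel, m^2
solar_flux = 1361.0               # W/m^2 at 1 AU
c = 299_792_458.0                 # speed of light, m/s
reflectivity = {1: 0.3, 2: 0.8}   # illustrative specular reflectivity per material

# Each lit pixel is treated as a small flat plate at normal incidence; absorbed
# photons transfer solar_flux/c per unit area, specularly reflected ones up to twice that.
force = 0.0
for material_id, rho in reflectivity.items():
    n_pixels = int(np.count_nonzero(view == material_id))
    force += (solar_flux / c) * (1.0 + rho) * pixel_area * n_pixels

print(f"Total force along the Sun line: {force:.3e} N")
```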

  3. Risk-based technical specifications: Development and application of an approach to the generation of a plant specific real-time risk model

    International Nuclear Information System (INIS)

    Puglia, B.; Gallagher, D.; Amico, P.; Atefi, B.

    1992-10-01

    This report describes a process developed to convert an existing PRA into a model amenable to real-time, risk-based technical specification calculations. In earlier studies (culminating in NUREG/CR-5742), several risk-based approaches to technical specifications were evaluated. A real-time approach using a plant-specific PRA capable of modeling plant configurations as they change was identified as the most comprehensive approach to controlling plant risk. A master fault tree logic model representative of all of the core damage sequences was developed. Portions of the system fault trees were modularized, and supercomponents comprised of component failures with similar effects were developed to reduce the size of the model and its quantification times. Modifications to the master fault tree logic were made to properly model the effect of maintenance and recovery actions. Fault trees representing several actuation systems not modeled in detail in the existing PRA were added to the master fault tree logic. This process was applied to the Surry NUREG-1150 Level 1 PRA. The master logic model was confirmed. The model was then used to evaluate the frequency associated with several plant configurations using the IRRAS code. For all cases analyzed, computational time was less than three minutes. This document, Volume 2, contains Appendices A, B, and C. These provide, respectively: the Surry Technical Specifications Model Database, the Surry Technical Specifications Model, and a list of supercomponents used in the Surry Technical Specifications Model.

  4. Computationally efficient and flexible modular modelling approach for river and urban drainage systems based on surrogate conceptual models

    Science.gov (United States)

    Wolfs, Vincent; Willems, Patrick

    2015-04-01

    Water managers rely increasingly on mathematical simulation models that represent individual parts of the water system, such as the river, the sewer system or the waste water treatment plant. The current evolution towards integral water management requires the integration of these distinct components, leading to an increased model scale and scope. Besides this growing model complexity, certain applications have gained interest and importance, such as uncertainty and sensitivity analyses, auto-calibration of models and real-time control. All these applications share the need for models with a very limited calculation time, either for performing a large number of simulations, or for a long-term simulation followed by statistical post-processing of the results. The use of the commonly applied detailed models that solve (part of) the de Saint-Venant equations is infeasible for these applications or for such integrated modelling for several reasons, the main ones being excessive simulation times and the inability to couple submodels built in different software environments. Instead, practitioners must use simplified models for these purposes. These models are characterized by empirical relationships and sacrifice model detail and accuracy for increased computational efficiency. The presented research discusses the development of a flexible integral modelling platform that complies with the following three key requirements: (1) include a modelling approach for water quantity predictions for rivers, floodplains, sewer systems and rainfall-runoff routing that requires a minimal calculation time; (2) provide fast and semi-automatic model configuration, thereby making maximum use of data from existing detailed models and measurements; (3) use a calculation scheme based on open source code to allow for future extensions or coupling with other models. First, a novel and flexible modular modelling approach based on the storage cell concept was developed. This approach divides each
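
    The storage-cell concept behind such fast surrogate models can be sketched as a conceptual reservoir whose outflow depends on its storage; the linear-reservoir formulation, parameter values and inflow series below are illustrative assumptions rather than the calibrated models of the study.

```python
# Minimal sketch of a storage cell as a linear reservoir: dS/dt = inflow - k*S,
# integrated with a unit time step. Parameter values and the inflow series are
# purely illustrative.

def simulate_storage_cell(inflows, k=0.2, storage0=0.0):
    storage, outflows = storage0, []
    for inflow in inflows:
        outflow = k * storage          # outflow is a simple function of storage
        storage += inflow - outflow    # water balance over one time step
        outflows.append(outflow)
    return outflows

if __name__ == "__main__":
    rainfall_runoff = [0, 5, 12, 3, 0, 0, 8, 0, 0, 0]   # inflow per time step (mm)
    print([round(q, 2) for q in simulate_storage_cell(rainfall_runoff)])
```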

  5. A probabilistic approach to the drag-based model

    Science.gov (United States)

    Napoletano, Gianluca; Forte, Roberta; Moro, Dario Del; Pietropaolo, Ermanno; Giovannelli, Luca; Berrilli, Francesco

    2018-02-01

    The forecast of the time of arrival (ToA) of a coronal mass ejection (CME) at Earth is of critical importance for our high-technology society and for any future manned exploration of the Solar System. As critical as the forecast accuracy is the knowledge of its precision, i.e. the error associated with the estimate. We propose a statistical approach for the computation of the ToA using the drag-based model by introducing probability distributions, rather than exact values, as input parameters, thus allowing the evaluation of the uncertainty on the forecast. We test this approach using a set of CMEs whose transit times are known, and obtain extremely promising results: the average value of the absolute differences between measured and forecast arrival times is 9.1 h, and half of these residuals are within the estimated errors. These results suggest that this approach deserves further investigation. We are working to realize a real-time implementation which ingests the outputs of automated CME tracking algorithms as inputs to create a database of events useful for further validation of the approach.
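
    A minimal Monte Carlo sketch of the probabilistic idea: input parameters of the drag-based model are drawn from assumed probability distributions and each sample is propagated to 1 AU by numerically integrating dv/dt = -gamma*|v - w|*(v - w); the distributions, initial distance and step size are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

AU_KM = 1.496e8          # 1 astronomical unit in km
R_SUN_KM = 6.96e5        # solar radius in km
rng = np.random.default_rng(42)

def time_of_arrival(v0, gamma, w, r0=20 * R_SUN_KM, dt=600.0):
    """Integrate dv/dt = -gamma*|v - w|*(v - w) from r0 until 1 AU; returns hours."""
    r, v, t = r0, v0, 0.0
    while r < AU_KM:
        a = -gamma * abs(v - w) * (v - w)
        v += a * dt
        r += v * dt
        t += dt
    return t / 3600.0

# Input uncertainties expressed as probability distributions (illustrative values).
n = 2000
v0 = rng.normal(800.0, 50.0, n)                 # initial CME speed, km/s
w = rng.normal(400.0, 30.0, n)                  # ambient solar-wind speed, km/s
gamma = rng.lognormal(np.log(1e-7), 0.3, n)     # drag parameter, 1/km

toa = np.array([time_of_arrival(v, g, wind) for v, g, wind in zip(v0, gamma, w)])
print(f"Forecast ToA: {toa.mean():.1f} h +/- {toa.std():.1f} h after the start of the run")
```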

  6. Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

    OpenAIRE

    Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.

    2016-01-01

    The "interpretation through synthesis" approach to analyze face images, particularly Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly depended on the training sets and inherently on the genera...

  7. A kernel principal component analysis–based degradation model and remaining useful life estimation for the turbofan engine

    Directory of Open Access Journals (Sweden)

    Delong Feng

    2016-05-01

    Remaining useful life estimation is a complicated and difficult research question in the prognostics and health management of equipment. In this article, we consider the problem of prognostics modeling and estimation for the turbofan engine under complicated circumstances and propose a kernel principal component analysis-based degradation model and remaining useful life estimation method for such aircraft engines. We first analyze the output data created by a turbofan engine thermodynamic simulation using the kernel principal component analysis method and then distinguish the qualitative and quantitative relationships between the key factors. Next, we build a degradation model for the engine fault based on the following assumptions: the engine experiences only constant failure (i.e. no sudden failure is included), and the engine degradation follows a Wiener process, with a covariate standing for the engine system drift. To predict the remaining useful life of the turbofan engine, we built a health index based on the degradation model and used the method of maximum likelihood and the data from the thermodynamic simulation model to estimate the parameters of this degradation model. Through the data analysis, we obtained a trend model of the regression curve that fits the actual statistical data. Based on the predicted health index model and the data trend model, we estimate the remaining useful life of the aircraft engine as the time when the index reaches zero. Finally, a case study involving engine simulation data demonstrates the precision and performance advantages of the proposed prediction method: its precision can reach 98.9% and its average precision is 95.8%.
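
    A minimal sketch of using kernel PCA to compress correlated sensor channels into a single health index and extrapolating it to a failure threshold, assuming scikit-learn is available; the synthetic sensor data, the polynomial trend extrapolation and the threshold are illustrative assumptions and do not reproduce the Wiener-process model of the paper.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Synthetic stand-in for multi-sensor turbofan data: 300 cycles, 8 sensor channels
# whose readings drift as degradation accumulates.
cycles = np.arange(300)
degradation = (cycles / 300.0) ** 2
X = degradation[:, None] * rng.normal(1.0, 0.1, (300, 8)) + rng.normal(0.0, 0.05, (300, 8))

# Kernel PCA compresses the correlated channels into one non-linear health index;
# orientation and scale of the component are arbitrary, so normalise to [0, 1].
X_std = StandardScaler().fit_transform(X)
hi = KernelPCA(n_components=1, kernel="rbf", gamma=0.1).fit_transform(X_std).ravel()
hi = (hi - hi.min()) / (hi.max() - hi.min())
if hi[:30].mean() > hi[-30:].mean():        # flip so the index grows with damage
    hi = 1.0 - hi

# Crude RUL estimate: extrapolate a polynomial trend of the index to a threshold.
coeffs = np.polyfit(cycles, hi, deg=2)
threshold, future = 0.95, np.arange(300, 900)
crossing = future[np.polyval(coeffs, future) >= threshold]
print("estimated RUL (cycles):",
      int(crossing[0]) - cycles[-1] if crossing.size else "beyond horizon")
```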

  8. Verifying Embedded Systems using Component-based Runtime Observers

    DEFF Research Database (Denmark)

    Guan, Wei; Marian, Nicolae; Angelov, Christo K.

    … against formally specified properties. This paper presents a component-based design method for runtime observers, which are configured from instances of prefabricated reusable components: a Predicate Evaluator (PE) and a Temporal Evaluator (TE). The PE computes atomic propositions for the TE; the latter is a reconfigurable component processing a data structure representing the state transition diagram of a non-deterministic state machine, i.e. a Buchi automaton derived from a system property specified in Linear Temporal Logic (LTL). Observer components have been implemented using design models and design patterns …

  9. Precipitation in Powder Metallurgy, Nickel Base Superalloys: Review of Modeling Approach and Formulation of Engineering (Postprint)

    Science.gov (United States)

    2016-12-01

    AFRL-RX-WP-JA-2016-0333. … and kinetic parameters required for the modeling of γ′ precipitation in powder-metallurgy (PM), nickel-base superalloys are summarized. These

  10. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    Science.gov (United States)

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of higher education. Many universities have made effective achievements in evaluating teaching quality. In this paper, we establish a Students' Evaluation of Teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  11. An information theory-based approach to modeling the information processing of NPP operators

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task. The focus is on i) developing a model for the information processing of NPP operators and ii) quantifying the model. To resolve the problems of previous approaches based on information theory, i.e. the problems of single-channel approaches, we first develop an information processing model with multiple stages, which contains information flows. Then the uncertainty of the information is quantified using Conant's model, a kind of information theory

  12. Simulation-based model checking approach to cell fate specification during Caenorhabditis elegans vulval development by hybrid functional Petri net with extension

    Directory of Open Access Journals (Sweden)

    Ueno Kazuko

    2009-04-01

    Background: Model checking approaches were applied to biological pathway validations around 2003. Recently, Fisher et al. have proved the importance of the model checking approach by inferring new regulation of signaling crosstalk in C. elegans and confirming the regulation with biological experiments. They took a discrete and state-based approach to explore all possible states of the system underlying vulval precursor cell (VPC) fate specification for desired properties. However, since both discrete and continuous features appear to be an indispensable part of biological processes, it is more appropriate to use quantitative models to capture the dynamics of biological systems. Our key motivation in this paper is to establish a quantitative methodology to model and analyze in silico models incorporating the use of the model checking approach. Results: A novel method of modeling and simulating biological systems with the use of a model checking approach is proposed based on hybrid functional Petri net with extension (HFPNe) as the framework dealing with both discrete and continuous events. Firstly, we construct a quantitative VPC fate model with 1761 components by using HFPNe. Secondly, we employ two major biological fate determination rules, Rule I and Rule II, in the VPC fate model. We then conduct 10,000 simulations for each of 48 sets of different genotypes, investigate variations of cell fate patterns under each genotype, and validate the two rules by comparing three simulation targets consisting of fate patterns obtained from in silico and in vivo experiments. In particular, an evaluation was successfully done by using our VPC fate model to investigate one target derived from biological experiments involving hybrid lineage observations. However, the understanding of hybrid lineages is hard to obtain with a discrete model, because a hybrid lineage occurs when the system comes close to certain thresholds, as discussed by Sternberg and Horvitz in

  13. Lattice Boltzmann based multicomponent reactive transport model coupled with geochemical solver for scale simulations

    NARCIS (Netherlands)

    Patel, R.A.; Perko, J.; Jaques, D.; De Schutter, G.; Ye, G.; Van Breugel, K.

    2013-01-01

    A Lattice Boltzmann (LB) based reactive transport model intended to capture reactions and solid phase changes occurring at the pore scale is presented. The proposed approach uses the LB method to compute multi-component mass transport. The LB multi-component transport model is then coupled with the

  14. A Novel GMM-Based Behavioral Modeling Approach for Smartwatch-Based Driver Authentication.

    Science.gov (United States)

    Yang, Ching-Han; Chang, Chin-Chun; Liang, Deron

    2018-03-28

    All drivers have their own distinct driving habits, and usually hold and operate the steering wheel differently in different driving scenarios. In this study, we propose a novel Gaussian mixture model (GMM)-based method that improves on the traditional GMM for modeling driving behavior. This new method can be applied to build a better driver authentication system based on the accelerometer and orientation sensor of a smartwatch. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzes driving behavior using the built-in sensors of a smartwatch. The experimental results for driver authentication (an equal error rate (EER) of 4.62% in the simulated environment and an EER of 7.86% in the real-traffic environment) confirm the feasibility of this approach.
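
    A minimal enrolment-and-verification sketch with a Gaussian mixture model, assuming scikit-learn is available; the synthetic "smartwatch" features, the number of mixture components and the acceptance threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Illustrative stand-in for per-manoeuvre smartwatch features (accelerometer and
# orientation statistics); one GMM is enrolled per legitimate driver.
genuine = rng.normal(loc=[0.2, -0.1, 0.5], scale=0.2, size=(600, 3))
impostor = rng.normal(loc=[0.6, 0.3, -0.2], scale=0.2, size=(300, 3))

enrol, genuine_test = genuine[:400], genuine[400:]
driver_model = GaussianMixture(n_components=4, covariance_type="full",
                               random_state=0).fit(enrol)

# Verification: accept a window of samples if its average log-likelihood under the
# enrolled model exceeds a threshold chosen from the enrolment data.
threshold = driver_model.score(enrol) - 2.0   # illustrative margin
print("genuine window accepted:", bool(driver_model.score(genuine_test) > threshold))
print("impostor window accepted:", bool(driver_model.score(impostor) > threshold))
```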

  15. A multi-model ensemble approach to seabed mapping

    Science.gov (United States)

    Diesing, Markus; Stephens, David

    2015-06-01

    Seabed habitat mapping based on swath acoustic data and ground-truth samples is an emergent and active marine science discipline. Significant progress could be achieved by transferring techniques and approaches that have been successfully developed and employed in fields such as terrestrial land cover mapping. One such promising approach is the multiple classifier system, which aims at improving classification performance by combining the outputs of several classifiers. Here we present results of a multi-model ensemble applied to multibeam acoustic data covering more than 5000 km² of seabed in the North Sea, with the aim of deriving accurate spatial predictions of seabed substrate. A suite of six machine learning classifiers (k-Nearest Neighbour, Support Vector Machine, Classification Tree, Random Forest, Neural Network and Naïve Bayes) was trained with ground-truth sample data classified into seabed substrate classes, and their prediction accuracy was assessed with an independent set of samples. The three and five best performing models were combined into classifier ensembles. Both ensembles led to increased prediction accuracy as compared to the best performing single classifier. The improvements were, however, not statistically significant at the 5% level. Although the three-model ensemble did not perform significantly better than its individual component models, we noticed that the five-model ensemble did perform significantly better than three of its five component models. A classifier ensemble might therefore be an effective strategy to improve classification performance. Another advantage is the fact that the agreement in predicted substrate class between the individual models of the ensemble can be used as a measure of confidence. We propose a simple and spatially explicit measure of confidence that is based on model agreement and prediction accuracy.
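
    A minimal sketch of such a multiple classifier system, assuming scikit-learn is available: the six classifier families named above are trained on synthetic stand-in data and combined by soft voting; the data set and hyperparameters are illustrative, not those of the North Sea study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for multibeam backscatter/bathymetry features and
# ground-truthed substrate classes.
X, y = make_classification(n_samples=1500, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

members = [
    ("knn", KNeighborsClassifier()),
    ("svm", SVC(probability=True, random_state=0)),
    ("cart", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
    ("nb", GaussianNB()),
]
for name, clf in members:
    print(name, round(accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te)), 3))

# Soft-voting ensemble of all six members; agreement among the members can
# additionally serve as a per-sample confidence measure.
ensemble = VotingClassifier(estimators=members, voting="soft").fit(X_tr, y_tr)
print("ensemble", round(accuracy_score(y_te, ensemble.predict(X_te)), 3))
```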

  16. Combined approach based on principal component analysis and canonical discriminant analysis for investigating hyperspectral plant response

    Directory of Open Access Journals (Sweden)

    Anna Maria Stellacci

    2012-07-01

    Hyperspectral (HS) data represent an extremely powerful means for rapidly detecting crop stress and thus aiding the rational management of natural resources in agriculture. However, the large volume of data poses a challenge for data processing and for extracting crucial information. Multivariate statistical techniques can play a key role in the analysis of HS data, as they may allow both the elimination of redundant information and the identification of synthetic indices which maximize differences among levels of stress. In this paper we propose an integrated approach, based on the combined use of Principal Component Analysis (PCA) and Canonical Discriminant Analysis (CDA), to investigate HS plant response and discriminate plant status. The approach was preliminarily evaluated on a data set collected on durum wheat plants grown under different nitrogen (N) stress levels. Hyperspectral measurements were performed at anthesis through a high-resolution field spectroradiometer, ASD FieldSpec HandHeld, covering the 325-1075 nm region. Reflectance data were first restricted to the interval 510-1000 nm and then divided into five bands of the electromagnetic spectrum [green: 510-580 nm; yellow: 581-630 nm; red: 631-690 nm; red-edge: 705-770 nm; near-infrared (NIR): 771-1000 nm]. PCA was applied to each spectral interval. CDA was performed on the extracted components to identify the factors maximizing the differences among plants fertilised with increasing N rates. Within the green, yellow and red intervals only the first principal component (PC) had an eigenvalue greater than 1 and explained more than 95% of total variance; within the red-edge and NIR ranges, the first two PCs had an eigenvalue higher than 1. Two canonical variables explained cumulatively more than 81% of total variance, and the first was able to discriminate wheat plants fertilised differently, as confirmed also by the significant correlation with aboveground biomass and grain yield parameters. The combined
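
    A minimal sketch of the two-step idea, assuming scikit-learn is available: PCA compresses each spectral band separately, and a canonical discriminant analysis (implemented here with LDA) is run on the extracted components; the band limits follow the text, but the spectra themselves are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)

# Synthetic stand-in for reflectance spectra (510-1000 nm, 1 nm step) of wheat plots
# fertilised at three nitrogen rates (20 plots per rate).
wavelengths = np.arange(510, 1001)
labels = np.repeat(np.arange(3), 20)
spectra = np.vstack([
    0.2 + 0.02 * k + 1e-4 * (wavelengths - 510) + rng.normal(0.0, 0.01, wavelengths.size)
    for k in labels
])

bands = {"green": (510, 580), "yellow": (581, 630), "red": (631, 690),
         "red-edge": (705, 770), "NIR": (771, 1000)}

# Step 1: PCA within each spectral band to compress redundant reflectance values.
features = []
for lo, hi in bands.values():
    cols = (wavelengths >= lo) & (wavelengths <= hi)
    features.append(PCA(n_components=2).fit_transform(spectra[:, cols]))
features = np.hstack(features)

# Step 2: canonical discriminant analysis (LDA here) on the extracted components to
# find the combinations that best separate the nitrogen treatments.
lda = LinearDiscriminantAnalysis(n_components=2).fit(features, labels)
print("between-class variance explained:", lda.explained_variance_ratio_.round(3))
```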

  17. Innovative Approaches to Large Component Packaging

    International Nuclear Information System (INIS)

    Freitag, A.; Hooper, M.; Posivak, E.; Sullivan, J.

    2006-01-01

    Radioactive waste disposal often requires creative approaches to packaging design, especially for large components. Innovative design techniques are required to meet the needs for handling, transporting, and disposing of these large packages. Large components (i.e., Reactor Pressure Vessel (RPV) heads and even RPVs themselves) require special packaging for shielding and contamination control, as well as for transport and disposal. WMG Inc designed and used standard packaging for five RPV heads without control rod drive mechanisms (CRDMs) attached, and has more recently met an even bigger challenge by developing the innovative Intact Vessel Head Transport System (IVHTS) for RPV heads with CRDMs intact. This packaging system has been given a manufacturer's exemption by the United States Department of Transportation (USDOT) for packaging RPV heads. The IVHTS packaging has now been successfully used at two commercial nuclear power plants. Another example of innovative packaging is the large component packaging that WMG designed, fabricated, and utilized at the West Valley Demonstration Project (WVDP). In 2002, West Valley's high-level waste vitrification process was shut down in preparation for D and D of the West Valley Vitrification Facility. Three of the major components of concern within the Vitrification Facility were the Melter, the Concentrate Feed Makeup Tank (CFMT), and the Melter Feed Holdup Tank (MFHT). The removal, packaging, and disposition of these three components presented significant radiological and handling challenges for the project. WMG designed, fabricated, and installed special packaging for the transport and disposal of each of these three components, which eliminated an otherwise time-intensive and costly segmentation process that WVDP was considering. Finally, WMG has also designed and fabricated special packaging for both the Connecticut Yankee (CY) and San Onofre Nuclear Generating Station (SONGS) RPVs. This paper

  18. Analysis of appraisal tool of system security engineering capability maturity based on component

    International Nuclear Information System (INIS)

    Liu Zhenghai; Yang Xiaohua; Zou Shuliang; Liu Yachun; Xiao Jiantian; Liu Zhiming

    2012-01-01

    Spent fuel reprocessing is a part of the nuclear fuel cycle and is an inevitable choice for the sustainable development of nuclear power. Reprocessing must deal with radiological, criticality, and chemical hazards. Besides the traditional appraisal methods based on security goals, the process-based appraisal method of the system security engineering capability maturity model is a beneficial supplement. Experts must check and approve large numbers of documents during an appraisal based on the system security engineering capability maturity model, so it is necessary to develop a tool to assist the experts in completing the appraisal. Component-based software development is highly effective, flexible and reliable. Component technology is analyzed, methods for extracting domain components and general components are introduced, and the appraisal system is developed based on component technology. (authors)

  19. A Network-Based Approach to Modeling and Predicting Product Coconsideration Relations

    Directory of Open Access Journals (Sweden)

    Zhenghui Sha

    2018-01-01

    Understanding customer preferences in consideration decisions is critical to choice modeling in engineering design. While the existing literature has shown that exogenous effects (e.g., product and customer attributes) are deciding factors in customers' consideration decisions, it is not clear how endogenous effects (e.g., the inter-competition among products) influence such decisions. This paper presents a network-based approach based on Exponential Random Graph Models to study customers' consideration behaviors in engineering design. Our proposed approach is capable of modeling the endogenous effects among products through various network structures (e.g., stars and triangles) besides the exogenous effects, and of predicting whether two products will be considered together. To assess the proposed model, we compare it against a dyadic network model that only considers exogenous effects. Using buyer survey data from the China auto market in 2013 and 2014, we evaluate the goodness of fit and the predictive power of the two models. The results show that our model has a better fit and predictive accuracy than the dyadic network model. This underscores the importance of the endogenous effects on customers' consideration decisions. The insights gained from this research help explain how endogenous effects interact with exogenous effects in affecting customers' decision-making.

  20. Modeling crowd behavior based on the discrete-event multiagent approach

    OpenAIRE

    Лановой, Алексей Феликсович; Лановой, Артем Алексеевич

    2014-01-01

    The crowd is a temporary, relatively unorganized group of people who are in close physical contact with each other. The individual behavior of a human outside the crowd is determined by many factors associated with intellectual activities, but inside the crowd the individual loses this identity and begins to obey simpler laws of behavior. One approach to the construction of a multi-level model of the crowd using the discrete-event multiagent approach is described in the paper. Based on this analysi...

  1. A Generalized Adsorption Rate Model Based on the Limiting-Component Constraint in Ion-Exchange Chromatographic Separation for Multicomponent Systems

    DEFF Research Database (Denmark)

    … such that conventional LDF (linear driving force) type models are extended to inactive zones without losing their generality. Based on a limiting-component constraint, an exchange probability kernel is developed for multi-component systems. The LDF-type model with the kernel is continuous in time and along the axial direction. … Two tuning parameters, the concentration layer thickness and the function change rate at the threshold point, are needed for the probability kernels; these are not sensitive to the problems considered.

  2. An Adaptive Agent-Based Model of Homing Pigeons: A Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Francis Oloo

    2017-01-01

    Conventionally, agent-based modelling approaches start from a conceptual model capturing the theoretical understanding of the systems of interest. Simulation outcomes are then used "at the end" to validate the conceptual understanding. In today's data-rich era, there are suggestions that models should be data-driven. Data-driven workflows are common in mathematical models. However, their application to agent-based models is still in its infancy. Integration of real-time sensor data into modelling workflows opens up the possibility of comparing simulations against real data during the model run. Calibration and validation procedures thus become automated processes that are iteratively executed during the simulation. We hypothesize that incorporation of real-time sensor data into agent-based models improves the predictive ability of such models; in particular, that such integration results in increasingly well-calibrated model parameters and rule sets. In this contribution, we explore this question by implementing a flocking model that evolves in real time. Specifically, we use a genetic algorithm approach to simulate representative parameters describing the flight routes of homing pigeons. The navigation parameters of pigeons are simulated, dynamically evaluated against emulated GPS sensor data streams, and optimised based on the fitness of candidate parameters. As a result, the model was able to accurately simulate the relative-turn angles and step-distance of homing pigeons. Further, the optimised parameters could replicate loops, which are common patterns in the flight tracks of homing pigeons. Finally, the use of genetic algorithms in this study allowed for a simultaneous data-driven optimization and sensitivity analysis.
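
    A minimal genetic-algorithm sketch of the data-driven calibration loop: candidate navigation parameters are scored against (emulated) observed movement statistics and evolved by selection, crossover and mutation; the parameters, observations and GA settings are illustrative assumptions.

```python
import random

random.seed(1)

# Emulated GPS-derived observations for one flight leg: target step-distance and
# relative-turn angle that the simulated pigeon should reproduce (illustrative values).
observed = {"step_distance": 12.0, "turn_angle": 8.0}

def fitness(params):
    """Negative error between simulated and observed movement statistics."""
    step, turn = params
    return -(abs(step - observed["step_distance"]) + abs(turn - observed["turn_angle"]))

def mutate(params, rate=0.3):
    return tuple(p + random.gauss(0, 1.0) if random.random() < rate else p for p in params)

def crossover(a, b):
    return tuple(random.choice(pair) for pair in zip(a, b))

# Standard generational GA: keep the fittest half, recombine and mutate to refill.
population = [(random.uniform(0, 30), random.uniform(-45, 45)) for _ in range(40)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

print("best parameters:", tuple(round(p, 2) for p in max(population, key=fitness)))
```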

  3. A Cluster-based Approach Towards Detecting and Modeling Network Dictionary Attacks

    Directory of Open Access Journals (Sweden)

    A. Tajari Siahmarzkooh

    2016-12-01

    In this paper, we provide an approach to detecting network dictionary attacks using a data set collected as flows, from which a clustered graph is obtained. These flows provide an aggregated view of the network traffic in which the packets exchanged in the network are considered, so that more internally connected nodes are clustered together. We show that dictionary attacks can be detected through some parameters, namely the number and the weight of clusters in a time series and their evolution over time. Additionally, a Markov model based on the average weight of clusters is also created. Finally, by means of the suggested model, we demonstrate that artificial clusters of the flows are created for normal and malicious traffic. The results of the proposed approach on the CAIDA 2007 data set suggest a high accuracy for the model and, therefore, it provides a proper method for detecting dictionary attacks.

  4. A Fault Diagnosis Approach for Gears Based on IMF AR Model and SVM

    Directory of Open Access Journals (Sweden)

    Yu Yang

    2008-05-01

    An accurate autoregressive (AR) model can reflect the characteristics of a dynamic system; based on such a model, the fault features of a gear vibration signal can be extracted without constructing a mathematical model or studying the fault mechanism of the gear vibration system, as is required by time-frequency analysis methods. However, an AR model can only be applied to stationary signals, while gear fault vibration signals usually present nonstationary characteristics. Therefore, empirical mode decomposition (EMD), which can decompose the vibration signal into a finite number of intrinsic mode functions (IMFs), is introduced into the feature extraction of gear vibration signals as a preprocessor before AR models are generated. On the other hand, to address the difficulty of obtaining sufficient fault samples in practice, the support vector machine (SVM) is introduced into gear fault pattern recognition. In the method proposed in this paper, vibration signals are first decomposed into a finite number of intrinsic mode functions; then the AR model of each IMF component is established; finally, the corresponding autoregressive parameters and the variance of the residual are regarded as the fault characteristic vectors and used as input parameters of an SVM classifier to classify the working condition of gears. The experimental analysis results show that the proposed approach, in which the IMF AR model and SVM are combined, can identify the working condition of gears with a success rate of 100% even in the case of a smaller number of samples.
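
    A minimal sketch of the EMD-AR-SVM pipeline on synthetic vibration records, assuming the PyEMD package (distributed as "EMD-signal"), statsmodels and scikit-learn are available; the signal generation, IMF count, AR order and train/test split are illustrative assumptions and are not meant to reproduce the paper's results.

```python
import numpy as np
from PyEMD import EMD                      # from the "EMD-signal" package (assumed installed)
from statsmodels.tsa.ar_model import AutoReg
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def ar_features(signal, n_imfs=3, order=4):
    """EMD-decompose a vibration record, fit an AR model to each of the first IMFs,
    and return AR coefficients plus residual variances as one feature vector."""
    imfs = EMD().emd(signal)[:n_imfs]
    feats = []
    for imf in imfs:
        fit = AutoReg(imf, lags=order).fit()
        feats.extend(fit.params)              # intercept + AR coefficients
        feats.append(fit.sigma2)              # variance of the residual
    feats += [0.0] * ((order + 2) * n_imfs - len(feats))   # pad if fewer IMFs were found
    return np.array(feats)

# Synthetic stand-in for healthy (0) and faulty (1) gear vibration records.
t = np.linspace(0.0, 1.0, 1024)
def record(fault):
    base = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=t.size)
    return base + (0.8 * np.sin(2 * np.pi * 200 * t) ** 3 if fault else 0.0)

y = np.array([0] * 10 + [1] * 10)
X = np.array([ar_features(record(label)) for label in y])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```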

  5. A feature-based approach to modeling protein-DNA interactions.

    Directory of Open Access Journals (Sweden)

    Eilon Sharon

    Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF-DNA interactions based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/.
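
    A toy sketch contrasting the PSSM independence assumption with a multi-position feature in a log-linear score; the matrix values, the pairwise feature and its weight are invented for illustration and are not the FMMs learned in the paper.

```python
# Toy position-specific scoring matrix (log-odds, illustrative values) for a
# 4-position binding site; one dictionary of base scores per position.
pssm = [
    {"A": 1.2, "C": -0.8, "G": -0.5, "T": -0.7},
    {"A": -0.6, "C": 1.0, "G": -0.4, "T": -0.9},
    {"A": -0.3, "C": -0.5, "G": 1.1, "T": -0.8},
    {"A": 0.9, "C": -0.7, "G": -0.6, "T": -0.2},
]

def pssm_score(site):
    """Independent-positions score: sum of per-position log-odds."""
    return sum(pssm[i][base] for i, base in enumerate(site))

# A log-linear "feature" spanning two positions, which a PSSM cannot express:
# reward sites whose positions 1 and 2 are jointly "CG".
pair_feature_weight = 0.7

def feature_model_score(site):
    score = pssm_score(site)
    if site[1] == "C" and site[2] == "G":
        score += pair_feature_weight
    return score

for site in ["ACGA", "ACGT", "AAGA"]:
    print(site, round(pssm_score(site), 2), round(feature_model_score(site), 2))
```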

  6. Modeling Oil Exploration and Production: Resource-Constrained and Agent-Based Approaches

    International Nuclear Information System (INIS)

    Jakobsson, Kristofer

    2010-05-01

    Energy is essential to the functioning of society, and oil is the single largest commercial energy source. Some analysts have concluded that the peak in global oil production is about to happen soon, while others disagree. Such incompatible views can persist because the issue of 'peak oil' cuts through the established scientific disciplines. The question is: what characterizes the modeling approaches that are available today, and how can they be further developed to improve a trans-disciplinary understanding of oil depletion? The objective of this thesis is to present long-term scenarios of oil production (Paper I) using a resource-constrained model, and an agent-based model of the oil exploration process (Paper II). It is also an objective to assess the strengths, limitations, and future development potential of resource-constrained modeling, analytical economic modeling, and agent-based modeling. Resource-constrained models are only suitable when the time frame is measured in decades, but they can give a rough indication of which production scenarios are reasonable given the size of the resource. However, the models are comprehensible, transparent and the only feasible long-term forecasting tools at present. It is certainly possible to distinguish between reasonable scenarios, based on historically observed parameter values, and unreasonable scenarios with parameter values obtained through flawed analogy. The economic subfield of optimal depletion theory is founded on the notion of rational economic agents, and there is a causal relation between decisions made at the micro-level and the macro-result. In terms of future improvements, however, the analytical form considerably restricts the versatility of the approach. Agent-based modeling makes it feasible to combine economically motivated agents with a physical environment. An example relating to oil exploration is given in Paper II, where it is shown that the exploratory activities of individual

  7. Composing simulations using persistent software components

    Energy Technology Data Exchange (ETDEWEB)

    Holland, J.V.; Michelsen, R.E.; Powell, D.R.; Upton, S.C.; Thompson, D.R.

    1999-03-01

    The traditional process for developing large-scale simulations is cumbersome, time consuming, costly, and in some cases, inadequate. The topics of software components and component-based software engineering are being explored by software professionals in academic and industrial settings. A component is a well-delineated, relatively independent, and replaceable part of a software system that performs a specific function. Many researchers have addressed the potential to derive a component-based approach to simulations in general, and a few have focused on military simulations in particular. In a component-based approach, functional or logical blocks of the simulation entities are represented as coherent collections of components satisfying explicitly defined interface requirements. A simulation is a top-level aggregate composed of a collection of components that interact with each other in the context of a simulated environment. A component may represent a simulation artifact, an agent, or any entity that can generate events affecting itself, other simulated entities, or the state of the system. The component-based approach promotes code reuse, contributes to reducing time spent validating or verifying models, and promises to reduce the cost of development while still delivering tailored simulations specific to analysis questions. The Integrated Virtual Environment for Simulation (IVES) is a composition-centered framework to achieve this potential. IVES is a Java implementation of simulation composition concepts developed at Los Alamos National Laboratory for use in several application domains. In this paper, its use in the military domain is demonstrated via the simulation of dismounted infantry in an urban environment.

  8. Towards a CPN-Based Modelling Approach for Reconciling Verification and Implementation of Protocol Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2013-01-01

    Formal modelling of protocols is often aimed at one specific purpose such as verification or automatically generating an implementation. This leads to models that are useful for one purpose, but not for others. Being able to derive models for verification and implementation from a single model...... is beneficial both in terms of reduced total modelling effort and confidence that the verification results are valid also for the implementation model. In this paper we introduce the concept of a descriptive specification model and an approach based on refining a descriptive model to target both verification...... how this model can be refined to target both verification and implementation....

  9. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  10. A Component-Based Approach for Securing Indoor Home Care Applications.

    Science.gov (United States)

    Agirre, Aitor; Armentia, Aintzane; Estévez, Elisabet; Marcos, Marga

    2017-12-26

    eHealth systems have adopted recent advances on sensing technologies together with advances in information and communication technologies (ICT) in order to provide people-centered services that improve the quality of life of an increasingly elderly population. As these eHealth services are founded on the acquisition and processing of sensitive data (e.g., personal details, diagnosis, treatments and medical history), any security threat would damage the public's confidence in them. This paper proposes a solution for the design and runtime management of indoor eHealth applications with security requirements. The proposal allows applications definition customized to patient particularities, including the early detection of health deterioration and suitable reaction (events) as well as security needs. At runtime, security support is twofold. A secured component-based platform supervises applications execution and provides events management, whilst the security of the communications among application components is also guaranteed. Additionally, the proposed event management scheme adopts the fog computing paradigm to enable local event related data storage and processing, thus saving communication bandwidth when communicating with the cloud. As a proof of concept, this proposal has been validated through the monitoring of the health status in diabetic patients at a nursing home.

  11. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
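
    A small sketch of sparse PCA for shape data, including the re-ordering of modes by decreasing variance of their scores discussed above, is given below. It uses scikit-learn's SparsePCA rather than the specific algorithm reviewed in the article, and the shape matrix is a synthetic placeholder.

```python
# Sketch: sparse PCA on a shape matrix (rows: shapes, columns: landmark coordinates),
# with modes re-ordered by decreasing variance of their scores.
import numpy as np
from sklearn.decomposition import SparsePCA

X = np.random.default_rng(0).normal(size=(40, 60))     # placeholder shape data
Xc = X - X.mean(axis=0)                                 # centre the shapes

spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(Xc)
scores = spca.transform(Xc)

order = np.argsort(scores.var(axis=0))[::-1]            # order modes by variance
loadings = spca.components_[order]
print("variance per mode:", np.round(scores.var(axis=0)[order], 3))
```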

  12. A Framework-Based Approach for Fault-Tolerant Service Robots

    Directory of Open Access Journals (Sweden)

    Heejune Ahn

    2012-11-01

    Full Text Available Recently the component-based approach has become a major trend in intelligent service robot development due to its reusability and productivity. The framework in a component-based system should provide essential services for application components. However, to our knowledge the existing robot frameworks do not yet support fault tolerance service. Moreover, it is often believed that faults can be handled only at the application level. In this paper, by extending the robot framework with the fault tolerance function, we argue that the framework-based fault tolerance approach is feasible and even has many benefits, including that: 1 the system integrators can build fault tolerance applications from non-fault-aware components; 2 the constraints of the components and the operating environment can be considered at the time of integration, which – cannot be anticipated eaily at the time of component development; 3 consistency in system reliability can be obtained even in spite of diverse application component sources. In the proposed construction, we build XML rule files defining the rules for probing and determining the fault conditions of each component, contamination cases from a faulty component, and the possible recovery and safety methods. The rule files are established by a system integrator and the fault manager in the framework controls the fault tolerance process according to the rules. We demonstrate that the fault-tolerant framework can incorporate widely accepted fault tolerance techniques. The effectiveness and real-time performance of the framework-based approach and its techniques are examined by testing an autonomous mobile robot in typical fault scenarios.

  13. Model based approach to Study the Impact of Biofuels on the Sustainability of an Ecological System

    Science.gov (United States)

    The importance and complexity of sustainability has been well recognized and a formal study of sustainability based on system theory approaches is imperative as many of the relationships between various components of the ecosystem could be nonlinear, intertwined and non intuitive...

  14. A dynamic texture based approach to recognition of facial actions and their temporal models

    NARCIS (Netherlands)

    Koelstra, Sander; Pantic, Maja; Patras, Ioannis (Yannis)

    2010-01-01

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the

  15. Nernst-Planck Based Description of Transport, Coulombic Interactions and Geochemical Reactions in Porous Media: Modeling Approach and Benchmark Experiments

    DEFF Research Database (Denmark)

    Rolle, Massimo; Sprocati, Riccardo; Masi, Matteo

    2018-01-01

    ‐ but also under advection‐dominated flow regimes. To accurately describe charge effects in flow‐through systems, we propose a multidimensional modeling approach based on the Nernst‐Planck formulation of diffusive/dispersive fluxes. The approach is implemented with a COMSOL‐PhreeqcRM coupling allowing us......, and high‐resolution experimental datasets. The latter include flow‐through experiments that have been carried out in this study to explore the effects of electrostatic interactions in fully three‐dimensional setups. The results of the simulations show excellent agreement for all the benchmarks problems...... the quantification and visualization of the specific contributions to the diffusive/dispersive Nernst‐Planck fluxes, including the Fickian component, the term arising from the activity coefficient gradients, and the contribution due to electromigration....
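
    For orientation, one common way to write the Nernst-Planck flux with the three contributions named above is the standard textbook form below; this is given for reference and is not necessarily the exact formulation implemented in the coupling.

```latex
% Nernst-Planck flux of species i, split into Fickian diffusion, the
% activity-coefficient-gradient term, and electromigration:
\begin{equation}
  \mathbf{J}_i \;=\;
  \underbrace{-D_i \nabla c_i}_{\text{Fickian}}
  \;\underbrace{-\,D_i c_i \nabla \ln \gamma_i}_{\text{activity gradients}}
  \;\underbrace{-\,\frac{z_i F}{R T}\, D_i c_i \nabla \phi}_{\text{electromigration}}
\end{equation}
```

    Here D_i is the diffusion coefficient, c_i the concentration, gamma_i the activity coefficient, z_i the charge number, phi the electrostatic potential, F the Faraday constant, R the gas constant and T the temperature.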

  16. The construction of life prediction models for the design of Stirling engine heater components

    Science.gov (United States)

    Petrovich, A.; Bright, A.; Cronin, M.; Arnold, S.

    1983-01-01

    The service life of Stirling-engine heater structures of Fe-based high-temperature alloys is predicted using a numerical model based on a linear-damage approach and published test data (engine test data for a Co-based alloy and tensile-test results for both the Co-based and the Fe-based alloys). The operating principle of the automotive Stirling engine is reviewed; the economic and technical factors affecting the choice of heater material are surveyed; the test results are summarized in tables and graphs; the engine environment and automotive duty cycle are characterized; and the modeling procedure is explained. It is found that the statistical scatter of the fatigue properties of the heater components needs to be reduced (by decreasing the porosity of the cast material or employing wrought material in fatigue-prone locations) before the accuracy of life predictions can be improved.
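
    The linear-damage approach mentioned above is commonly expressed by the Palmgren-Miner rule; whether the study uses exactly this form is an assumption, but it conveys the idea of summing cycle-ratio damage across operating conditions.

```latex
% Linear damage accumulation (Palmgren-Miner rule): failure is predicted when
% the accumulated damage D reaches unity.
\begin{equation}
  D \;=\; \sum_{i} \frac{n_i}{N_i} \;\ge\; 1
\end{equation}
```

    Here n_i is the number of cycles experienced at load/temperature condition i and N_i is the number of cycles to failure at that condition.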

  17. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yihang Yin

    2015-08-01

    Full Text Available Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
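
    The final step, PCA compression with an error-bound guarantee, can be sketched as below. This only illustrates that step (not the clustering or cluster-head selection), and the sensor readings and error bound are placeholders.

```python
# Sketch: keep the fewest principal components whose reconstruction error
# stays below a bound; rows are time samples, columns are sensors in a cluster.
import numpy as np

def compress_with_error_bound(X, max_rmse=0.05):
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    for k in range(1, len(s) + 1):
        Xhat = (U[:, :k] * s[:k]) @ Vt[:k]              # rank-k reconstruction
        if np.sqrt(np.mean((Xc - Xhat) ** 2)) <= max_rmse:
            break
    # Only the k score vectors, k loading vectors and the mean need transmitting.
    return U[:, :k] * s[:k], Vt[:k], mean

scores, loadings, mean = compress_with_error_bound(
    np.random.default_rng(1).normal(size=(100, 12)))
print(scores.shape, loadings.shape)
```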

  18. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    Science.gov (United States)

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.

  19. Modeling of Turbine Cycles Using a Neuro-Fuzzy Based Approach to Predict Turbine-Generator Output for Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Yea-Kuang Chan

    2012-01-01

    Full Text Available Due to the very complex sets of component systems, interrelated thermodynamic processes and seasonal changes in operating conditions, it is relatively difficult to find an accurate model for the turbine cycle of nuclear power plants (NPPs). This paper deals with the modeling of turbine cycles to predict turbine-generator output using an adaptive neuro-fuzzy inference system (ANFIS) for Unit 1 of the Kuosheng NPP in Taiwan. Plant operation data obtained from Kuosheng NPP between 2006 and 2011 were verified using a linear regression model with a 95% confidence interval. The key parameters of the turbine cycle, including turbine throttle pressure, condenser backpressure, feedwater flow rate and final feedwater temperature, are selected as inputs for the ANFIS-based turbine cycle model. In addition, a thermodynamic turbine cycle model was developed using the commercial software PEPSE® to compare the performance of the ANFIS-based turbine cycle model. The results show that the proposed ANFIS-based turbine cycle model is capable of accurately estimating turbine-generator output and provides more reliable results than the PEPSE®-based turbine cycle models. Moreover, test results show that ANFIS performed better than the artificial neural network (ANN), which was also tried for modeling the turbine cycle. The effectiveness of the proposed neuro-fuzzy-based turbine cycle model was demonstrated using the actual operating data of Kuosheng NPP. Furthermore, the results also provide an alternative approach to evaluate the thermal performance of nuclear power plants.

  20. New approach for risk based inspection of H2S based Process Plants

    International Nuclear Information System (INIS)

    Vinod, Gopika; Sharma, Pavan K.; Santosh, T.V.; Hari Prasad, M.; Vaze, K.K.

    2014-01-01

    Highlights: • The study looks into improving the consequence evaluation in risk based inspection. • Ways to revise the quantity factors used in the qualitative approach. • New approach based on computational fluid dynamics along with probit mathematics. • Demonstrated this methodology along with a suitable case study for the said issue. - Abstract: Recent trends in risk informed and risk based approaches to life management issues have certainly put the focus on developing estimation methods for real risk. The idea of employing risk as an optimising measure for in-service inspection, termed risk based inspection, was accepted in principle from the late 1980s. While applying risk based inspection, the consequence of failure of each component needs to be assessed. Consequence evaluation in a Process Plant is a crucial task. It may be noted that, in general, the number of components to be considered for life management is very large, and hence the consequence evaluation resulting from their failures (individually) is a laborious task. Screening of critical components is usually carried out using a simplified qualitative approach, which primarily uses influence factors for categorisation. This necessitates a logical formulation of influence factors and their ranges, with a suitable technical basis, for acceptance by regulators. This paper describes the application of risk based inspection to an H2S-based Process Plant, along with the approach devised for handling the influence factor related to the quantity of H2S released

  1. On Approaches to Analyze the Sensitivity of Simulated Hydrologic Fluxes to Model Parameters in the Community Land Model

    Directory of Open Access Journals (Sweden)

    Jie Bao

    2015-12-01

    Full Text Available Effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash–Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
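
    One of the four SA approaches named above, standardized regression coefficients from a linear regression model, can be sketched in a few lines. The parameter samples and response below are synthetic placeholders, not Community Land Model output.

```python
# Sketch: standardized regression coefficients as sensitivity indices.
import numpy as np

def standardized_regression_coefficients(params, response):
    """params: (n_samples, n_params); response: (n_samples,)."""
    Xs = (params - params.mean(axis=0)) / params.std(axis=0)
    ys = (response - response.mean()) / response.std()
    A = np.column_stack([np.ones(len(ys)), Xs])          # intercept + inputs
    beta, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return beta[1:]                                       # magnitude ranks importance

rng = np.random.default_rng(2)
P = rng.uniform(size=(200, 10))                           # 10 sampled parameters
y = 3 * P[:, 0] - 1.5 * P[:, 3] + rng.normal(scale=0.1, size=200)
print(np.round(standardized_regression_coefficients(P, y), 2))
```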

  2. A multi-component and multi-failure mode inspection model based on the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan; Pecht, Michael

    2010-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practices have been reported in many papers and case studies. For a system comprised of many components and subject to many different failure modes, one of the most convenient ways to model the inspection and failure processes is to use a stochastic point process for defect arrivals and a common delay time distribution for the duration between defect arrival and failure for all defects. This is an approximation, but it has been proven to be valid when the number of components is large. However, for a system with just a few key components subject to only a few major failure modes, the approximation may be poor. In this paper, a model is developed to address this situation, where each component and failure mode is modelled individually and then pooled together to form the system inspection model. Since inspections are usually scheduled for the whole system rather than for individual components, we then formulate the inspection model for the case where the time to the next inspection from the point of a component failure renewal is random. This adds some complexity to the model, and an asymptotic solution was found. Simulation algorithms have also been proposed as a comparison to the analytical results. A numerical example is presented to demonstrate the model.
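
    The delay time idea can be illustrated with a short Monte Carlo sketch for a single component and a single failure mode: defects arrive as a Poisson process, each has a random delay before becoming a failure, and periodic inspections remove defects found before they fail. This is a simplified illustration (renewals after failure and random inspection times are ignored), and all rates are placeholders.

```python
# Sketch: count failures versus defects caught at inspections under the
# delay-time concept.
import numpy as np

def simulate(rate=0.02, mean_delay=30.0, inspect_every=50.0, horizon=5000.0, seed=3):
    rng = np.random.default_rng(seed)
    t, failures, detected = 0.0, 0, 0
    while True:
        t += rng.exponential(1.0 / rate)               # next defect arrival
        if t > horizon:
            break
        fail_time = t + rng.exponential(mean_delay)    # delay until failure
        next_inspection = np.ceil(t / inspect_every) * inspect_every
        if fail_time < next_inspection:
            failures += 1                               # defect becomes a failure
        else:
            detected += 1                               # caught at inspection
    return failures, detected

print(simulate())
```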

  3. Composition-Based Prediction of Temperature-Dependent Thermophysical Food Properties: Reevaluating Component Groups and Prediction Models.

    Science.gov (United States)

    Phinney, David Martin; Frelka, John C; Heldman, Dennis Ray

    2017-01-01

    Prediction of temperature-dependent thermophysical properties (thermal conductivity, density, specific heat, and thermal diffusivity) is an important component of process design for food manufacturing. Current models for prediction of thermophysical properties of foods are based on the composition, specifically the fat, carbohydrate, protein, fiber, water, and ash contents, all of which change with temperature. The objectives of this investigation were to reevaluate and improve the prediction expressions for thermophysical properties. Previously published data were analyzed over the temperature range from 10 to 150 °C. These data were analyzed to create a series of relationships between the thermophysical properties and temperature for each food component, as well as to identify the dependence of the thermophysical properties on more specific structural properties of the fats, carbohydrates, and proteins. Results from this investigation revealed that the relationships between the thermophysical properties of the major constituents of foods and temperature can be statistically described by linear expressions, in contrast to the current polynomial models. Links between variability in thermophysical properties and structural properties were observed. Relationships for several thermophysical properties based on more specific constituents have been identified. Distinctions between simple sugars (fructose, glucose, and lactose) and complex carbohydrates (starch, pectin, and cellulose) have been proposed. The relationships between the thermophysical properties and proteins revealed a potential correlation with the molecular weight of the protein. Relating variability in constituent thermophysical properties to structural properties, such as molecular mass, could significantly improve composition-based prediction models and, consequently, the effectiveness of process design. © 2016 Institute of Food Technologists®.
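
    The composition-based approach with linear temperature dependence per constituent can be sketched as a mass-fraction-weighted sum, property(T) = sum over components of x_i (a_i + b_i T). The coefficients below are illustrative placeholders, not the values fitted in the study.

```python
# Sketch: composition-based thermal conductivity with linear T-dependence
# per component; coefficients are hypothetical placeholders.
COEFFS = {                      # a_i [W/(m K)], b_i [W/(m K per degC)]
    "water":        (0.57, 0.0016),
    "protein":      (0.18, 0.0012),
    "fat":          (0.18, -0.0003),
    "carbohydrate": (0.20, 0.0014),
    "ash":          (0.33, 0.0014),
}

def thermal_conductivity(mass_fractions: dict, temp_c: float) -> float:
    """Mass-fraction-weighted, temperature-dependent thermal conductivity."""
    return sum(x * (COEFFS[c][0] + COEFFS[c][1] * temp_c)
               for c, x in mass_fractions.items())

print(thermal_conductivity({"water": 0.75, "protein": 0.05, "fat": 0.03,
                            "carbohydrate": 0.16, "ash": 0.01}, 60.0))
```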

  4. TOGA: A TOUGH code for modeling three-phase, multi-component, and non-isothermal processes involved in CO2-based Enhanced Oil Recovery

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Lehua [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Oldenburg, Curtis M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States)

    2016-10-10

    TOGA is a numerical reservoir simulator for modeling non-isothermal flow and transport of water, CO2, multicomponent oil, and related gas components for applications including CO2-enhanced oil recovery (CO2-EOR) and geologic carbon sequestration in depleted oil and gas reservoirs. TOGA uses an approach based on the Peng-Robinson equation of state (PR-EOS) to calculate the thermophysical properties of the gas and oil phases including the gas/oil components dissolved in the aqueous phase, and uses a mixing model to estimate the thermophysical properties of the aqueous phase. The phase behavior (e.g., occurrence and disappearance of the three phases, gas + oil + aqueous) and the partitioning of non-aqueous components (e.g., CO2, CH4, and n-oil components) between coexisting phases are modeled using K-values derived from assumptions of equal-fugacity that have been demonstrated to be very accurate as shown by comparison to measured data. Models for saturated (water) vapor pressure and water solubility (in the oil phase) are used to calculate the partitioning of the water (H2O) component between the gas and oil phases. All components (e.g., CO2, H2O, and n hydrocarbon components) are allowed to be present in all phases (aqueous, gaseous, and oil). TOGA uses a multiphase version of Darcy’s Law to model flow and transport through porous media of mixtures with up to three phases over a range of pressures and temperatures appropriate to hydrocarbon recovery and geologic carbon sequestration systems. Transport of the gaseous and dissolved components is by advection and Fickian molecular diffusion. New methods for phase partitioning and thermophysical property modeling in TOGA have been validated against experimental data published in the literature for describing phase partitioning and phase behavior. Flow and transport has been verified by testing against related TOUGH2 EOS modules and
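
    For reference, the Peng-Robinson equation of state referred to above has the standard single-component form below; the mixture parameters used in a multicomponent simulator follow mixing rules that are not shown here.

```latex
% Peng-Robinson equation of state (standard form):
\begin{equation}
  P \;=\; \frac{R T}{V_m - b} \;-\; \frac{a\,\alpha(T)}{V_m^2 + 2 b V_m - b^2},
\end{equation}
% with, for each component,
\begin{equation}
  a = 0.45724\,\frac{R^2 T_c^2}{P_c},\qquad
  b = 0.07780\,\frac{R T_c}{P_c},\qquad
  \alpha(T) = \bigl[1 + \kappa\,(1 - \sqrt{T/T_c})\bigr]^2,
\end{equation}
\begin{equation}
  \kappa = 0.37464 + 1.54226\,\omega - 0.26992\,\omega^2 .
\end{equation}
```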

  5. Connected Component Model for Multi-Object Tracking.

    Science.gov (United States)

    He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan

    2016-08-01

    In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
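
    The equivalence-partitioning step, grouping observations that share candidate associations into independent subproblems, can be illustrated with a union-find structure. This is an illustration of the partitioning idea, not the CCM implementation, and the candidate-association list is a placeholder.

```python
# Sketch: partition candidate associations into independent connected components.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def partition(candidate_associations):
    """candidate_associations: iterable of (observation_t, observation_t_plus_1)."""
    uf = UnionFind()
    for a, b in candidate_associations:
        uf.union(a, b)
    groups = {}
    for a, b in candidate_associations:
        groups.setdefault(uf.find(a), []).append((a, b))
    return list(groups.values())            # each group is an independent subproblem

print(partition([("t1_a", "t2_a"), ("t2_a", "t3_b"), ("t1_c", "t2_d")]))
```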

  6. Modeling the variability of solar radiation data among weather stations by means of principal components analysis

    International Nuclear Information System (INIS)

    Zarzo, Manuel; Marti, Pau

    2011-01-01

    Research highlights: → Principal components analysis was applied to Rs data recorded at 30 stations. → Four principal components explain 97% of the data variability. → The latent variables can be fitted according to latitude, longitude and altitude. → The PCA approach is more effective for gap infilling than conventional approaches. → The proposed method allows daily Rs estimations at locations in the area of study. - Abstract: Measurements of global terrestrial solar radiation (Rs) are commonly recorded in meteorological stations. Daily variability of Rs has to be taken into account for the design of photovoltaic systems and energy efficient buildings. Principal components analysis (PCA) was applied to Rs data recorded at 30 stations on the Mediterranean coast of Spain. Due to equipment failures and site operation problems, time series of Rs often present data gaps or discontinuities. The PCA approach copes with this problem and allows estimation of present and past values by taking advantage of Rs records from nearby stations. The gap infilling performance of this methodology is compared with neural networks and alternative conventional approaches. Four principal components explain 66% of the data variability with respect to the average trajectory (97% if non-centered values are considered). A new method based on principal components regression was also developed for Rs estimation if previous measurements are not available. By means of multiple linear regression, it was found that the latent variables associated with the four relevant principal components can be fitted according to the latitude, longitude and altitude of the station where the data were recorded. Additional geographical or climatic variables did not increase the predictive goodness-of-fit. The resulting models allow the estimation of daily Rs values at any location in the area under study and present higher accuracy than artificial neural networks and some conventional approaches

  7. Condition monitoring with Mean field independent components analysis

    DEFF Research Database (Denmark)

    Pontoppidan, Niels Henrik; Sigurdsson, Sigurdur; Larsen, Jan

    2005-01-01

    We discuss condition monitoring based on mean field independent components analysis of acoustic emission energy signals. Within this framework it is possible to formulate a generative model that explains the sources, their mixing and also the noise statistics of the observed signals. By using...... a novelty approach we may detect unseen faulty signals as indeed faulty with high precision, even though the model learns only from normal signals. This is done by evaluating the likelihood that the model generated the signals and adapting a simple threshold for decision. Acoustic emission energy signals...... from a large diesel engine are used to demonstrate this approach. The results show that mean field independent components analysis gives better fault detection than principal components analysis, while at the same time selecting a more compact model...

  8. A model-based combinatorial optimisation approach for energy-efficient processing of microalgae

    NARCIS (Netherlands)

    Slegers, P.M.; Koetzier, B.J.; Fasaei, F.; Wijffels, R.H.; Straten, van G.; Boxtel, van A.J.B.

    2014-01-01

    The analyses of algae biorefinery performance are commonly based on fixed performance data for each processing step. In this work, we demonstrate a model-based combinatorial approach to derive the design-specific upstream energy consumption and biodiesel yield in the production of biodiesel from

  9. Tribocorrosion in pressurized high temperature water: a mass flow model based on the third body approach

    Energy Technology Data Exchange (ETDEWEB)

    Guadalupe Maldonado, S.

    2014-07-01

    Pressurized water reactors (PWR) used for power generation are operated at elevated temperatures (280-300 °C) and under high pressure (120-150 bar). In addition to these harsh environmental conditions, some components of the PWR assemblies are subject to mechanical loading (sliding, vibration and impacts) leading to undesirable and hardly controllable material degradation phenomena. In such situations wear is determined by the complex interplay (tribocorrosion) between mechanical, material and physical-chemical phenomena. Tribocorrosion in PWR conditions is at present little understood, and models need to be developed in order to predict component lifetime over several decades. The goal of this project, carried out in collaboration with the French company AREVA NP, is to develop a predictive model based on the mechanistic understanding of tribocorrosion of specific PWR components (stainless steel control assemblies, stellite grippers). The approach taken here is to describe degradation in terms of electro-chemical and mechanical material flows (third body concept of tribology) from the metal into the friction film (i.e. the oxidized film forming during rubbing on the metal surface) and from the friction film into the environment, instead of simple mass loss considerations. The project involves the establishment of mechanistic models for describing the single flows based on ad-hoc tribocorrosion measurements operating at low temperature. The overall behaviour at high temperature and pressure is investigated using a dedicated tribometer (Aurore) including electrochemical control of the contact during rubbing. Physical laws describing the individual flows according to defined mechanisms and as a function of defined physical parameters were identified based on the obtained experimental results and on literature data. The physical laws were converted into mass flow rates and solved as a differential equation system by considering the mass balance in compartments

  10. Quantification of uncertainties in turbulence modeling: A comparison of physics-based and random matrix theoretic approaches

    International Nuclear Information System (INIS)

    Wang, Jian-Xun; Sun, Rui; Xiao, Heng

    2016-01-01

    Highlights: • Compared physics-based and random matrix methods to quantify RANS model uncertainty. • Demonstrated applications of both methods in channel flow over periodic hills. • Examined the amount of information introduced in the physics-based approach. • Discussed implications to modeling turbulence in both near-wall and separated regions. - Abstract: Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, the RANS predictions have large model-form uncertainties for many complex flows, e.g., those with non-parallel shear layers or strong mean flow curvature. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging to the current form of the physics-based approach. Another recently proposed method based on random matrix theory provides the prior distributions with maximum entropy, which is an alternative for model-form uncertainty quantification in RANS simulations. This method has better mathematical rigor and provides the most non-committal prior distributions without introducing artificial constraints. On the other hand, the physics-based approach has the advantage of being more flexible to incorporate available physical insights. In this work, we compare and discuss the advantages and disadvantages of the two approaches on model-form uncertainty quantification. In addition, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in the physics-based approach. The comparison is conducted through a test case using a canonical flow, the flow past

  11. GENERIC, COMPONENT FAILURE DATA BASE FOR LIGHT WATER AND LIQUID SODIUM REACTOR PRAs

    Energy Technology Data Exchange (ETDEWEB)

    S. A. Eide; S. V. Chmielewski; T. D. Swantz

    1990-02-01

    A comprehensive generic component failure data base has been developed for light water and liquid sodium reactor probabilistic risk assessments (PRAs). The Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) and the Centralized Reliability Data Organization (CREDO) data bases were used to generate component failure rates. Using this approach, most of the failure rates are based on actual plant data rather than existing estimates.

  12. Exploring a minimal two-component p53 model

    International Nuclear Information System (INIS)

    Sun, Tingzhe; Zhu, Feng; Shen, Pingping; Yuan, Ruoshi; Xu, Wei

    2010-01-01

    The tumor suppressor p53 coordinates many attributes of cellular processes via interlocked feedback loops. To understand the biological implications of feedback loops in a p53 system, a two-component model which encompasses the essential feedback loops was constructed and further explored. Diverse bifurcation properties, such as bistability and oscillation, emerge by manipulating the feedback strength. The p53-mediated MDM2 induction dictates the bifurcation patterns. We first identified irradiation dichotomy in p53 models and further proposed that bistability and oscillation can behave in a coordinated manner. Further sensitivity analysis revealed that p53 basal production and MDM2-mediated p53 degradation, which are central to cellular control, are the most sensitive processes. We also identified that the much larger variations in the amplitude of p53 pulses observed in experiments can be derived from the overall amplitude parameter sensitivity. The combined approach with bifurcation analysis, stochastic simulation and sampling-based sensitivity analysis not only gives crucial insights into the dynamics of the p53 system, but also creates a fertile ground for understanding the regulatory patterns of other biological networks
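
    A generic two-component p53-MDM2 negative-feedback system can be written as a pair of ODEs and integrated numerically, as sketched below. This is a common textbook-style formulation given for orientation, not necessarily the exact equations of the study, and all parameter values are illustrative placeholders; the resulting dynamics (steady state, bistability or oscillation) depend on the chosen parameters.

```python
# Sketch: two-component p53-MDM2 negative feedback integrated with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

def p53_mdm2(t, y, beta_p=1.0, alpha_p=0.1, k_deg=1.2,
             beta_m=1.0, K=0.5, n=4, alpha_m=0.4):
    p, m = y
    dp = beta_p - alpha_p * p - k_deg * m * p            # MDM2-mediated p53 degradation
    dm = beta_m * p**n / (K**n + p**n) - alpha_m * m     # p53-induced MDM2 production
    return [dp, dm]

sol = solve_ivp(p53_mdm2, (0.0, 100.0), [0.1, 0.1], max_step=0.1)
print(np.round(sol.y[:, -1], 3))                         # long-time p53 and MDM2 levels
```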

  13. Developing a Model Component

    Science.gov (United States)

    Fields, Christina M.

    2013-01-01

    The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles, including all Space Station requirements necessary for the MPLM Modules. The Simulation uses GSE Models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink. I completed training for UNIX and Simulink. The dryer is a Catch All replaceable-core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-dryer also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining the effects it has on the pressure and velocity of the system. I used Bernoulli's Equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in Simulation meetings and was involved in the subsystem design process and team collaborations. I gained valuable work experience and insight into a career path as an engineer.
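
    For reference, Bernoulli's equation between the dryer inlet (station 1) and outlet (station 2) relates the pressure and velocity differential across the component; in practice a head-loss term for the filter core would typically be added, which is not shown in this standard form.

```latex
% Bernoulli's equation between inlet (1) and outlet (2):
\begin{equation}
  p_1 + \tfrac{1}{2}\rho v_1^2 + \rho g z_1
  \;=\; p_2 + \tfrac{1}{2}\rho v_2^2 + \rho g z_2
\end{equation}
```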

  14. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    Science.gov (United States)

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support the design and deployment of integrations of new components, as well as for the analysis, verification, simulation and testing needed to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object systems (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, scalability, interoperability, and correctness of component cooperation.

  15. A Component-Based Approach for Securing Indoor Home Care Applications

    Science.gov (United States)

    Estévez, Elisabet

    2017-01-01

    eHealth systems have adopted recent advances on sensing technologies together with advances in information and communication technologies (ICT) in order to provide people-centered services that improve the quality of life of an increasingly elderly population. As these eHealth services are founded on the acquisition and processing of sensitive data (e.g., personal details, diagnosis, treatments and medical history), any security threat would damage the public’s confidence in them. This paper proposes a solution for the design and runtime management of indoor eHealth applications with security requirements. The proposal allows applications definition customized to patient particularities, including the early detection of health deterioration and suitable reaction (events) as well as security needs. At runtime, security support is twofold. A secured component-based platform supervises applications execution and provides events management, whilst the security of the communications among application components is also guaranteed. Additionally, the proposed event management scheme adopts the fog computing paradigm to enable local event related data storage and processing, thus saving communication bandwidth when communicating with the cloud. As a proof of concept, this proposal has been validated through the monitoring of the health status in diabetic patients at a nursing home. PMID:29278370

  16. A Component-Based Approach for Securing Indoor Home Care Applications

    Directory of Open Access Journals (Sweden)

    Aitor Agirre

    2017-12-01

    Full Text Available eHealth systems have adopted recent advances on sensing technologies together with advances in information and communication technologies (ICT) in order to provide people-centered services that improve the quality of life of an increasingly elderly population. As these eHealth services are founded on the acquisition and processing of sensitive data (e.g., personal details, diagnosis, treatments and medical history), any security threat would damage the public’s confidence in them. This paper proposes a solution for the design and runtime management of indoor eHealth applications with security requirements. The proposal allows applications definition customized to patient particularities, including the early detection of health deterioration and suitable reaction (events) as well as security needs. At runtime, security support is twofold. A secured component-based platform supervises applications execution and provides events management, whilst the security of the communications among application components is also guaranteed. Additionally, the proposed event management scheme adopts the fog computing paradigm to enable local event related data storage and processing, thus saving communication bandwidth when communicating with the cloud. As a proof of concept, this proposal has been validated through the monitoring of the health status in diabetic patients at a nursing home.

  17. FEM-based neural-network approach to nonlinear modeling with application to longitudinal vehicle dynamics control.

    Science.gov (United States)

    Kalkkuhl, J; Hunt, K J; Fritz, H

    1999-01-01

    A finite-element method (FEM)-based neural-network approach to Nonlinear AutoRegressive with eXogenous input (NARX) modeling is presented. The method uses multilinear interpolation functions on C0 rectangular elements. The local and global structure of the resulting model is analyzed. It is shown that the model can be interpreted both as a local model network and as a single-layer feedforward neural network. The main aim is to use the model for nonlinear control design. The proposed FEM NARX description is easily accessible to feedback linearizing control techniques. Its use with a two-degrees-of-freedom nonlinear internal model controller is discussed. The approach is applied to modeling of the nonlinear longitudinal dynamics of an experimental lorry, using measured data. The modeling results are compared with local model network and multilayer perceptron approaches. A nonlinear speed controller was designed based on the identified FEM model. The controller was implemented in a test vehicle, and several experimental results are presented.

  18. A Unified Approach to Model-Based Planning and Execution

    Science.gov (United States)

    Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)

    2000-01-01

    Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implication of the structure of the architecture, mainly in the area of reactivity and interaction between reactive and deliberative decision making. We conclude with related work and current status.

  19. A Component-Based Study of the Effect of Diameter on Bond and Anchorage Characteristics of Blind-Bolted Connections.

    Directory of Open Access Journals (Sweden)

    Muhammad Nasir Amin

    Full Text Available Structural hollow sections are gaining worldwide importance due to their structural and architectural advantages over open steel sections. The main obstacle to their use is their connection with other structural members. The difficulty of tightening a bolt from one side gave birth to the concept of blind bolts. Blind bolts, being the practical solution to the connection hindrance for the use of hollow and concrete-filled hollow sections, play a vital role. Flowdrill, the Huck High Strength Blind Bolt and the Lindapter Hollobolt are the well-known commercially available blind bolts. Although the development of blind bolts has largely resolved this issue, the use of structural hollow sections remains limited to shear resistance. Therefore, a new modified version of the blind bolt, known as the "Extended Hollo-Bolt" (EHB) due to its enhanced capacity for bonding with concrete, can overcome the issue of the low moment resistance capacity associated with blind-bolted connections. The load transfer mechanism of this recently developed blind bolt remains unclear, however. This study uses a parametric approach to characterising the EHB, using diameter as the variable parameter. Stiffness and load-carrying capacity were evaluated at two different bolt sizes. To investigate the load transfer mechanism, a component-based study of the bond and anchorage characteristics was performed by breaking down the EHB into its components. The results of the study provide insight into the load transfer mechanism of the blind bolt in question. The proposed component-based model was validated by a spring model, through which the stiffness of the EHB was compared to that of its components combined. The combined stiffness of the components was found to be roughly equivalent to that of the EHB as a whole, validating the use of this component-based approach.

  20. A Component-Based Study of the Effect of Diameter on Bond and Anchorage Characteristics of Blind-Bolted Connections.

    Science.gov (United States)

    Amin, Muhammad Nasir; Zaheer, Salman; Alazba, Abdulrahman Ali; Saleem, Muhammad Umair; Niazi, Muhammad Umar Khan; Khurram, Nauman; Amin, Muhammad Tahir

    2016-01-01

    Structural hollow sections are gaining worldwide importance due to their structural and architectural advantages over open steel sections. The main obstacle to their use is their connection with other structural members. The difficulty of tightening a bolt from one side gave birth to the concept of blind bolts. Blind bolts, being the practical solution to the connection hindrance for the use of hollow and concrete-filled hollow sections, play a vital role. Flowdrill, the Huck High Strength Blind Bolt and the Lindapter Hollobolt are the well-known commercially available blind bolts. Although the development of blind bolts has largely resolved this issue, the use of structural hollow sections remains limited to shear resistance. Therefore, a new modified version of the blind bolt, known as the "Extended Hollo-Bolt" (EHB) due to its enhanced capacity for bonding with concrete, can overcome the issue of the low moment resistance capacity associated with blind-bolted connections. The load transfer mechanism of this recently developed blind bolt remains unclear, however. This study uses a parametric approach to characterising the EHB, using diameter as the variable parameter. Stiffness and load-carrying capacity were evaluated at two different bolt sizes. To investigate the load transfer mechanism, a component-based study of the bond and anchorage characteristics was performed by breaking down the EHB into its components. The results of the study provide insight into the load transfer mechanism of the blind bolt in question. The proposed component-based model was validated by a spring model, through which the stiffness of the EHB was compared to that of its components combined. The combined stiffness of the components was found to be roughly equivalent to that of the EHB as a whole, validating the use of this component-based approach.

  1. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up of components known to be similar, data collected on one is also relevant for the management of the others. This is typically the case for wind farms, which are made up of similar turbines. Optimal management of wind farms is an important task due to the high cost of turbine operation and maintenance: in this context, we recently proposed a method for planning and learning at system-level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables, and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs, and assumes the corresponding parameters as dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, processing observations at system-level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.

  2. A multi-criteria decision analysis approach for importance identification and ranking of network components

    International Nuclear Information System (INIS)

    Almoghathawi, Yasser; Barker, Kash; Rocco, Claudio M.; Nicholson, Charles D.

    2017-01-01

    Analyzing network vulnerability is a key element of network planning in order to be prepared for any disruptive event that might impact the performance of the network. Hence, many importance measures have been proposed to identify the important components in a network with respect to vulnerability and to rank them according to an individual importance measure. In this paper, we propose a new approach to identify the most important network components based on multiple importance measures using a multi-criteria decision-making (MCDM) method, namely the technique for order performance by similarity to ideal solution (TOPSIS), which is able to take into account the preferences of decision-makers. We consider multiple edge-specific flow-based importance measures as the criteria, with the network edges as the alternatives. Accordingly, TOPSIS is used to rank the edges of the network by importance while considering multiple different importance measures. The proposed approach is illustrated on networks of different densities, along with the effects of weights. - Highlights: • We integrate several perspectives on network vulnerability to generate a component importance ranking. • We apply these measures to determine the importance of edges after disruptions. • Networks of varying size and density are explored.
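
    To make the ranking mechanics concrete, the sketch below applies standard TOPSIS to a small, made-up decision matrix in which rows are network edges and columns are importance measures (all treated as benefit criteria); the matrix and the preference weights are illustrative only and are not taken from the paper:

        import numpy as np

        # Hypothetical decision matrix: rows = network edges, columns = importance measures.
        X = np.array([[0.8, 0.4, 0.7],
                      [0.3, 0.9, 0.5],
                      [0.6, 0.6, 0.9],
                      [0.2, 0.3, 0.4]])
        w = np.array([0.5, 0.3, 0.2])                   # decision-maker preference weights

        V = w * X / np.linalg.norm(X, axis=0)           # weighted, vector-normalised matrix
        ideal, anti_ideal = V.max(axis=0), V.min(axis=0)

        d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to the ideal solution
        d_neg = np.linalg.norm(V - anti_ideal, axis=1)  # distance to the anti-ideal solution
        closeness = d_neg / (d_pos + d_neg)             # relative closeness coefficient

        ranking = np.argsort(-closeness)                # edges ranked by importance
        print("closeness:", np.round(closeness, 3))
        print("edge ranking (best first):", ranking)

    For cost-type measures the ideal would instead take the column minimum, and in practice the criteria weights would come from the decision-makers rather than being fixed in code.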

  3. Investigation of eddy currents in the components of the dynamic ergodic divertor of TEXTOR using analytical and numerical approaches

    International Nuclear Information System (INIS)

    Giesen, B.; Neubauer, O.; Bondarchuk, E.; Doinikov, N.; Kitaev, B.; Obidenko, T.; Panin, A.

    2003-01-01

    Analytical and numerical approaches for the calculation of eddy currents in mechanical structures of the TEXTOR tokamak, in view of operating the dynamic ergodic divertor (DED) coil system fed with alternating current of up to 15 kA at frequencies up to 10 kHz, are described. The design of the in-vessel components located close to the DED coils requires detailed investigation of eddy current effects to avoid unacceptable heating and forces. Different approaches, depending on the skin-layer depth compared with the body dimensions, are analyzed. The applied algorithms are based on analytical and simplified numerical methods. The precision and application range of these algorithms have been checked against a numerical code. The simplified technique is quite effective for first-step engineering estimates and gives a good understanding of the problem. In a certain parameter range it even yields precise values and can be used for design optimization of the structures without a large effort in numerical modeling. After modification of the components' shape, prototypes were manufactured and successfully tested in a full-scale model under the real DED field. The design recommendations resulting from the eddy current studies contributed significantly to the optimized layout of the DED in-vessel components.
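
    The distinction between thin-skin and thick-skin regimes mentioned above is governed by the standard skin depth, which for a conductor of permeability \mu and conductivity \sigma at angular frequency \omega = 2\pi f is

        \delta = \sqrt{\frac{2}{\mu \sigma \omega}} = \frac{1}{\sqrt{\pi \mu \sigma f}}

    For illustration (the material values here are assumptions, not taken from the paper), copper with \sigma \approx 5.8\times 10^{7} S/m and \mu \approx \mu_0 at the upper DED frequency of 10 kHz gives \delta \approx 0.66 mm, so a structural part several millimetres thick is already several skin depths deep and thin-shell approximations no longer apply.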

  4. Physics based Degradation Modeling and Prognostics of Electrolytic Capacitors under Electrical Overstress Conditions

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper proposes a physics based degradation modeling and prognostics approach for electrolytic capacitors. Electrolytic capacitors are critical components in...

  5. Top-Down and Bottom-Up Approach for Model-Based Testing of Product Lines

    Directory of Open Access Journals (Sweden)

    Stephan Weißleder

    2013-03-01

    Systems tend to become more and more complex. This has a direct impact on system engineering processes. Two of the most important phases in these processes are requirements engineering and quality assurance. Two significant complexity drivers located in these phases are the growing number of product variants that have to be integrated into requirements engineering and the ever-growing effort for manual test design. There are modeling techniques to deal with both complexity drivers, e.g., feature modeling and model-based test design. Their combination, however, has seldom been the focus of investigation. In this paper, we present two approaches that combine feature modeling and model-based testing as an efficient quality assurance technique for product lines. We present the corresponding difficulties and approaches to overcome them. All explanations are supported by an example of an online shop product line.

  6. Investment in the future electricity system - An agent-based modelling approach

    NARCIS (Netherlands)

    Kraan, O.; Kramer, G. J.; Nikolic, I.

    2018-01-01

    Now that renewable technologies are both technically and commercially mature, the imperfect rational behaviour of investors becomes a critical factor in the future success of the energy transition. Here, we take an agent-based approach to model investor decision making in the electricity sector

  7. A CORBA BASED ARCHITECTURE FOR ACCESSING REUSABLE SOFTWARE COMPONENTS ON THE WEB.

    Directory of Open Access Journals (Sweden)

    R. Cenk ERDUR

    2003-01-01

    In the very near future, as a result of the continuous growth of the Internet and advances in networking technologies, the Internet will become the common software repository for people and organizations who employ a component-based reuse approach in their software development life cycles. In order to use reusable components such as source code, analyses, designs and design patterns during new software development processes, environments that support the identification of the components over the Internet are needed. Basic elements of such an environment are the coordinator programs which deliver user requests to appropriate component libraries, user interfaces for querying, and programs that wrap the component libraries. First, a CORBA-based architecture is proposed for such an environment. Then, an alternative architecture based on the Java 2 platform technologies is given for the same environment. Finally, the two architectures are compared.

  8. Model-based testing in powertrain development; Modellgestuetzte Erprobungsmethodik in der Antriebsstrangentwicklung

    Energy Technology Data Exchange (ETDEWEB)

    Albers, A.; Schyr, C. [Inst. fuer Produktentwicklung der Univ. Karlsruhe (T.H.) (Germany)

    2005-07-01

    The paper describes a new methodical approach to model-based testing of powertrain components in vehicle development. The presented methodology is based on a holistic model environment which covers the major dynamic effects of the vehicle in an early development phase and refines the models during the testing phase on the different test bed configurations. This allows realistic loading of the mechanical components and their electronic control units, in parallel with a simulation-based analysis of design and application variants in the mechanics and software and their influence on the complete vehicle. The first application example presents the development of a pre-adjustable transmission for passenger cars. The second example describes the testing concept for tracked vehicles with a hydrostatic drivetrain. (orig.)

  9. Behavior-based network management: a unique model-based approach to implementing cyber superiority

    Science.gov (United States)

    Seng, Jocelyn M.

    2016-05-01

    Behavior-Based Network Management (BBNM) is a technological and strategic approach to mastering the identification and assessment of network behavior, whether human-driven or machine-generated. Recognizing that all five U.S. Air Force (USAF) mission areas rely on the cyber domain to support, enhance and execute their tasks, BBNM is designed to elevate awareness and improve the ability to understand the degree of reliance placed upon a digital capability and the associated operational risk. Thus, the objective of BBNM is to provide a holistic view of the digital battle space to better assess the effects of security, monitoring, provisioning, utilization management, and allocation to support mission sustainment and change control. Leveraging advances in conceptual modeling made possible by a novel advancement in software design and implementation known as Vector Relational Data Modeling (VRDM™), the BBNM approach entails creating a network simulation in which meaning can be inferred and used to manage network behavior according to policy, such as quickly detecting and countering malicious behavior. Initial research configurations have yielded executable BBNM models as combinations of conceptualized behavior within a network management simulation that includes only concepts of threats and definitions of "good" behavior. A proof-of-concept assessment called "Lab Rat" was designed to demonstrate the simplicity of network modeling and the ability to perform adaptation. The model was tested on real-world threat data and demonstrated adaptive and inferential learning behavior. Preliminary results indicate this is a viable approach towards achieving cyber superiority in today's volatile, uncertain, complex and ambiguous (VUCA) environment.

  10. Learning Algorithms for Audio and Video Processing: Independent Component Analysis and Support Vector Machine Based Approaches

    National Research Council Canada - National Science Library

    Qi, Yuan

    2000-01-01

    In this thesis, we propose two new machine learning schemes, a subband-based Independent Component Analysis scheme and a hybrid Independent Component Analysis/Support Vector Machine scheme, and apply...

  11. Three-dimensional parallel edge-based finite element modeling of electromagnetic data with field redatuming

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Čuma, Martin; Zhdanov, Michael

    2015-01-01

    This paper presents a parallelized version of the edge-based finite element method with a novel post-processing approach for numerical modeling of an electromagnetic field in complex media. The method uses an unstructured tetrahedral mesh which can reduce the number of degrees of freedom significantly. The linear system of finite element equations is solved using parallel direct solvers which are robust for ill-conditioned systems and efficient for multiple-source electromagnetic (EM) modeling. We also introduce a novel approach to compute the scalar components of the electric field from the tangential components along each edge based on field redatuming. The method can produce a more accurate result compared to the conventional approach. We have applied the developed algorithm to compute the EM response for a typical 3D anisotropic geoelectrical model of an off-shore HC reservoir with complex...

  12. Model-Based Sensor Placement for Component Condition Monitoring and Fault Diagnosis in Fossil Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Mobed, Parham [Texas Tech Univ., Lubbock, TX (United States); Pednekar, Pratik [West Virginia Univ., Morgantown, WV (United States); Bhattacharyya, Debangsu [West Virginia Univ., Morgantown, WV (United States); Turton, Richard [West Virginia Univ., Morgantown, WV (United States); Rengaswamy, Raghunathan [Texas Tech Univ., Lubbock, TX (United States)

    2016-01-29

    Design and operation of energy-producing, near “zero-emission” coal plants has become a national imperative. This report on model-based sensor placement describes a transformative two-tier approach to identify the optimum placement, number, and type of sensors for condition monitoring and fault diagnosis in fossil energy system operations. The algorithms are tested on a high fidelity model of the integrated gasification combined cycle (IGCC) plant. For a condition monitoring network, whether equipment should be considered at a unit level or a systems level depends upon the criticality of the process equipment, its likelihood of failure, and the level of resolution desired for any specific failure. Because of the presence of a high fidelity model at the unit level, a sensor network can be designed to monitor the spatial profile of the states and estimate fault severity levels. In an IGCC plant, besides the gasifier, the sour water gas shift (WGS) reactor plays an important role. In view of this, condition monitoring of the sour WGS reactor is considered at the unit level, while a detailed plant-wide model of the gasification island, including the sour WGS reactor and the Selexol process, is considered for fault diagnosis at the system level. Finally, the developed algorithms unify the two levels and identify an optimal sensor network that maximizes the effectiveness of the overall system-level fault diagnosis and component-level condition monitoring. This work could have a major impact on the design and operation of future fossil energy plants, particularly at the grassroots level where the sensor network is yet to be identified. In addition, the same algorithms developed in this report can be further enhanced to be used in retrofits, where the objectives could be upgrades (addition of more sensors) and relocation of existing sensors.

  13. Simulation approaches to probabilistic structural design at the component level

    International Nuclear Information System (INIS)

    Stancampiano, P.A.

    1978-01-01

    In this paper, structural failure of large nuclear components is viewed as a random process with a low probability of occurrence. Therefore, a statistical interpretation of probability does not apply and statistical inferences cannot be made due to the sparsity of actual structural failure data. In such cases, analytical estimates of the failure probabilities may be obtained from stress-strength interference theory. Since the majority of real design applications are complex, numerical methods are required to obtain solutions. Monte Carlo simulation appears to be the best general numerical approach. However, meaningful applications of simulation methods suggest research activities in three categories: methods development, failure mode model development, and statistical data model development. (Auth.)
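
    A minimal Monte Carlo stress-strength interference sketch in Python illustrates the kind of analytical failure-probability estimate referred to above; the lognormal distributions and their parameters are purely illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 1_000_000

        # Hypothetical lognormal stress and strength distributions (units: MPa);
        # the parameter values are illustrative only.
        stress   = rng.lognormal(mean=np.log(200), sigma=0.10, size=N)
        strength = rng.lognormal(mean=np.log(320), sigma=0.08, size=N)

        failures = np.count_nonzero(stress >= strength)   # stress-strength interference events
        p_f = failures / N
        std_err = np.sqrt(p_f * (1 - p_f) / N)
        print(f"estimated failure probability: {p_f:.2e} +/- {std_err:.1e}")

    Because the failure probabilities of interest are very small, a crude estimator like this needs an enormous sample size for an acceptable relative error, which is consistent with the paper's call for further methods development (e.g. variance-reduction techniques) rather than direct sampling alone.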

  14. Software Component Clustering and Retrieval: An Entropy-based Fuzzy k-Modes Methodology

    OpenAIRE

    Stylianou, Constantinos; Andreou, Andreas S.

    2008-01-01

    The number of software houses attempting to adopt a component-based development approach is rapidly increasing. However, many organisations still find it difficult to complete the shift as it requires them to alter their entire software development process and philosophy. Furthermore, to promote component-based software engineering, organisations must be ready to promote reusability, and this can only be attained if the proper framework exists from which a developer can access, search and retri...

  15. Interaction between clay-based shaft seal components and crystalline host rock

    International Nuclear Information System (INIS)

    Priyanto, D.; Dixon, D.; Man, A.

    2010-01-01

    Document available in extended abstract form only. The Government of Canada has accepted the Nuclear Waste Management Organization's (NWMO) recommendation of Adaptive Phased Management (APM) as the long-term management approach for Canada's used nuclear fuel. APM ultimately involves the isolation and containment of used nuclear fuel deep in a Deep Geological Repository (DGR). On completion of waste emplacement operations and during repository closure, shaft seals, comprising clay-based shaft seal components, will be installed at strategic locations, such as where significant fracture zones (FZs) are located. The primary function of a shaft seal is to limit and prevent short-circuiting of the groundwater flow regime via the shaft. Currently, at Atomic Energy of Canada Limited's Underground Research Laboratory (URL), a full-scale shaft seal is being constructed at the intersection of a low-dipping thrust fault called FZ 2 as part of the overall URL decommissioning activities. Both crystalline rock and sedimentary rock are considered potentially suitable host rock formations for a DGR. This paper presents the results of numerical simulation of a shaft seal installed in moderately to sparsely fractured crystalline rock (MFR). The shape and thickness of the shaft seal modelled for a DGR in this exercise are similar to those of the shaft seal at the URL, but in the modelling exercise it is given a larger diameter (7.30 m), equal to the assumed diameter of a production shaft of a repository. The seal consists of a blended bentonite-sand (BS) component that is constrained between two massive concrete seals. Dense backfill (DBF) materials are installed above and below the concrete seals (CS). The concrete seals are keyed into the access shaft to better anchor the concrete units in place and to restrain the swelling of the bentonite-sand component of the seal as it hydrates. The reference geosphere in the proposed work is MFR similar to the rock conditions

  16. A Robot Trajectory Optimization Approach for Thermal Barrier Coatings Used for Free-Form Components

    Science.gov (United States)

    Cai, Zhenhua; Qi, Beichun; Tao, Chongyuan; Luo, Jie; Chen, Yuepeng; Xie, Changjun

    2017-10-01

    This paper is concerned with a robot trajectory optimization approach for thermal barrier coatings. As the requirements for high reproducibility of complex workpieces increase, an optimal thermal spraying trajectory should not only guarantee accurate control of the spray parameters defined by users (e.g., scanning speed, spray distance, scanning step, etc.) to achieve coating thickness homogeneity, but also help to homogenize the heat transfer distribution on the coating surface. A mesh-based trajectory generation approach is introduced in this work to generate path curves on a free-form component. Then, two types of meander trajectories are generated by performing a different connection method. Additionally, this paper presents a research approach for introducing heat transfer analysis into the trajectory planning process. Combining heat transfer analysis with trajectory planning overcomes the defects of traditional trajectory planning methods (e.g., local over-heating), which helps form a uniform temperature field by optimizing the time sequence of the path curves. The influence of two different robot trajectories on the process of heat transfer is estimated by coupled FEM models, which demonstrates the effectiveness of the presented optimization approach.

  17. Validating Timed Component Contracts

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Liu, Shaoying; Olsen, Petur

    2015-01-01

    This paper presents a technique for testing software components with contracts that specify functional behavior, synchronization, as well as timing behavior. The approach combines elements from unit testing with model-based testing techniques for timed automata. The technique is implemented in an online testing tool, and we demonstrate its use on a concrete use case.

  18. Detailed finite element method modeling of evaporating multi-component droplets

    Energy Technology Data Exchange (ETDEWEB)

    Diddens, Christian, E-mail: C.Diddens@tue.nl

    2017-07-01

    The evaporation of sessile multi-component droplets is modeled with an axisymmetric finite element method. The model comprises the coupled processes of mixture evaporation, multi-component flow with composition-dependent fluid properties and thermal effects. Based on representative examples of water–glycerol and water–ethanol droplets, regular and chaotic examples of solutal Marangoni flows are discussed. Furthermore, the relevance of the substrate thickness for the evaporative cooling of volatile binary mixture droplets is pointed out. It is shown how the evaporation of the more volatile component can drastically decrease the interface temperature, so that ambient vapor of the less volatile component condenses on the droplet. Finally, results of this model are compared with corresponding results of a lubrication theory model, showing that the application of lubrication theory can cause considerable errors even for moderate contact angles of 40°.

  19. On Two Mixture-Based Clustering Approaches Used in Modeling an Insurance Portfolio

    Directory of Open Access Journals (Sweden)

    Tatjana Miljkovic

    2018-05-01

    We review two complementary mixture-based clustering approaches for modeling unobserved heterogeneity in an insurance portfolio: the generalized linear mixed cluster-weighted model (CWM) and mixture-based clustering for an ordered stereotype model (OSM). The latter is for modeling ordinal variables, and the former is for modeling losses as a function of mixed-type covariates. The article extends the idea of mixture modeling to a multivariate classification for the purpose of testing unobserved heterogeneity in an insurance portfolio. The application of both methods is illustrated on a well-known French automobile portfolio, in which the model fitting is performed using the expectation-maximization (EM) algorithm. Our findings show that these mixture-based clustering methods can be used to further test unobserved heterogeneity in an insurance portfolio and as such may be considered in insurance pricing, underwriting, and risk management.
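
    The CWM and OSM models themselves are specialized, but the EM-based mixture-clustering idea they share can be sketched with a generic Gaussian mixture on simulated claim severities; the data, the lognormal parameters and the restriction to a univariate mixture are illustrative assumptions rather than the paper's models:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        # Hypothetical claim severities drawn from two latent risk groups (arbitrary units).
        claims = np.concatenate([rng.lognormal(7.0, 0.4, 800),
                                 rng.lognormal(8.5, 0.3, 200)])
        X = np.log(claims).reshape(-1, 1)

        # Choose the number of mixture components by BIC, then fit with the EM algorithm.
        models = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in (1, 2, 3)]
        best = min(models, key=lambda m: m.bic(X))

        print("components chosen by BIC:", best.n_components)
        print("group means (log scale):", np.round(best.means_.ravel(), 2))
        print("group weights:", np.round(best.weights_, 2))

    Selecting the number of components by an information criterion and inspecting the fitted weights and means is the basic way such an analysis flags unobserved heterogeneity; the cluster-weighted model additionally ties each component to its own regression on covariates.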

  20. Multi-component fiber track modelling of diffusion-weighted magnetic resonance imaging data

    Directory of Open Access Journals (Sweden)

    Yasser M. Kadah

    2010-01-01

    In conventional diffusion tensor imaging (DTI) based on magnetic resonance data, each voxel is assumed to contain a single component having diffusion properties that can be fully represented by a single tensor. Even though this assumption can be valid in some cases, the general case involves the mixing of components, resulting in significant deviation from the single-tensor model. Hence, a strategy that allows the decomposition of data based on a mixture model has the potential of enhancing the diagnostic value of DTI. This project aims to work towards the development and experimental verification of a robust method for solving the problem of multi-component modelling of diffusion tensor imaging data. The new method demonstrates significant error reduction from the single-component model while maintaining practicality for clinical applications, obtaining more accurate fiber tracking results.
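
    A common way to write the multi-component extension referred to above (the exact formulation used in the paper may differ) is to replace the single-tensor signal equation with a finite mixture of tensor compartments:

        S(b, \mathbf{g}) = S_0 \sum_{i=1}^{m} f_i \, \exp\!\left(-b \, \mathbf{g}^{\mathsf{T}} \mathbf{D}_i \, \mathbf{g}\right), \qquad \sum_{i=1}^{m} f_i = 1, \; f_i \ge 0,

    where b and \mathbf{g} are the diffusion weighting and gradient direction, \mathbf{D}_i is the diffusion tensor of the i-th compartment and f_i its volume fraction; the conventional single-tensor model is the special case m = 1.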

  1. Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.

    Science.gov (United States)

    Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir

    2014-01-01

    Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting the noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, in the east of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. The multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. Prediction errors of all prediction models based on fuzzy approaches were within the acceptable level (lower than one dB). However, the neuro-fuzzy model (RMSE = 0.53 dB and R² = 0.88) could slightly improve the accuracy of noise prediction compared with the generated fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than did the regression technique. The developed models based on fuzzy approaches are useful prediction tools that give professionals the opportunity to make an optimal decision about the effectiveness of acoustic treatment scenarios in embroidery workrooms.

  2. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    Science.gov (United States)

    Kou, Jisheng; Sun, Shuyu

    2016-08-01

    In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. the Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. As far as we know, this is the first effort to use diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface has only a nanoscale thickness. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure which is consistent with the macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and that the formulation is also consistent with the concept of the Tolman length, which is a correction of the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from the conventional sharp-interface two-phase flow model in that we use the capillary pressure directly instead of a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests
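
    For reference, the two classical relations mentioned above take the following standard forms for a spherical interface of radius R (the paper's own capillarity formulation generalizes these):

        \Delta p = \frac{2\sigma}{R} \quad \text{(Young-Laplace)}, \qquad \sigma(R) = \frac{\sigma_\infty}{1 + 2\delta/R} \approx \sigma_\infty\left(1 - \frac{2\delta}{R}\right) \quad \text{(Tolman correction)},

    where \sigma_\infty is the planar surface tension and \delta the Tolman length; substituting the corrected \sigma(R) into the Young-Laplace relation shows why curvature dependence becomes significant only when R approaches the nanometre-scale interface thickness.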

  3. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    KAUST Repository

    Kou, Jisheng

    2016-05-10

    In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. the Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. As far as we know, this is the first effort to use diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface has only a nanoscale thickness. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure which is consistent with the macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and that the formulation is also consistent with the concept of the Tolman length, which is a correction of the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from the conventional sharp-interface two-phase flow model in that we use the capillary pressure directly instead of a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests

  4. Modeling Psychological Contract Violation using Dual Regime Models: An Event-based Approach.

    Science.gov (United States)

    Hofmans, Joeri

    2017-01-01

    A good understanding of the dynamics of psychological contract violation requires theories, research methods and statistical models that explicitly recognize that violation feelings follow from an event that violates one's acceptance limits, after which interpretative processes are set into motion, determining the intensity of these violation feelings. Whereas theories (in the form of the dynamic model of the psychological contract) and research methods (in the form of daily diary research and experience sampling research) are available by now, the statistical tools to model such a two-stage process are still lacking. The aim of the present paper is to fill this gap in the literature by introducing two statistical models, the Zero-Inflated model and the Hurdle model, that closely mimic the theoretical process underlying the elicitation of violation feelings via two model components: a binary distribution that models whether violation has occurred or not, and a count distribution that models how severe the negative impact is. Moreover, covariates can be included for both model components separately, which yields insight into their unique and shared antecedents. By doing this, the present paper offers a methodological-substantive synergy, showing how sophisticated methodology can be used to examine an important substantive issue.
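
    The two-part structure described above can be made concrete with a minimal zero-inflated Poisson likelihood in Python; the daily violation counts, the absence of covariates and the Poisson choice for the count part are illustrative assumptions, not the paper's exact specification:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit, gammaln

        # Hypothetical daily violation-intensity counts: many zeros, a few severe events.
        y = np.array([0, 0, 0, 2, 0, 0, 5, 0, 1, 0, 0, 0, 3, 0, 0])

        def zip_negloglik(params):
            """Negative log-likelihood of a zero-inflated Poisson model."""
            pi = expit(params[0])        # probability of the structural-zero (no violation) regime
            lam = np.exp(params[1])      # severity rate of the count component
            ll = np.where(y == 0,
                          np.log(pi + (1 - pi) * np.exp(-lam)),                       # binary part
                          np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1))    # count part
            return -ll.sum()

        res = minimize(zip_negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
        pi_hat, lam_hat = expit(res.x[0]), np.exp(res.x[1])
        print(f"P(structural zero) = {pi_hat:.2f}, severity rate = {lam_hat:.2f}")

    In the full models each of the two parameters would be given its own linear predictor, which is what lets the binary (whether) and count (how severe) components have distinct covariates; a hurdle model differs only in that its count part is truncated at zero.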

  5. Computer-aided process planning in prismatic shape die components based on Standard for the Exchange of Product model data

    Directory of Open Access Journals (Sweden)

    Awais Ahmad Khan

    2015-11-01

    Insufficient technology has, until recently, made good integration between the die components in design, process planning, and manufacturing impossible. Nowadays, advanced technologies based on the Standard for the Exchange of Product model data are making it possible. This article discusses the three main steps for achieving complete process planning for prismatic parts of die components. These three steps are data extraction, feature recognition, and process planning. The proposed computer-aided process planning system works as part of an integrated system to cover the process planning of any prismatic-part die component. The system is built using Visual Basic with the EWDraw system for visualizing the Standard for the Exchange of Product model data file. The system works successfully and can cover any type of sheet metal die component. The case study discussed in this article is taken from a large design of a progressive die.

  6. Software Components and Formal Methods from a Computational Viewpoint

    OpenAIRE

    Lambertz, Christian

    2012-01-01

    Software components and the methodology of component-based development offer a promising approach to master the design complexity of huge software products because they separate the concerns of software architecture from individual component behavior and allow for reusability of components. In combination with formal methods, the specification of a formal component model of the later software product or system allows for establishing and verifying important system properties in an automatic a...

  7. A Study on Modeling Approaches in Discrete Event Simulation Using Design Patterns

    National Research Council Canada - National Science Library

    Kim, Leng Koh

    2007-01-01

    .... This modeling paradigm encompasses several modeling approaches (the active role of events, entities as independent components, and the chaining of components to enable interactivity) that are excellent ways of building a DES system...

  8. Model-free prediction and regression a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  9. State space model extraction of thermohydraulic systems – Part I: A linear graph approach

    International Nuclear Information System (INIS)

    Uren, K.R.; Schoor, G. van

    2013-01-01

    Thermohydraulic simulation codes increasingly make use of graphical design interfaces. The user can quickly and easily design a thermohydraulic system by placing symbols on the screen resembling system components. These components can then be connected to form a system representation. Such system models may then be used to obtain detailed simulations of the physical system. Usually this kind of simulation model is too complex and not ideal for control system design. Therefore, a need exists for automated techniques to extract lumped-parameter models useful for control system design. The goal of this first paper in a two-part series is to propose a method that utilises a graphical representation of a thermohydraulic system, and a lumped-parameter modelling approach, to extract state space models. In this methodology each physical domain of the thermohydraulic system is represented by a linear graph. These linear graphs capture the interaction between all components within and across energy domains – hydraulic, thermal and mechanical. These linear graphs are analysed using a graph-theoretic approach to derive reduced-order state space models. These models capture the dominant dynamics of the thermohydraulic system and are ideal for control system design purposes. The proposed state space model extraction method is demonstrated by considering a U-tube system. A non-linear state space model is extracted representing both the hydraulic and thermal domain dynamics of the system. The simulated state space model is compared with a Flownex® model of the U-tube. Flownex® is a validated systems thermal-fluid simulation software package. - Highlights: • A state space model extraction methodology based on graph-theoretic concepts. • An energy-based approach to consider multi-domain systems in a common framework. • Allows extraction of transparent (white-box) state space models automatically. • Reduced-order models containing only independent state

  10. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Directory of Open Access Journals (Sweden)

    Lucas Theis

    Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM, a linear and a quadratic model, by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.
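
    The generative idea, in which the spike-triggered and non-spike-triggered stimulus ensembles are modeled separately and the spike probability is then obtained by Bayes' rule, can be sketched as follows; the synthetic stimuli, their dimensionality and the two-component mixtures are illustrative assumptions rather than the authors' actual fitting procedure:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        d = 5                                      # stimulus dimensionality (illustrative)

        # Hypothetical stimulus ensembles: spike-triggered vs. non-spike-triggered.
        spike_stim = np.vstack([rng.normal( 1.0, 1.0, (300, d)),
                                rng.normal(-2.0, 0.5, (200, d))])
        nospike_stim = rng.normal(0.0, 1.0, (2000, d))

        gmm_spike = GaussianMixture(n_components=2, random_state=0).fit(spike_stim)
        gmm_nospike = GaussianMixture(n_components=2, random_state=0).fit(nospike_stim)
        p_spike = len(spike_stim) / (len(spike_stim) + len(nospike_stim))   # prior spike probability

        def spike_probability(x):
            """P(spike | stimulus x) via Bayes' rule on the two mixture densities."""
            log_ps = gmm_spike.score_samples(x) + np.log(p_spike)
            log_pn = gmm_nospike.score_samples(x) + np.log(1 - p_spike)
            return 1.0 / (1.0 + np.exp(log_pn - log_ps))

        test = np.vstack([np.full(d, 1.0), np.zeros(d)])
        print(np.round(spike_probability(test), 3))

    Because both class-conditional densities are mixtures, the implied nonlinearity in P(spike | stimulus) is richer than a GLM's fixed link function, which is essentially the added flexibility the abstract describes.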

  11. Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model

    Science.gov (United States)

    Pathak, Jaideep; Wikner, Alexander; Fussell, Rebeckah; Chandra, Sarthak; Hunt, Brian R.; Girvan, Michelle; Ott, Edward

    2018-04-01

    A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of machine learning known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results in that our hybrid technique is able to accurately predict for a much longer period of time than either its machine-learning component or its model-based component alone.
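
    A toy version of such a hybrid can be written in a few dozen lines: a small reservoir (echo state network) receives both the measured state and the prediction of an imperfect knowledge-based model, and a ridge-regression readout is trained on the next value. The logistic map as the "true" system, the deliberately wrong model parameter, the reservoir size and the one-step (rather than free-running) evaluation are all simplifying assumptions for illustration:

        import numpy as np

        rng = np.random.default_rng(0)

        # "True" chaotic system (logistic map) and an imperfect knowledge-based model of it.
        def true_step(x):      return 3.90 * x * (1.0 - x)
        def knowledge_step(x): return 3.75 * x * (1.0 - x)   # deliberately wrong parameter

        # Trajectory of the true system (the "measurements").
        T = 2000
        x = np.empty(T + 1); x[0] = 0.4
        for t in range(T):
            x[t + 1] = true_step(x[t])

        # Small echo state network whose input is [measurement, knowledge-model forecast].
        N = 200
        W_in = rng.uniform(-0.5, 0.5, (N, 2))
        W = rng.uniform(-1.0, 1.0, (N, N))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # set spectral radius to 0.9

        r = np.zeros(N); states = []
        for xt in x[:-1]:
            u = np.array([xt, knowledge_step(xt)])
            r = np.tanh(W @ r + W_in @ u)
            states.append(r.copy())
        R = np.array(states)

        # Hybrid features: reservoir state plus the knowledge-model prediction itself.
        features = np.hstack([R, knowledge_step(x[:-1])[:, None]])
        targets = x[1:]

        # Ridge-regression readout trained to predict the next value.
        lam = 1e-6
        A = features.T @ features + lam * np.eye(features.shape[1])
        w_out = np.linalg.solve(A, features.T @ targets)

        pred = features @ w_out
        print("hybrid one-step RMSE:   ", np.sqrt(np.mean((pred - targets) ** 2)))
        print("knowledge-only RMSE:    ", np.sqrt(np.mean((knowledge_step(x[:-1]) - targets) ** 2)))

    In the paper's scheme the trained readout is fed back so the forecaster runs autonomously, and performance is judged over the forecast horizon rather than by in-sample one-step error; the sketch only shows how the data-driven and knowledge-based ingredients are combined.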

  12. Approach to Organizational Structure Modelling in Construction Companies

    Directory of Open Access Journals (Sweden)

    Ilin Igor V.

    2016-01-01

    An effective management system is one of the key factors of business success nowadays. Construction companies usually have a portfolio of independent projects running at the same time. Thus, it is reasonable to take into account the project orientation of this kind of business when designing a construction company's management system, whose main components are the business process system and the organizational structure. The paper describes a management structure design approach based on the project-oriented nature of construction projects, and proposes a model of the organizational structure for a construction company. Application of the proposed approach will enable responsibilities to be assigned effectively within the organizational structure of construction projects, and thus will shorten the time for project allocation and provide smoother running. A practical case of using the approach is also provided in the paper.

  13. Intuitionistic fuzzy-based model for failure detection.

    Science.gov (United States)

    Aikhuele, Daniel O; Turan, Faiz B M

    2016-01-01

    In identifying the product component(s) to be improved, the customer/user requirements which are mainly considered, and which are obtained through customer surveys using the quality function deployment (QFD) tool, often fail to guarantee or cover aspects of the product reliability. Even when they do, there are always many misunderstandings. To improve the product reliability and quality during the product redesign phase, and to create novel product(s) for the customers, the failure information of the existing product and its component(s) should ordinarily be analyzed and converted to appropriate design knowledge for the design engineer. In this paper, a new intuitionistic fuzzy multi-criteria decision-making method is proposed. The new approach, which is based on an intuitionistic fuzzy TOPSIS model, uses an exponential-related function for the computation of the separation measures from the intuitionistic fuzzy positive ideal solution (IFPIS) and intuitionistic fuzzy negative ideal solution (IFNIS) of alternatives. The proposed method has been applied to two practical case studies, and the results from the different cases have been compared with some similar computational approaches in the literature.

  14. Analyzing energy consumption of wireless networks. A model-based approach

    Energy Technology Data Exchange (ETDEWEB)

    Yue, Haidi

    2013-03-04

    During the last decades, wireless networking has continuously been a hot topic both in academia and in industry. Many different wireless networks have been introduced, such as wireless local area networks, wireless personal networks, wireless ad hoc networks, and wireless sensor networks. If these networks are to have long-term usability, the power consumed by the wireless devices in each of these networks needs to be managed efficiently. Hence, a lot of effort has been carried out for the analysis and improvement of energy efficiency, either for a specific network layer (protocol), or for new cross-layer designs. In this thesis, we apply a model-based approach to the analysis of the energy consumption of different wireless protocols. The protocols under consideration are: one leader election protocol, one routing protocol, and two medium access control protocols. By a model-based approach we mean that all these four protocols are formalized as formal models, more precisely, as discrete-time Markov chains (DTMCs), Markov decision processes (MDPs), or stochastic timed automata (STA). For the first two model types, DTMCs and MDPs, we model them in PRISM, a prominent model checker for probabilistic model checking, and apply model checking techniques to analyze them. Model checking belongs to the family of formal methods. It discovers exhaustively all possible (reachable) states of the models, and checks whether these models meet a given specification. Specifications are system properties that we want to study, usually expressed in some logic, for instance, probabilistic computation tree logic (PCTL). However, while model checking relies on rigorous mathematical foundations and automatically explores the entire state space of a model, its applicability is also limited by the so-called state space explosion problem: even systems of moderate size often yield models with an exponentially larger state space that thwarts their analysis. Hence for the STA models in this thesis, since there
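
    Independently of PRISM's own input language, the core computation behind an unbounded PCTL reachability query on a DTMC can be sketched directly in Python; the four-state protocol below and its transition probabilities are invented for illustration:

        import numpy as np

        # Hypothetical 4-state DTMC of a lossy transmission protocol:
        # 0 = idle, 1 = transmitting, 2 = delivered (target), 3 = lost (absorbing).
        P = np.array([[0.0, 1.0, 0.0, 0.0],
                      [0.1, 0.0, 0.8, 0.1],
                      [0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
        target, avoid = {2}, {3}

        # Probability of eventually reaching the target (the PCTL query P=? [ F "delivered" ]):
        # solve the linear system x_s = sum_t P[s, t] * x_t over the transient states,
        # with x fixed to 1 on target states and 0 on the avoiding absorbing state.
        trans = [s for s in range(len(P)) if s not in target | avoid]
        A = np.eye(len(trans)) - P[np.ix_(trans, trans)]
        b = P[np.ix_(trans, sorted(target))].sum(axis=1)
        x = np.linalg.solve(A, b)
        print({s: round(p, 4) for s, p in zip(trans, x)})   # e.g. state 0 -> 0.8889

    A probabilistic model checker performs this kind of numerical step after building the state space automatically from a high-level model description, together with graph pre-computations that identify states whose reachability probability is exactly 0 or 1.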

  15. Combustion engine diagnosis model-based condition monitoring of gasoline and diesel engines and their components

    CERN Document Server

    Isermann, Rolf

    2017-01-01

    This book first offers a short introduction to advanced supervision, fault detection and diagnosis methods. It then describes model-based methods of fault detection and diagnosis for the main components of gasoline and diesel engines, such as the intake system, fuel supply, fuel injection, combustion process, turbocharger, exhaust system and exhaust gas aftertreatment. Additionally, model-based fault diagnosis of electrical motors, electric, pneumatic and hydraulic actuators and fault-tolerant systems is treated. In general, series-production sensors are used. The book includes abundant experimental results showing the detection and diagnosis quality of implemented faults. Written for automotive engineers in practice, it is also of interest to graduate students of mechanical and electrical engineering and computer science. Contents: Introduction.- I SUPERVISION, FAULT DETECTION AND DIAGNOSIS METHODS.- Supervision, Fault-Detection and Fault-Diagnosis Methods - a short Introduction.- II DIAGNOSIS OF INTERNAL COMBUST...

  16. Optimal inverse magnetorheological damper modeling using shuffled frog-leaping algorithm–based adaptive neuro-fuzzy inference system approach

    Directory of Open Access Journals (Sweden)

    Xiufang Lin

    2016-08-01

    Magnetorheological dampers have become prominent semi-active control devices for vibration mitigation of structures which are subjected to severe loads. However, the damping force cannot be controlled directly due to the inherent nonlinear characteristics of the magnetorheological dampers. Therefore, for fully exploiting the capabilities of the magnetorheological dampers, one of the challenging aspects is to develop an accurate inverse model which can appropriately predict the input voltage to control the damping force. In this article, a hybrid modeling strategy combining the shuffled frog-leaping algorithm and an adaptive-network-based fuzzy inference system is proposed to model the inverse dynamic characteristics of magnetorheological dampers in order to improve the modeling accuracy. The shuffled frog-leaping algorithm is employed to optimize the premise parameters of the adaptive-network-based fuzzy inference system, while the consequent parameters are tuned by a least-squares estimation method; this is referred to here as the shuffled frog-leaping algorithm-based adaptive-network-based fuzzy inference system approach. To evaluate the effectiveness of the proposed approach, the inverse modeling results based on the shuffled frog-leaping algorithm-based adaptive-network-based fuzzy inference system approach are compared with those based on the adaptive-network-based fuzzy inference system and genetic algorithm-based adaptive-network-based fuzzy inference system approaches. An analysis of variance test is carried out to statistically compare the performance of the proposed methods, and the results demonstrate that the shuffled frog-leaping algorithm-based adaptive-network-based fuzzy inference system strategy outperforms the other two methods in terms of modeling (training) accuracy and checking accuracy.

  17. Advances in model-based software for simulating ultrasonic immersion inspections of metal components

    Science.gov (United States)

    Chiou, Chien-Ping; Margetan, Frank J.; Taylor, Jared L.; Engle, Brady J.; Roberts, Ronald A.

    2018-04-01

    Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was initiated in 2015 to repackage existing research-grade software into user-friendly tools for the rapid estimation of signal-to-noise ratio (SNR) for ultrasonic inspections of metals. The software combines: (1) a Python-based graphical user interface for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signals and backscattered grain noise characteristics. The latter makes use of the Thompson-Gray measurement model for the response from an internal defect, and the Thompson-Margetan independent scatterer model for backscattered grain noise. This paper, the third in the series [1-2], provides an overview of the ongoing modeling effort with emphasis on recent developments. These include the ability to: (1) treat microstructures where grain size, shape and tilt relative to the incident sound direction can all vary with depth; and (2) simulate C-scans of defect signals in the presence of backscattered grain noise. The simulation software can now treat both normal and oblique-incidence immersion inspections of curved metal components. Both longitudinal and shear-wave inspections are treated. The model transducer can be planar, spherically focused, or bi-cylindrically focused. A calibration (or reference) signal is required and is used to deduce the measurement system efficiency function. This can be "invented" by the software using center frequency and bandwidth information specified by the user, or, alternatively, a measured calibration signal can be used. Defect types include flat-bottomed-hole reference reflectors, and spherical pores and inclusions. Simulation outputs include estimated defect signal amplitudes, root-mean-square values of grain noise amplitudes, and SNR as functions of the depth of the defect within the metal component. At any particular depth, the user can view

  18. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation. The first is sketch-based modeling, the second is procedural grammar-based modeling, and the third is close-range photogrammetry-based modeling. The literature shows that, to date, there is no complete solution available to create a complete 3D city model using images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling using close-range photogrammetry. The approach is divided into three sections. The first is the data acquisition process, the second is 3D data processing, and the third is the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close-range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding to and merging with other pieces of the larger area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model can be transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost-effective and less laborious. The accuracy of this model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries

  19. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
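
    The two ingredients named above, an emulator trained on an ensemble of model runs and MCMC run against that emulator instead of the expensive model, can be sketched with a one-parameter toy problem; the stand-in "expensive" model, the design, the measurement noise and the uniform prior are all assumptions made for illustration:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        rng = np.random.default_rng(0)

        def expensive_model(theta):
            """Stand-in for the physics model eta(theta); imagine hours per evaluation."""
            return theta ** 2 + 0.5 * np.sin(2 * theta)

        # Ensemble of model runs at a small design of input settings.
        design = np.linspace(0.0, 2.0, 12)[:, None]
        runs = expensive_model(design.ravel())

        # Gaussian-process emulator (the statistical response surface) fitted to the ensemble.
        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
        gp.fit(design, runs)

        # One noisy physical measurement, generated here at theta_true = 1.2.
        sigma = 0.05
        y_obs = expensive_model(1.2) + rng.normal(0.0, sigma)

        def log_post(theta):
            if not 0.0 <= theta <= 2.0:                     # uniform prior on [0, 2]
                return -np.inf
            mu = gp.predict(np.array([[theta]]))[0]         # emulator replaces eta(theta)
            return -0.5 * ((y_obs - mu) / sigma) ** 2

        # Random-walk Metropolis on the emulated posterior (cheap, no further model runs).
        samples, theta, lp = [], 1.0, log_post(1.0)
        for _ in range(5000):
            prop = theta + rng.normal(0.0, 0.1)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta)

        print("posterior mean of theta:", round(float(np.mean(samples[1000:])), 3))

    The real calibration additionally propagates the emulator's own uncertainty and a model-discrepancy term into the likelihood; the sketch keeps only the structural idea of swapping the expensive simulator for its response surface inside the MCMC loop.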

  20. COMDES-II: A Component-Based Framework for Generative Development of Distributed Real-Time Control Systems

    DEFF Research Database (Denmark)

    Ke, Xu; Sierszecki, Krzysztof; Angelov, Christo K.

    2007-01-01

    The paper presents a generative development methodology and component models of COMDES-II, a component-based software framework for distributed embedded control systems with real-time constraints. The adopted methodology allows for rapid modeling and validation of control software at a higher level of abstraction. The paper presents the methodology for COMDES-II from a general perspective, describes the component models in detail and demonstrates their application through a DC-Motor control system case study.

  1. Scale modeling flow-induced vibrations of reactor components

    International Nuclear Information System (INIS)

    Mulcahy, T.M.

    1982-06-01

    Similitude relationships currently employed in the design of flow-induced vibration scale-model tests of nuclear reactor components are reviewed. Emphasis is given to understanding the origins of the similitude parameters as a basis for discussion of the inevitable distortions which occur in design verification testing of entire reactor systems and in feature testing of individual component designs for the existence of detrimental flow-induced vibration mechanisms. Distortions of similitude parameters made in current test practice are enumerated and selected example tests are described. Also, limitations in the use of specific distortions in model designs are evaluated based on the current understanding of flow-induced vibration mechanisms and structural response

  2. Optimal design of multi-state weighted k-out-of-n systems based on component design

    International Nuclear Information System (INIS)

    Li Wei; Zuo, Ming J.

    2008-01-01

    This paper presents a study on design optimization of multi-state weighted k-out-of-n systems. The studied system reliability model is more general than the traditional k-out-of-n system model. The system and its components are capable of assuming a whole range of performance levels, varying from perfect functioning to complete failure. A utility value corresponding to each state is used to indicate the corresponding performance level. A widely studied reliability optimization problem is the 'component selection problem', which involves selection of components with known reliability and cost characteristics. Less adequately addressed has been the problem of determining system cost and utility based on the relationships between component reliability, cost and utility. This paper addresses this topic. All the optimization problems dealt with in this paper can be categorized as either minimizing the expected total system cost subject to system reliability requirements, or maximizing system reliability subject to a total system cost limitation. The resulting optimization problems are too complicated to be solved by traditional optimization approaches; therefore, a genetic algorithm (GA) is used to solve them. Our results show that GA is a powerful tool for solving these kinds of problems.
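
    The evaluation step underneath such an optimization, computing the probability that a multi-state weighted system meets a demand level, can be sketched by brute-force enumeration; the three-component example, its state probabilities, weights and threshold are invented for illustration, and the paper's GA searches over component design choices on top of an evaluation like this:

        import numpy as np
        from itertools import product

        # Hypothetical 3-component system; each component occupies state 0, 1 or 2.
        state_probs = [np.array([0.1, 0.3, 0.6]),
                       np.array([0.2, 0.3, 0.5]),
                       np.array([0.1, 0.2, 0.7])]
        weights = np.array([2, 1, 3])     # per-component weight (contribution per state level)
        k = 7                             # demand: system "works" if the weighted state sum >= k

        def system_reliability(state_probs, weights, k):
            """P(sum_i w_i * s_i >= k) by enumerating all component-state combinations."""
            rel = 0.0
            for states in product(*[range(len(p)) for p in state_probs]):
                p = np.prod([state_probs[i][s] for i, s in enumerate(states)])
                if np.dot(weights, np.array(states)) >= k:
                    rel += p
            return rel

        print("system reliability:", round(system_reliability(state_probs, weights, k), 4))

    Enumeration is exponential in the number of components, which is one reason the design problem itself (choosing component versions subject to cost or reliability constraints) is handed to a genetic algorithm in the paper; universal generating function techniques are a standard way to make the evaluation itself scale.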

  3. A Bio-Inspired Model-Based Approach for Context-Aware Post-WIMP Tele-Rehabilitation

    Directory of Open Access Journals (Sweden)

    Víctor López-Jaquero

    2016-10-01

    Tele-rehabilitation is one of the main domains where Information and Communication Technologies (ICT) have been proven useful in moving healthcare from care centers to patients' homes. Moreover, patients, especially those carrying out physical therapy, cannot use a traditional Window, Icon, Menu, Pointer (WIMP) system; they need to interact in a natural way, that is, there is a need to move from WIMP systems to Post-WIMP ones. Furthermore, tele-rehabilitation systems should be developed following the context-aware approach, so that they are able to adapt to the patients' context to provide them with usable and effective therapies. In this work a model-based approach is presented to assist stakeholders in the development of context-aware Post-WIMP tele-rehabilitation systems. It entails three different models: (i) a task model for designing the rehabilitation tasks; (ii) a context model to facilitate the adaptation of these tasks to the context; and (iii) a bio-inspired presentation model to specify thoroughly how such tasks should be performed by the patients. Our proposal overcomes one of the limitations of the model-based approach for the development of context-aware systems by supporting the specification of non-functional requirements. Finally, a case study is used to illustrate how this proposal can be put into practice to design a real world rehabilitation task.

  4. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland Washington USA

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are witnessed through diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad catchment has a homogeneous response, making the single model adequate to capture the response.

  5. A Nonparametric Operational Risk Modeling Approach Based on Cornish-Fisher Expansion

    Directory of Open Access Journals (Sweden)

    Xiaoqian Zhu

    2014-01-01

    It is generally accepted that the choice of severity distribution in the loss distribution approach has a significant effect on the operational risk capital estimation. However, the commonly used parametric approaches with predefined distribution assumptions might not be able to fit the severity distribution accurately. The objective of this paper is to propose a nonparametric operational risk modeling approach based on Cornish-Fisher expansion. In this approach, the samples of severity are generated by Cornish-Fisher expansion and then used in the Monte Carlo simulation to sketch the annual operational loss distribution. In the experiment, the proposed approach is employed to calculate the operational risk capital charge for the overall Chinese banking sector. The experiment dataset is the most comprehensive operational risk dataset in China as far as we know. The results show that the proposed approach is able to use the information of higher order moments and might be more effective and stable than the commonly used parametric approach.
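
    A minimal sketch of the procedure described above, under invented data: the skewness and excess kurtosis of hypothetical historical losses feed a Cornish-Fisher adjustment of standard-normal quantiles to generate severity samples, which are then aggregated by Monte Carlo into an annual loss distribution. The loss data, Poisson frequency and reported quantile are placeholders, not the Chinese banking dataset of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical historical severity losses (placeholder for real loss data)
losses = rng.lognormal(mean=10.0, sigma=1.2, size=300)

# Empirical moments of the severity distribution
mu, sd = losses.mean(), losses.std(ddof=1)
skew = stats.skew(losses)
ekurt = stats.kurtosis(losses)           # excess kurtosis

def cornish_fisher_sample(n):
    """Draw severity samples by adjusting standard-normal quantiles."""
    z = rng.standard_normal(n)
    w = (z
         + (z ** 2 - 1.0) * skew / 6.0
         + (z ** 3 - 3.0 * z) * ekurt / 24.0
         - (2.0 * z ** 3 - 5.0 * z) * skew ** 2 / 36.0)
    return np.maximum(mu + sd * w, 0.0)  # losses cannot be negative

# Monte Carlo of the annual aggregate loss: Poisson frequency, CF-based severity
n_years, lam = 20000, 25.0
annual = np.array([cornish_fisher_sample(rng.poisson(lam)).sum()
                   for _ in range(n_years)])

print("99.9% quantile of the annual loss (capital estimate):",
      float(np.quantile(annual, 0.999)))
```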

  6. General model for Pc-based simulation of PWR and BWR plant components

    Energy Technology Data Exchange (ETDEWEB)

    Ratemi, W M; Abomustafa, A M [Faculty of Engineering, Alfateh University, Tripoli (Libyan Arab Jamahiriya)]

    1995-10-01

    In this paper, we present a basic mathematical model derived from physical principles to suit the simulation of PWR components such as the pressurizer, intact steam generator and ruptured steam generator, as well as the reactor component of a BWR plant. In our development, we produced an NMMS package for nuclear modular modelling simulation. The package is installed on a personal computer and is designed to be user friendly through a color graphics window interface. The package works under three environments, namely, pre-processor, simulation, and post-processor. Our analysis of the results, using a cross-graphing technique for the steam generator tube rupture (SGTR) accident, yielded a new proposal for on-line monitoring of the SGTR-accident control strategy for nuclear or conventional power plants. 4 figs.

  7. Pattern-based approach for logical traffic isolation forensic modelling

    CSIR Research Space (South Africa)

    Dlamini, I

    2009-08-01

    ... reusability and flexibility of the LTI model. This model is viewed as a three-tier architecture, which for experimental purposes is composed of the following components: traffic generator, DiffServ network and the sink server. The Mediator pattern is used...

  8. Modelling and Generating Ajax Applications : A Model-Driven Approach

    NARCIS (Netherlands)

    Gharavi, V.; Mesbah, A.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008 AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction

  9. Blended Risk Approach in Applying PSA Models to Risk-Based Regulations

    International Nuclear Information System (INIS)

    Dimitrijevic, V. B.; Chapman, J. R.

    1996-01-01

    In this paper, the authors will discuss a modern approach in applying PSA models in risk-based regulation. The Blended Risk Approach is a combination of traditional and probabilistic processes. It is receiving increased attention in different industries in the U. S. and abroad. The use of the deterministic regulations and standards provides a proven and well understood basis on which to assess and communicate the impact of change to plant design and operation. Incorporation of traditional values into risk evaluation is working very well in the blended approach. This approach is very application specific. It includes multiple risk attributes, qualitative risk analysis, and basic deterministic principles. In blending deterministic and probabilistic principles, this approach ensures that the objectives of the traditional defense-in-depth concept are not compromised and the design basis of the plant is explicitly considered. (author)

  10. Agent-Based Approach for Modelling the Labour Migration from China to Russia

    Directory of Open Access Journals (Sweden)

    Valeriy Leonidovich Makarov

    2017-06-01

    The article describes the process of labour migration from China to Russia and shows its modelling using the agent-based approach. This approach allows us to simulate an artificial society in a computer program taking into account the diversity of individuals under consideration, as well as to model a set of laws and rules of conduct that make up the institutional environment in which the members of this society live. A brief review and analysis of agent-based migration models presented in the foreign literature are given. The agent-based model of labour migration from China to Russia developed by the Central Economic Mathematical Institute of the Russian Academy of Sciences simulates human behaviour close to reality, based on the internal purposes that determine the agents' choice of territory as a place of residence. Therefore, in developing the model's agents and their behaviour algorithms, as well as the organization of the environment in which they exist and interact, the main characteristics of the populations of the two neighbouring countries and their demographic processes have been considered. Using the model, two experiments have been conducted. The purpose of the first was to assess the effect of the depreciation of the rouble against the yuan on the overall indexes of labour migration, as well as on its structure. In the second experiment, the procedure by which agents search for information for migration decision-making was changed: all aggregate information on the average salary by type of activity and skill level of employees, both in China and Russia, became available to all agents irrespective of their qualification level.
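
    The agent-based logic can be illustrated with a deliberately tiny toy model: each agent compares expected earnings abroad, converted at a given exchange rate, with earnings at home plus a personal migration cost. All wages, costs and rates below are invented, and the behaviour rule is far simpler than the CEMI model described in the record; the sketch only shows how an exchange-rate experiment of the first kind might be wired up.

```python
import numpy as np

rng = np.random.default_rng(2)

class Worker:
    """A candidate migrant with a skill level and a personal migration cost."""
    def __init__(self):
        self.skill = rng.uniform(0.5, 2.0)
        self.move_cost = rng.uniform(0.0, 1.0)

    def migrates(self, wage_home, wage_abroad, fx_rate):
        # Earnings abroad are converted back to the home currency via fx_rate
        gain = self.skill * (wage_abroad * fx_rate - wage_home)
        return gain > self.move_cost

def migration_share(fx_rate, wage_home=1.0, wage_abroad=1.5, n_agents=50000):
    agents = [Worker() for _ in range(n_agents)]
    return np.mean([a.migrates(wage_home, wage_abroad, fx_rate) for a in agents])

# Exchange-rate experiment: a weaker destination currency (lower conversion
# rate back to the home currency) makes migration less attractive in this toy
for fx in (0.6, 0.8, 1.0):
    print(f"conversion rate {fx:.1f}: migrating share {migration_share(fx):.1%}")
```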

  11. A regularized, model-based approach to phase-based conductivity mapping using MRI.

    Science.gov (United States)

    Ropella, Kathleen M; Noll, Douglas C

    2017-11-01

    To develop a novel regularized, model-based approach to phase-based conductivity mapping that uses structural information to improve the accuracy of conductivity maps. The inverse of the three-dimensional Laplacian operator is used to model the relationship between measured phase maps and the object conductivity in a penalized weighted least-squares optimization problem. Spatial masks based on structural information are incorporated into the problem to preserve data near boundaries. The proposed Inverse Laplacian method was compared against a restricted Gaussian filter in simulation, phantom, and human experiments. The Inverse Laplacian method resulted in lower reconstruction bias and error due to noise in simulations than the Gaussian filter. The Inverse Laplacian method also produced conductivity maps closer to the measured values in a phantom and with reduced noise in the human brain, as compared to the Gaussian filter. The Inverse Laplacian method calculates conductivity maps with less noise and more accurate values near boundaries. Improving the accuracy of conductivity maps is integral for advancing the applications of conductivity mapping. Magn Reson Med 78:2011-2021, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  12. Predictive based monitoring of nuclear plant component degradation using support vector regression

    International Nuclear Information System (INIS)

    Agarwal, Vivek; Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.

    2015-01-01

    Nuclear power plants (NPPs) are large installations comprising many active and passive assets. Degradation monitoring of all these assets is an expensive (labor cost) and highly demanding task. In this paper a framework based on Support Vector Regression (SVR) for online surveillance of critical parameter degradation of NPP components is proposed. On-time replacement or maintenance of components will then prevent potential plant malfunctions and reduce the overall operational cost. In the current work, we apply SVR equipped with a Gaussian kernel function to monitor components. Monitoring includes the one-step-ahead prediction of the component's respective operational quantity using the SVR model, while the SVR model is trained using a set of previously recorded degradation histories of similar components. The predictive capability of the model is evaluated upon arrival of a sensor measurement, which is compared to the component failure threshold. A maintenance decision is based on a fuzzy inference system that utilizes three parameters: (i) the prediction evaluation in the previous steps, (ii) the predicted value at the current step, and (iii) the difference between the current predicted value and the component's failure threshold. The proposed framework will be tested on turbine blade degradation data.
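
    A rough sketch of the surveillance loop described above, using scikit-learn's SVR with an RBF (Gaussian) kernel: the model is trained on synthetic degradation histories of similar components and then used for one-step-ahead prediction against a failure threshold. The degradation signal, lag order, hyperparameters and threshold are all hypothetical, and the fuzzy inference layer of the proposed framework is omitted.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Hypothetical degradation histories (drift + oscillation + noise)
def history(n=200):
    t = np.arange(n)
    return 0.02 * t + 0.4 * np.sin(0.1 * t) + rng.normal(0.0, 0.05, n)

def lagged(series, lags=5):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

# Train on previously recorded histories of similar components
pairs = [lagged(history()) for _ in range(5)]
X_train = np.vstack([p[0] for p in pairs])
y_train = np.concatenate([p[1] for p in pairs])
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)

# Online surveillance: one-step-ahead prediction compared with a failure threshold
threshold = 3.0
monitored = history()
for t in range(5, len(monitored)):
    pred = model.predict(monitored[t - 5:t].reshape(1, -1))[0]
    if pred >= threshold:
        print(f"step {t}: predicted value {pred:.2f} exceeds threshold, plan maintenance")
        break
```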

  13. A Formal Approach for RT-DVS Algorithms Evaluation Based on Statistical Model Checking

    Directory of Open Access Journals (Sweden)

    Shengxin Dai

    2015-01-01

    Energy saving is a crucial concern in embedded real time systems. Many RT-DVS algorithms have been proposed to save energy while preserving deadline guarantees. This paper presents a novel approach to evaluate RT-DVS algorithms using statistical model checking. A scalable framework is proposed for RT-DVS algorithms evaluation, in which the relevant components are modeled as stochastic timed automata, and the evaluation metrics including utilization bound, energy efficiency, battery awareness, and temperature awareness are expressed as statistical queries. Evaluation of these metrics is performed by verifying the corresponding queries using UPPAAL-SMC and analyzing the statistical information provided by the tool. We demonstrate the applicability of our framework via a case study of five classical RT-DVS algorithms.

  14. Excellent approach to modeling urban expansion by fuzzy cellular automata: agent base model

    Science.gov (United States)

    Khajavigodellou, Yousef; Alesheikh, Ali A.; Mohammed, Abdulrazak A. S.; Chapi, Kamran

    2014-09-01

    The interaction between humans and their environment is one of the important challenges in the world today. Land use/cover change (LUCC) is a complex process that includes actors and factors at different social and spatial levels. The complexity and dynamics of urban systems make the practice of urban modeling very difficult. With increased computational power and the greater availability of spatial data, micro-simulation methods such as agent-based and cellular automata simulation have been developed by geographers, planners, and scholars, and they have shown great potential for representing and simulating the complexity of the dynamic processes involved in urban growth and land use change. This paper presents fuzzy cellular automata combined with geospatial information systems and remote sensing to simulate and predict urban expansion patterns. These FCA-based dynamic spatial urban models provide an improved ability to forecast and assess future urban growth and to create planning scenarios, allowing us to explore the potential impacts of simulations that correspond to urban planning and management policies. In the fuzzy inference guided cellular automata approach, semantic or linguistic knowledge on land use change is expressed as fuzzy rules, based on which fuzzy inference is applied to determine the urban development potential for each pixel. The model integrates an ABM (agent-based model) and FCA (fuzzy cellular automata) to investigate a complex decision-making process and future urban dynamic processes. Based on this model, rapid development and green land protection under the influence of the behaviors and decision modes of regional authority agents, real estate developer agents, resident agents and non-resident agents, and their interactions, have been used to predict the future development patterns of the Erbil metropolitan region.

  15. Principal component analysis acceleration of rovibrational coarse-grain models for internal energy excitation and dissociation

    Science.gov (United States)

    Bellemans, Aurélie; Parente, Alessandro; Magin, Thierry

    2018-04-01

    The present work introduces a novel approach for obtaining reduced chemistry representations of large kinetic mechanisms in strong non-equilibrium conditions. The need for accurate reduced-order models arises from compression of large ab initio quantum chemistry databases for their use in fluid codes. The method presented in this paper builds on existing physics-based strategies and proposes a new approach based on the combination of a simple coarse grain model with Principal Component Analysis (PCA). The internal energy levels of the chemical species are regrouped in distinct energy groups with a uniform lumping technique. Following the philosophy of machine learning, PCA is applied on the training data provided by the coarse grain model to find an optimally reduced representation of the full kinetic mechanism. Compared to recently published complex lumping strategies, no expert judgment is required before the application of PCA. In this work, we will demonstrate the benefits of the combined approach, stressing its simplicity, reliability, and accuracy. The technique is demonstrated by reducing the complex quantum N2(1Σg+)–N(4Su) database for studying molecular dissociation and excitation in strong non-equilibrium. Starting from detailed kinetics, an accurate reduced model is developed and used to study non-equilibrium properties of the N2(1Σg+)–N(4Su) system in shock relaxation simulations.

  16. Automatic writer identification using connected-component contours and edge-based features of uppercase Western script.

    Science.gov (United States)

    Schomaker, Lambert; Bulacu, Marius

    2004-06-01

    In this paper, a new technique for offline writer identification is presented, using connected-component contours (COCOCOs or CO3s) in uppercase handwritten samples. In our model, the writer is considered to be characterized by a stochastic pattern generator, producing a family of connected components for the uppercase character set. Using a codebook of CO3s from an independent training set of 100 writers, the probability-density function (PDF) of CO3s was computed for an independent test set containing 150 unseen writers. Results revealed a high sensitivity of the CO3 PDF for identifying individual writers on the basis of a single sentence of uppercase characters. The proposed automatic approach bridges the gap between image-statistics approaches on one end and manually measured allograph features of individual characters on the other end. Combining the CO3 PDF with an independent edge-based orientation and curvature PDF yielded very high correct identification rates.

  17. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    Science.gov (United States)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

    Reduced Order Modeling (ROM) can be used as a surrogate for prohibitively expensive simulations to model flow behavior for long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial intelligence based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches will be elucidated and further developments discussed.
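
    The POD-plus-LSTM idea can be sketched as follows, assuming PyTorch is available: snapshots of a synthetic travelling wave stand in for CFD data, their POD time coefficients are obtained by SVD, and an LSTM is trained to predict the coefficients at the next timestep from a short window of past coefficients. The network size, window length and training budget are arbitrary choices, not those of the paper.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical flow snapshots (space x time): a travelling-wave stand-in for CFD data
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 20, 400)
snapshots = np.sin(x[:, None] - t[None, :]) + 0.3 * np.sin(3 * x[:, None] + 2 * t[None, :])

# POD via SVD: time coefficients of the leading modes
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
n_modes = 4
coeffs = (np.diag(s[:n_modes]) @ Vt[:n_modes]).T          # shape (time, modes)

# Sequences: predict the coefficients at step k+1 from the previous `window` steps
window = 10
X = np.stack([coeffs[i:i + window] for i in range(len(coeffs) - window)])
Y = coeffs[window:]
X_t = torch.tensor(X, dtype=torch.float32)
Y_t = torch.tensor(Y, dtype=torch.float32)

class CoeffLSTM(nn.Module):
    def __init__(self, n_modes, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_modes, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_modes)

    def forward(self, seq):
        h, _ = self.lstm(seq)
        return self.out(h[:, -1])                          # last hidden state only

model = CoeffLSTM(n_modes)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_t), Y_t)
    loss.backward()
    opt.step()

print("training MSE on POD time coefficients:", float(loss))
```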

  18. Innovation Networks New Approaches in Modelling and Analyzing

    CERN Document Server

    Pyka, Andreas

    2009-01-01

    The science of graphs and networks has become by now a well-established tool for modelling and analyzing a variety of systems with a large number of interacting components. Starting from the physical sciences, applications have spread rapidly to the natural and social sciences, as well as to economics, and are now further extended, in this volume, to the concept of innovations, viewed broadly. In an abstract, systems-theoretical approach, innovation can be understood as a critical event which destabilizes the current state of the system, and results in a new process of self-organization leading to a new stable state. The contributions to this anthology address different aspects of the relationship between innovation and networks. The various chapters incorporate approaches in evolutionary economics, agent-based modeling, social network analysis and econophysics and explore the epistemic tension between insights into economics and society-related processes, and the insights into new forms of complex dynamics.

  19. An open, object-based modeling approach for simulating subsurface heterogeneity

    Science.gov (United States)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.

  20. Thermochemical modelling of multi-component systems

    International Nuclear Information System (INIS)

    Sundman, B.; Gueneau, C.

    2015-01-01

    Computational thermodynamics, also known as the Calphad method, is a standard tool in industry for the development of materials and the improvement of processes, and there is intense scientific development of new models and databases. The calculations are based on thermodynamic models of the Gibbs energy for each phase as a function of temperature, pressure and constitution. Model parameters are stored in databases that are developed in an international scientific collaboration. In this way, consistent and reliable data for many properties like heat capacity, chemical potentials, solubilities etc. can be obtained for multi-component systems. A brief introduction to this technique is given here and references to more extensive documentation are provided. (authors)

  1. Lifetime assessment of thick-walled components made of nickel-base alloys under near-service loading conditions

    International Nuclear Information System (INIS)

    Hueggenberg, Daniel

    2015-01-01

    According to the Renewable Energy law, renewable energies should provide 80% of the power in Germany by 2050. For that reason, conventional power plants are no longer used for base load, but rather for the supply of average and peak load. The change of operating mode leads to shorter times at stationary temperatures, and the number of fast start-ups/shut-downs of the power plants will increase. As a result, the components are exposed to an interacting load of creep and fatigue, which reduces their lifetimes. The aim of this thesis is the development and verification of a lifetime assessment procedure for components made of the nickel-base alloys Alloy 617 mod. and Alloy 263 under creep-fatigue loading conditions, based on numerical phenomenological models and on the approaches of different standards/recommendations. The focus lies on two components of the high temperature material test rig II (HWT II): a header made of Alloy 617 mod. and Alloy 263, as well as a formed part made of Alloy 617 mod. For the basic characterization of the HWT II melts, specimens of Alloy 617 mod. and Alloy 263 are tested in uniaxial tensile tests, (creep-)fatigue tests, creep tests and Charpy tests in a temperature range between 20 °C and 725 °C. Comparison of the test results with the material specifications and with the results of the projects COORETEC DE4, MARCKO DE2 and MARCKO700 revealed no deviations for either material, with the exception of the creep test results for the Alloy 617 mod. material. The creep tests with Alloy 617 mod. material of the HWT II melt show differences regarding the deformation and damage behavior. In addition to the basic characterization tests, some complex laboratory tests for the characterization of the material behavior under creep-fatigue and multiaxial loading conditions were conducted. The development of the microstructure, the precipitations, as well as the dislocation structure are investigated in the light optical microscope.

  2. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    2008-06-01

    Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.

  3. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.

  4. Nonlinear Modeling of the PEMFC Based On NNARX Approach

    OpenAIRE

    Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo

    2015-01-01

    The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system. The traditional linear modeling approach can hardly estimate the structure of the PEMFC system correctly. For this reason, this paper presents nonlinear modeling of the PEMFC using the Neural Network Auto-Regressive model with eXogenous inputs (NNARX) approach. The multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accuracy...
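
    A minimal NNARX-style sketch, assuming a surrogate data-generating process in place of real PEMFC measurements: past outputs and past inputs form the regressor vector, and a small MLP (scikit-learn's MLPRegressor) maps the regressors to the current output for one-step-ahead prediction. The lag orders and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Surrogate data: input current density u(k) and a nonlinear dynamic "voltage" y(k)
n = 1000
u = 0.5 + 0.3 * np.sin(0.02 * np.arange(n)) + 0.05 * rng.standard_normal(n)
y = np.zeros(n)
for k in range(2, n):
    y[k] = (0.7 * y[k - 1] - 0.1 * y[k - 2]
            + 0.8 * np.tanh(1.5 * u[k - 1]) + 0.01 * rng.standard_normal())

# NNARX regressors: past outputs and past inputs predict the current output
na, nb = 2, 2
start = max(na, nb)
X = np.array([np.r_[y[k - na:k], u[k - nb:k]] for k in range(start, n)])
target = y[start:]

split = int(0.7 * len(X))
mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X[:split], target[:split])
print("one-step-ahead R^2 on held-out data:",
      round(mlp.score(X[split:], target[split:]), 3))
```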

  5. Improving stability of prediction models based on correlated omics data by using network approaches.

    Directory of Open Access Journals (Sweden)

    Renaud Tissier

    Building prediction models based on complex omics datasets such as transcriptomics, proteomics, and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection where the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are (1) network construction, (2) clustering to empirically derive modules or pathways, and (3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally we illustrate the advantages of our approach by application of the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell lines pharmacogenomics dataset.
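
    The three-step strategy can be mocked up on synthetic correlated data: (1) build a correlation network over the features, (2) cluster features into modules by hierarchical clustering, and (3) summarize each module, here by its first principal component, and fit a regularized regression on the summaries. This is only one of several variants the paper considers, and the data, module count and penalization choice are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(6)

# Synthetic omics matrix: 100 samples x 60 correlated features, plus an outcome
n, p = 100, 60
base = rng.standard_normal((n, 6))
X = np.repeat(base, 10, axis=1) + 0.5 * rng.standard_normal((n, p))
y = base[:, 0] - 0.8 * base[:, 3] + 0.3 * rng.standard_normal(n)

# Step 1: weighted correlation network (absolute Pearson correlation as edge weight)
corr = np.abs(np.corrcoef(X, rowvar=False))

# Step 2: hierarchical clustering of features into modules
dist = 1.0 - corr
condensed = dist[np.triu_indices(p, k=1)]
modules = fcluster(linkage(condensed, method="average"), t=6, criterion="maxclust")

# Step 3: summarise each module by its first principal component ("eigengene")
# and build the prediction model on the module summaries
summaries = []
for m in np.unique(modules):
    block = X[:, modules == m]
    block = block - block.mean(axis=0)
    u_svd, s_svd, _ = np.linalg.svd(block, full_matrices=False)
    summaries.append(u_svd[:, 0] * s_svd[0])
Z = np.column_stack(summaries)

model = RidgeCV().fit(Z[:70], y[:70])
print("module-based prediction R^2 on held-out samples:",
      round(model.score(Z[70:], y[70:]), 3))
```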

  6. Toward a Model-Based Approach to Flight System Fault Protection

    Science.gov (United States)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Development of approaches that enable modeling of FP concerns in the same model as the system hardware and software design enables establishment of formal relationships that have great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in flight software (FSW) engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.

  7. Improved predictive model for n-decane kinetics across species, as a component of hydrocarbon mixtures.

    Science.gov (United States)

    Merrill, E A; Gearhart, J M; Sterner, T R; Robinson, P J

    2008-07-01

    n-Decane is considered a major component of various fuels and industrial solvents. These hydrocarbon products are complex mixtures of hundreds of components, including straight-chain alkanes, branched chain alkanes, cycloalkanes, diaromatics, and naphthalenes. Human exposures to the jet fuel, JP-8, or to industrial solvents in vapor, aerosol, and liquid forms all have the potential to produce health effects, including immune suppression and/or neurological deficits. A physiologically based pharmacokinetic (PBPK) model has previously been developed for n-decane, in which partition coefficients (PC), fitted to 4-h exposure kinetic data, were used in preference to measured values. The greatest discrepancy between fitted and measured values was for fat, where PC values were changed from 250-328 (measured) to 25 (fitted). Such a large change in a critical parameter, without any physiological basis, greatly impedes the model's extrapolative abilities, as well as its applicability for assessing the interactions of n-decane or similar alkanes with other compounds in a mixture model. Due to these limitations, the model was revised. Our approach emphasized the use of experimentally determined PCs because many tissues had not approached steady-state concentrations by the end of the 4-h exposures. Diffusion limitation was used to describe n-decane kinetics for the brain, perirenal fat, skin, and liver. Flow limitation was used to describe the remaining rapidly and slowly perfused tissues. As expected from the high lipophilicity of this semivolatile compound (log K(ow) = 5.25), sensitivity analyses showed that parameters describing fat uptake were next to blood:air partitioning and pulmonary ventilation as critical in determining overall systemic circulation and uptake in other tissues. In our revised model, partitioning into fat took multiple days to reach steady state, which differed considerably from the previous model that assumed steady-state conditions in fat at 4 h post

  8. A Multi-Model Approach for System Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur

    2007-01-01

    A multi-model approach for system diagnosis is presented in this paper. The relation with fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one is the best. It is based on an active approach, i.e. an auxiliary input to the system is applied. The multi-model approach is applied to a wind turbine system.

  9. Hybrid Neural Network Approach Based Tool for the Modelling of Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Antonino Laudani

    2015-01-01

    A hybrid neural network approach based tool for identifying the photovoltaic one-diode model is presented. The generalization capabilities of neural networks are used together with the robustness of the reduced form of the one-diode model. Indeed, from the studies performed by the authors and the works present in the literature, it was found that a direct computation of the five parameters via a multiple-input multiple-output neural network is a very difficult task. The reduced form consists of a series of explicit formulae supporting the neural network which, in our case, is aimed at predicting just two of the five parameters identifying the model; the other three parameters are computed from the reduced form. The present hybrid approach is efficient from the computational cost point of view and accurate in the estimation of the five parameters. It constitutes a complete and extremely easy-to-use tool suitable for implementation in a microcontroller based architecture. Validations are made on about 10000 PV panels belonging to the California Energy Commission database.

  10. Contact- and distance-based principal component analysis of protein dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Ernst, Matthias; Sittel, Florian; Stock, Gerhard, E-mail: stock@physik.uni-freiburg.de [Biomolecular Dynamics, Institute of Physics, Albert Ludwigs University, 79104 Freiburg (Germany)

    2015-12-28

    To interpret molecular dynamics simulations of complex systems, systematic dimensionality reduction methods such as principal component analysis (PCA) represent a well-established and popular approach. Apart from Cartesian coordinates, internal coordinates, e.g., backbone dihedral angles or various kinds of distances, may be used as input data in a PCA. Adopting two well-known model problems, folding of villin headpiece and the functional dynamics of BPTI, a systematic study of PCA using distance-based measures is presented which employs distances between Cα atoms as well as distances between inter-residue contacts including side chains. While this approach seems prohibitive for larger systems due to the quadratic scaling of the number of distances with the size of the molecule, it is shown that it is sufficient (and sometimes even better) to include only relatively few selected distances in the analysis. The quality of the PCA is assessed by considering the resolution of the resulting free energy landscape (to identify metastable conformational states and barriers) and the decay behavior of the corresponding autocorrelation functions (to test the time scale separation of the PCA). By comparing results obtained with distance-based, dihedral angle, and Cartesian coordinates, the study shows that the choice of input variables may drastically influence the outcome of a PCA.
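
    A small sketch of distance-based PCA on a made-up two-state "trajectory": a subset of inter-atom distances (skipping near neighbours) is used as the PCA input instead of Cartesian coordinates, and the leading component separates the two metastable states. The atom count, distance-selection rule and state geometry are invented for illustration.

```python
import numpy as np
from itertools import combinations
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)

# Made-up trajectory: 500 frames of 12 "C-alpha" positions hopping between two states
n_frames, n_atoms = 500, 12
state = rng.integers(0, 2, n_frames)
centers = 2.0 * rng.standard_normal((2, n_atoms, 3))
traj = centers[state] + 0.3 * rng.standard_normal((n_frames, n_atoms, 3))

# Input features: a selected subset of inter-atom distances (skip near neighbours)
pairs = [(i, j) for i, j in combinations(range(n_atoms), 2) if j - i >= 3]
dists = np.array([[np.linalg.norm(frame[i] - frame[j]) for i, j in pairs]
                  for frame in traj])

pca = PCA(n_components=2).fit(dists)
proj = pca.transform(dists)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
print("mean PC1 value per state:",
      round(float(proj[state == 0, 0].mean()), 2),
      round(float(proj[state == 1, 0].mean()), 2))
```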

  11. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is low because failure propagation is overlooked. To rectify these problems, a new reliability evaluation method based on cascading failure analysis and failure influenced degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and it shows the following: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component presents a positive correlation with the failure influenced degree of that component, which provides a theoretical basis for reliability allocation of the machine center system.
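
    The failure-influence step can be illustrated with networkx: a hypothetical cascading-failure digraph is built, and PageRank is run on the reversed graph so that components whose failures propagate to many others score highest. The component names, edges and damping factor are placeholders, and this is a simplified reading of the adjacency-matrix/PageRank step, not the paper's exact formulation.

```python
import networkx as nx

# Hypothetical cascading-failure digraph: an edge u -> v means that a failure
# of component u can propagate to component v
edges = [
    ("spindle", "tool_magazine"), ("spindle", "feed_axis"),
    ("hydraulic", "spindle"), ("hydraulic", "feed_axis"),
    ("coolant", "spindle"), ("electrical", "hydraulic"),
    ("electrical", "coolant"), ("feed_axis", "tool_magazine"),
]
G = nx.DiGraph(edges)

# Failure-influence score: PageRank on the reversed graph, so components whose
# failures reach many others (directly or indirectly) score highest
influence = nx.pagerank(G.reverse(copy=True), alpha=0.85)

for comp, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{comp:>14s}: failure influence score {score:.3f}")
```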

  12. Modeling Psychological Contract Violation using Dual Regime Models: An Event-based Approach

    Directory of Open Access Journals (Sweden)

    Joeri Hofmans

    2017-11-01

    A good understanding of the dynamics of psychological contract violation requires theories, research methods and statistical models that explicitly recognize that violation feelings follow from an event that violates one's acceptance limits, after which interpretative processes are set into motion, determining the intensity of these violation feelings. Whereas theories—in the form of the dynamic model of the psychological contract—and research methods—in the form of daily diary research and experience sampling research—are available by now, the statistical tools to model such a two-stage process are still lacking. The aim of the present paper is to fill this gap in the literature by introducing two statistical models—the Zero-Inflated model and the Hurdle model—that closely mimic the theoretical process underlying the elicitation of violation feelings via two model components: a binary distribution that models whether violation has occurred or not, and a count distribution that models how severe the negative impact is. Moreover, covariates can be included for both model components separately, which yields insight into their unique and shared antecedents. By doing this, the present paper offers a methodological-substantive synergy, showing how sophisticated methodology can be used to examine an important substantive issue.
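
    A rough hurdle-model sketch on simulated diary data, using statsmodels: a logistic regression models whether a violation occurred at all, and a Poisson regression on the positive observations models its severity. A full hurdle model would use a zero-truncated count distribution for the second part; the plain Poisson GLM here is a simplification, and the covariate and coefficients are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)

# Simulated diary data: daily violation-intensity ratings (0 = no violation felt)
n = 800
trust = rng.standard_normal(n)                     # an invented covariate
p_violation = 1.0 / (1.0 + np.exp(0.5 + 1.0 * trust))
occurred = rng.uniform(size=n) < p_violation
intensity = np.where(occurred, rng.poisson(np.exp(0.8 - 0.4 * trust)) + 1, 0)

X = sm.add_constant(trust)

# Hurdle component 1: does a violation occur at all? (binary part)
occurrence = sm.Logit(occurred.astype(float), X).fit(disp=0)

# Hurdle component 2: given that it occurred, how severe is it? (count part)
# Note: the zero-truncation correction of a full hurdle model is omitted here.
pos = intensity > 0
severity = sm.GLM(intensity[pos], X[pos], family=sm.families.Poisson()).fit()

print("occurrence (logit) coefficients:", np.round(occurrence.params, 2))
print("severity (Poisson) coefficients:", np.round(severity.params, 2))
```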

  13. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.

  14. Fault diagnosis of generation IV nuclear HTGR components – Part II: The area error enthalpy–entropy graph approach

    International Nuclear Information System (INIS)

    Rand, C.P. du; Schoor, G. van

    2012-01-01

    Highlights: ► Different uncorrelated fault signatures are derived for HTGR component faults. ► A multiple classifier ensemble increases confidence in classification accuracy. ► Detailed simulation model of system is not required for fault diagnosis. - Abstract: The second paper in a two part series presents the area error method for generation of representative enthalpy–entropy (h–s) fault signatures to classify malfunctions in generation IV nuclear high temperature gas-cooled reactor (HTGR) components. The second classifier is devised to ultimately address the fault diagnosis (FD) problem via the proposed methods in a multiple classifier (MC) ensemble. FD is realized by way of different input feature sets to the classification algorithm based on the area and trajectory of the residual shift between the fault-free and the actual operating h–s graph models. The application of the proposed technique is specifically demonstrated for 24 single fault transients considered in the main power system (MPS) of the Pebble Bed Modular Reactor (PBMR). The results show that the area error technique produces different fault signatures with low correlation for all the examined component faults. A brief evaluation of the two fault signature generation techniques is presented and the performance of the area error method is documented using the fault classification index (FCI) presented in Part I of the series. The final part of this work reports the application of the proposed approach for classification of an emulated fault transient in data from the prototype Pebble Bed Micro Model (PBMM) plant. Reference data values are calculated for the plant via a thermo-hydraulic simulation model of the MPS. The results show that the correspondence between the fault signatures, generated via experimental plant data and simulated reference values, are generally good. The work presented in the two part series, related to the classification of component faults in the MPS of different

  15. Modeling the evaporation of sessile multi-component droplets

    NARCIS (Netherlands)

    Diddens, C.; Kuerten, Johannes G.M.; van der Geld, C.W.M.; Wijshoff, H.M.A.

    2017-01-01

    We extended a mathematical model for the drying of sessile droplets, based on the lubrication approximation, to binary mixture droplets. This extension is relevant for e.g. inkjet printing applications, where inks consisting of several components are used. The extension involves the generalization of

  16. Use of Annotations for Component and Framework Interoperability

    Science.gov (United States)

    David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.

    2009-12-01

    The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0 framework/model integration previously accomplished using API calls is now achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively such as implicit multithreading, and auto-documenting capabilities while achieving a significant reduction in the size of the model source code. Using a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to framework by the use of specific APIs and/or data types they can more easily be reused both within the framework as well as outside of it. To study the effectiveness of an annotation based framework approach with other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. In a next step, the PRMS model was implemented in OMS 3.0 and is currently being implemented for water supply forecasting in the
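
    The annotation idea translates naturally to other languages; the Python sketch below uses a decorator to attach declarative metadata to a model component, which a minimal "framework" can later discover without the component calling any framework API. The decorator name, metadata fields and water-balance component are hypothetical illustrations, not the actual OMS 3.0 annotations (which are Java).

```python
import inspect

def describe(**meta):
    """Attach descriptive metadata (role, documentation, units) to a component."""
    def wrap(obj):
        obj._component_meta = meta
        return obj
    return wrap

@describe(role="process", doc="Toy monthly water-balance step",
          units={"precip": "mm", "pet": "mm"})
def water_balance(storage, precip, pet, capacity=150.0):
    """Update soil storage and return (new_storage, runoff)."""
    filled = storage + precip - pet
    runoff = max(filled - capacity, 0.0)
    new_storage = min(max(filled, 0.0), capacity)
    return new_storage, runoff

# A minimal "framework" that discovers component metadata non-invasively:
# the component itself never imports or calls framework code
def document(component):
    meta = getattr(component, "_component_meta", {})
    sig = inspect.signature(component)
    print(f"{component.__name__}{sig}: {meta.get('doc', '')} units={meta.get('units')}")

document(water_balance)
print(water_balance(storage=100.0, precip=80.0, pet=20.0))
```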

  17. Symbolic Solution Approach to Wind Turbine based on Doubly Fed Induction Generator Model

    DEFF Research Database (Denmark)

    Cañas–Carretón, M.; Gómez–Lázaro, E.; Martín–Martínez, S.

    2015-01-01

    This paper describes an alternative approach based on symbolic computations to simulate wind turbines equipped with a Doubly–Fed Induction Generator (DFIG). The actuator disk theory is used to represent the aerodynamic part, and the one-mass model simulates the mechanical part. The 5th–order induction generator is selected to model the electric machine, this approach being suitable to estimate the DFIG performance under transient conditions. The corresponding non–linear integro-differential equation system has been reduced to a linear state-space system by using an ad-hoc local linearization.
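
    The symbolic local-linearization step can be sketched with SymPy on a small stand-in system (not the 5th-order DFIG equations): the Jacobians of the nonlinear state equations with respect to the states and inputs, evaluated at an equilibrium, give the linear state-space matrices A and B.

```python
import sympy as sp

x1, x2, u = sp.symbols("x1 x2 u")

# Nonlinear state equations dx/dt = f(x, u) (a small invented system)
f = sp.Matrix([-2 * x1 + x2 * u,
               x1 - x2 ** 2])
states, inputs = sp.Matrix([x1, x2]), sp.Matrix([u])

# Chosen operating point: an equilibrium of f for u = 2
op = {x1: 1, x2: 1, u: 2}
assert f.subs(op) == sp.zeros(2, 1)

# Linear state-space matrices A = df/dx, B = df/du at the operating point
A = f.jacobian(states).subs(op)
B = f.jacobian(inputs).subs(op)
print("A =", A)   # Matrix([[-2, 2], [1, -2]])
print("B =", B)   # Matrix([[1], [0]])
```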

  18. Using the Dynamic Model to develop an evidence-based and theory-driven approach to school improvement

    NARCIS (Netherlands)

    Creemers, B.P.M.; Kyriakides, L.

    2010-01-01

    This paper refers to a dynamic perspective of educational effectiveness and improvement stressing the importance of using an evidence-based and theory-driven approach. Specifically, an approach to school improvement based on the dynamic model of educational effectiveness is offered. The recommended

  19. Calculations of atomic magnetic nuclear shielding constants based on the two-component normalized elimination of the small component method

    Science.gov (United States)

    Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter

    2017-04-01

    A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit coupling including Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed where each term of the diamagnetic and paramagnetic contribution to the isotropic shielding constant σiso is expressed in terms of analytical energy derivatives with regard to the magnetic field B and the nuclear magnetic moment μ. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σiso values of 13 atoms with a closed shell ground state reveal a deviation from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling with (2M)³ (M: number of basis functions).

  20. Day-Ahead Crude Oil Price Forecasting Using a Novel Morphological Component Analysis Based Model

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2014-01-01

    As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that the multiscale data characteristics in the price movement are another important stylized fact. The incorporation of a mixture of data characteristics in the time scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economically viable interpretations.

  1. Uniframe: A Unified Framework for Developing Service-Oriented, Component-Based Distributed Software Systems

    National Research Council Canada - National Science Library

    Raje, Rajeev R; Olson, Andrew M; Bryant, Barrett R; Burt, Carol C; Auguston, Makhail

    2005-01-01

    .... It describes how this approach employs a unifying framework for specifying such systems to unite the concepts of service-oriented architectures, a component-based software engineering methodology...

  2. Taxonomy-Based Approaches to Quality Assurance of Ontologies

    Directory of Open Access Journals (Sweden)

    Michael Halper

    2017-01-01

    Ontologies are important components of health information management systems. As such, the quality of their content is of paramount importance. It has been proven to be practical to develop quality assurance (QA) methodologies based on automated identification of sets of concepts expected to have a higher likelihood of errors. Four kinds of such sets (called QA-sets), organized around the themes of complex and uncommonly modeled concepts, are introduced. A survey of different methodologies based on these QA-sets and the results of applying them to various ontologies are presented. Overall, following these approaches leads to higher QA yields and better utilization of QA personnel. The formulation of additional QA-set methodologies will further enhance the suite of available ontology QA tools.

  3. A component-based system for agricultural drought monitoring by remote sensing.

    Science.gov (United States)

    Dong, Heng; Li, Jun; Yuan, Yanbin; You, Lin; Chen, Chao

    2017-01-01

    In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, the drought-related software and platform development lag behind the theoretical research. The current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on multi-dimensional feature space. In view of the above problems, this paper aims at fixing this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on the Component Object Model (COM) to ensure the flexibility and extendibility of modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support for agricultural drought monitoring.

  4. Implementation of a VLSI Level Zero Processing system utilizing the functional component approach

    Science.gov (United States)

    Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.

    1991-01-01

    A high rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises capabilities of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.

  5. Complementary Constraints on Component-Based Multiphase Flow Problems: Should They Be Implemented Locally or Globally?

    Science.gov (United States)

    Shao, H.; Huang, Y.; Kolditz, O.

    2015-12-01

    Multiphase flow problems are numerically difficult to solve, as they often contain nonlinear phase transition phenomena. A conventional technique is to introduce complementarity constraints, under which fluid properties such as liquid saturations are confined to a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities, which can then be handled numerically by optimization algorithms. In this work, two different approaches utilizing the complementarity constraints based on the persistent primary variables formulation [4] are implemented and investigated. The first approach, proposed by Marchand et al. [1], uses "local complementarity constraints", i.e. it couples the constraints with the local constitutive equations. The second approach [2],[3], namely the "global complementarity constraints", applies the constraints globally together with the mass conservation equation. We will discuss how these two approaches are applied to solve the non-isothermal componential multiphase flow problem with phase change phenomena. Several benchmarks will be presented for investigating the overall numerical performance of the different approaches, and their advantages and disadvantages will also be summarized. References: [1] E. Marchand, T. Mueller and P. Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model, Computational Geosciences 17(2): 431-442, (2013). [2] A. Lauser, C. Hager, R. Helmig, B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Water Resour., 34, (2011), 957-966. [3] J. Jaffré and A. Sboui. Henry's Law and Gas Phase Disappearance. Transp. Porous Media. 82, (2010), 521-526. [4] A. Bourgeat, M. Jurak and F. Smaï. Two-phase partially miscible flow and transport modeling in
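    To make the idea of a complementarity constraint concrete, the following is a minimal scalar toy, not the formulation of [1]-[3]: a phase-disappearance condition is rewritten with the Fischer-Burmeister NCP-function and solved with a standard root finder. The mass-balance relation, solubility limit, and constants are invented.

```python
# Toy phase-disappearance complementarity solved via the Fischer-Burmeister
# NCP-function. Physical reading: a gas phase with saturation Sg may only
# exist (Sg > 0) when the liquid is fully saturated with dissolved gas, i.e.
# when the undersaturation u(Sg) = X_sol - X_dis(Sg) is zero:
#     Sg >= 0,   u(Sg) >= 0,   Sg * u(Sg) = 0.
import numpy as np
from scipy.optimize import fsolve

X_SOL = 0.03   # solubility limit of gas in the liquid (toy units)
K = 0.5        # how fast dissolved mass drops as gas escapes to its own phase

def undersaturation(sg, total_gas_mass):
    x_dis = total_gas_mass - K * sg      # toy linear mass balance
    return X_SOL - x_dis

def fischer_burmeister(a, b):
    # phi(a, b) = 0  <=>  a >= 0, b >= 0, a * b = 0
    return a + b - np.sqrt(a * a + b * b)

def solve_gas_saturation(total_gas_mass):
    residual = lambda sg: fischer_burmeister(sg, undersaturation(sg, total_gas_mass))
    return float(fsolve(residual, x0=0.1)[0])

for m in (0.02, 0.05):   # under-saturated and over-saturated totals
    sg = solve_gas_saturation(m)
    print(f"total gas mass {m:.3f} -> gas saturation {sg:.4f}")
```

    With the smaller total mass the solver returns Sg = 0 (the gas phase vanishes and only the non-negativity bound is active); with the larger one it returns the positive saturation at which the liquid is exactly saturated.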

  6. ECOMOD - An ecological approach to radioecological modelling

    International Nuclear Information System (INIS)

    Sazykina, Tatiana G.

    2000-01-01

    A unified methodology is proposed to simulate the dynamic processes of radionuclide migration in aquatic food chains in parallel with their stable analogue elements. The distinguishing feature of the unified radioecological/ecological approach is the description of radionuclide migration along with dynamic equations for the ecosystem. The ability of the methodology to predict the results of radioecological experiments is demonstrated by an example of radionuclide (iron group) accumulation by a laboratory culture of the algae Platymonas viridis. Based on the unified methodology, the 'ECOMOD' radioecological model was developed to simulate dynamic radioecological processes in aquatic ecosystems. It comprises three basic modules, which are operated as a set of inter-related programs. The 'ECOSYSTEM' module solves non-linear ecological equations, describing the biomass dynamics of essential ecosystem components. The 'RADIONUCLIDE DISTRIBUTION' module calculates the radionuclide distribution in abiotic and biotic components of the aquatic ecosystem. The 'DOSE ASSESSMENT' module calculates doses to aquatic biota and doses to man from aquatic food chains. The application of the ECOMOD model to reconstruct the radionuclide distribution in the Chernobyl Cooling Pond ecosystem in the early period after the accident shows good agreement with observations

  7. The development of a healing model of care for an Indigenous drug and alcohol residential rehabilitation service: a community-based participatory research approach.

    Science.gov (United States)

    Munro, Alice; Shakeshaft, Anthony; Clifford, Anton

    2017-12-04

    Given the well-established evidence of disproportionately high rates of substance-related morbidity and mortality after release from incarceration for Indigenous Australians, access to comprehensive, effective and culturally safe residential rehabilitation treatment will likely assist in reducing recidivism to both prison and substance dependence for this population. In the absence of methodologically rigorous evidence, the delivery of Indigenous drug and alcohol residential rehabilitation services vary widely, and divergent views exist regarding the appropriateness and efficacy of different potential treatment components. One way to increase the methodological quality of evaluations of Indigenous residential rehabilitation services is to develop partnerships with researchers to better align models of care with the client's, and the community's, needs. An emerging research paradigm to guide the development of high quality evidence through a number of sequential steps that equitably involves services, stakeholders and researchers is community-based participatory research (CBPR). The purpose of this study is to articulate an Indigenous drug and alcohol residential rehabilitation service model of care, developed in collaboration between clients, service providers and researchers using a CBPR approach. This research adopted a mixed methods CBPR approach to triangulate collected data to inform the development of a model of care for a remote Indigenous drug and alcohol residential rehabilitation service. Four iterative CBPR steps of research activity were recorded during the 3-year research partnership. As a direct outcome of the CBPR framework, the service and researchers co-designed a Healing Model of Care that comprises six core treatment components, three core organisational components and is articulated in two program logics. The program logics were designed to specifically align each component and outcome with the mechanism of change for the client or organisation

  8. A new multi-objective optimization model for preventive maintenance and replacement scheduling of multi-component systems

    Science.gov (United States)

    Moghaddam, Kamran S.; Usher, John S.

    2011-07-01

    In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally-sized periods in which three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system while minimizing the total cost and maximizing overall system reliability simultaneously over the planning horizon. Because of the complex, combinatorial, and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The Pareto optimal solutions that provide good tradeoffs between the total cost and the overall reliability of the system can be obtained by the solution approach. Such a modeling approach should be useful for maintenance planners and engineers tasked with the problem of developing recommended maintenance plans for complex systems of components.
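    The sketch below only illustrates the plan encoding (do nothing / maintain / replace per component per period) and the cost-versus-reliability trade-off; it uses a plain random search with a non-dominated filter rather than the paper's genetic algorithm or simulated annealing, and all cost and Weibull parameters are invented.

```python
# Hedged sketch of the planning encoding and the cost/reliability objectives:
# a random search over plans keeps the non-dominated (Pareto) set.
import numpy as np

rng = np.random.default_rng(7)
N_COMP, N_PERIODS = 4, 12                 # components, equally sized periods
C_MAINT, C_REPL = 10.0, 40.0              # action costs (invented)
BETA, ETA = 2.0, 20.0                     # Weibull shape / scale (periods)
MAINT_AGE_FACTOR = 0.5                    # maintenance rejuvenates effective age

def evaluate(plan):
    """plan[c, t] in {0: do nothing, 1: maintain, 2: replace}."""
    cost, sys_rel = 0.0, 1.0
    age = np.zeros(N_COMP)
    for t in range(N_PERIODS):
        for c in range(N_COMP):
            if plan[c, t] == 1:
                cost += C_MAINT
                age[c] *= MAINT_AGE_FACTOR
            elif plan[c, t] == 2:
                cost += C_REPL
                age[c] = 0.0
            age[c] += 1.0
        # conditional Weibull survival of the series system over this period
        sys_rel *= np.prod(np.exp(-(age / ETA) ** BETA + ((age - 1) / ETA) ** BETA))
    return cost, sys_rel

def pareto_front(points):
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] >= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return sorted(set(front))

candidates = [evaluate(rng.integers(0, 3, size=(N_COMP, N_PERIODS))) for _ in range(2000)]
for cost, rel in pareto_front(candidates)[:5]:
    print(f"cost = {cost:6.1f}   system reliability = {rel:.3f}")
```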

  9. Trajectory modeling of gestational weight: A functional principal component analysis approach.

    Directory of Open Access Journals (Sweden)

    Menglu Che

    Full Text Available Suboptimal gestational weight gain (GWG), which is linked to increased risk of adverse outcomes for a pregnant woman and her infant, is prevalent. In the study of a large cohort of Canadian pregnant women, our goals are to estimate the individual weight growth trajectory using sparsely collected bodyweight data, and to identify the factors affecting the weight change during pregnancy, such as prepregnancy body mass index (BMI), dietary intakes and physical activity. The first goal was achieved through functional principal component analysis (FPCA) by conditional expectation. For the second goal, we used linear regression with the total weight gain as the response variable. The trajectory modeling through FPCA had a significantly smaller root mean square error (RMSE) and improved adaptability than the classic nonlinear mixed-effect models, demonstrating a novel tool that can be used to facilitate real time monitoring and interventions of GWG. Our regression analysis showed that prepregnancy BMI had a high predictive value for the weight changes during pregnancy, which agrees with the published weight gain guideline.
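    The study estimates FPCA by conditional expectation for sparse visits; the minimal sketch below instead shows the basic FPCA machinery (mean function, covariance, eigenfunctions, scores) on densely simulated trajectories, with all curves and parameters invented.

```python
# Minimal FPCA sketch on simulated dense gestational-weight-like trajectories.
# The study uses FPCA by conditional expectation (PACE) for sparse data; dense
# curves are used here only to keep the eigen-decomposition step easy to see.
import numpy as np

rng = np.random.default_rng(3)
weeks = np.linspace(8, 40, 33)                      # gestational weeks grid
n = 200

mean_curve = 60 + 0.45 * (weeks - 8)                # kg, invented mean trajectory
scores_true = rng.normal(0, [4.0, 1.0], size=(n, 2))
phi1 = np.ones_like(weeks) / np.sqrt(len(weeks))             # overall level shift
phi2 = weeks - weeks.mean(); phi2 /= np.linalg.norm(phi2)    # slope-like mode
curves = mean_curve + scores_true[:, :1] * phi1 + scores_true[:, 1:] * phi2
curves += rng.normal(0, 0.3, curves.shape)          # measurement noise

# FPCA: center, estimate the covariance, take its leading eigenfunctions.
mu_hat = curves.mean(axis=0)
centered = curves - mu_hat
cov = centered.T @ centered / (n - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals[:2] / eigvals.sum()
scores_hat = centered @ eigvecs[:, :2]              # individual FPC scores
print(f"variance explained by first two FPCs: {explained.round(3)}")
# scores_hat (n x 2) could then enter a regression against prepregnancy BMI,
# dietary intake, and physical activity, in the spirit of the study's second goal.
```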

  10. Applying of component system development in object methodology

    Directory of Open Access Journals (Sweden)

    Milan Mišovič

    2013-01-01

    software system and referred to as a software alliance. Both of the cited publications develop in depth the issues surrounding SWC/SWA, such as creating copies of components (cloning), the creation and destruction of components at run-time (dynamic reconfiguration), the cooperation of autonomous components, and the programmable management of component interfaces depending on internal component functionality and customer requirements (functionality, security, versioning). Even today, numerous SWC/SWA systems exist with highly developed architectures that accommodate the vast majority of these requirements. On the other hand, development practice for component-based systems with a dynamic architecture (i.e. an architecture with dynamic reconfiguration), and finally with a mobile architecture (i.e. an architecture with dynamic component mobility), confirms the inadequacy of the design methods contained in UML 2.0, as demonstrated in particular by the dissertation thesis (Rych, Weis, 2008). Software engineering currently has two different approaches to SWC/SWA systems. The first approach is known as component-based development (CBD). According to (Szyper, 2002), this is a collection of CBD methodologies that focus heavily on the construction and re-usability of software components within the architecture. Although CBD is not strongly theoretical, it is classified within the general evolution of the Software Development Process (SDP), see (Sommer, 2010), as one of its two dominant directions. From a structural point of view, a software system consists of self-contained, interoperable architectural units – components based on well-defined interfaces. Classical procedural and object-oriented methodologies make little use of component meta-models, on which the target component systems are then formed. Component meta-models describe the syntax, semantics of

  11. Modelling and simulation of electrical energy systems through a complex systems approach using agent-based models

    Energy Technology Data Exchange (ETDEWEB)

    Kremers, Enrique

    2013-10-01

    Complexity science aims to better understand the processes of both natural and man-made systems which are composed of many interacting entities at different scales. A disaggregated approach is proposed for simulating electricity systems, by using agent-based models coupled to continuous ones. The approach can help in acquiring a better understanding of the operation of the system itself, e.g. on emergent phenomena or scale effects; as well as in the improvement and design of future smart grids.
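    The thesis couples agent-based and continuous models of the grid; the toy below only illustrates the disaggregated, agent-based side: household agents switch loads on and off with time-of-day dependent probabilities and the system load is their aggregate. All rates and load values are invented and the coupling to continuous grid models is not reproduced.

```python
# Toy agent-based demand model: each household agent switches a large load on
# and off stochastically; the system load curve is the aggregate over agents.
import random

random.seed(42)

class Household:
    BASE_LOAD_KW = 0.15                       # fridge, standby devices

    def __init__(self):
        self.heating_on = False

    def step(self, hour):
        # Higher chance of switching heating/cooking on in morning and evening.
        peak = hour in (7, 8, 18, 19, 20)
        p_on = 0.45 if peak else 0.08
        p_off = 0.2 if peak else 0.5
        if self.heating_on:
            self.heating_on = random.random() > p_off
        else:
            self.heating_on = random.random() < p_on
        return self.BASE_LOAD_KW + (2.0 if self.heating_on else 0.0)

agents = [Household() for _ in range(1000)]
load_curve = [sum(agent.step(hour) for agent in agents) for hour in range(24)]

for hour, load in enumerate(load_curve):
    bar = "#" * int(load / 50)
    print(f"{hour:02d}:00  {load:7.1f} kW  {bar}")
```

    Even this crude rule set produces emergent morning and evening peaks in the aggregate curve, which is the kind of bottom-up effect the disaggregated approach is meant to expose.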

  12. Building spatio-temporal database model based on ontological approach using relational database environment

    International Nuclear Information System (INIS)

    Mahmood, N.; Burney, S.M.A.

    2017-01-01

    Everything in this world is bounded by space and time. Our daily activities are closely linked to other objects in our vicinity; our current location, the time (past, present and future) and the events through which we move as objects also affect our activities. Ontology development and its integration with databases are vital for a true understanding of complex systems involving both spatial and temporal dimensions. In this paper we propose a conceptual framework for building a spatio-temporal database model based on an ontological approach. We use the relational data model for modelling spatio-temporal data content and present our methodology with spatio-temporal ontological aspects and their transformation into a spatio-temporal database model. We illustrate the implementation of our conceptual model through a case study of a cultivated land parcel used for agriculture, to exhibit the spatio-temporal behaviour of agricultural land and related entities. Moreover, it provides a generic approach for designing spatio-temporal databases based on ontology. The proposed model is capable of capturing the ontological and, to some extent, epistemological commitments, and of building a spatio-temporal ontology and transforming it into a spatio-temporal data model. Finally, we highlight existing and future research challenges. (author)

  13. Proceedings International Workshop on Formal Engineering approaches to Software Components and Architectures

    OpenAIRE

    Kofroň, Jan; Tumova, Jana

    2017-01-01

    These are the proceedings of the 14th International Workshop on Formal Engineering approaches to Software Components and Architectures (FESCA). The workshop was held on April 22, 2017 in Uppsala (Sweden) as a satellite event to the European Joint Conference on Theory and Practice of Software (ETAPS'17). The aim of the FESCA workshop is to bring together junior researchers from formal methods, software engineering, and industry interested in the development and application of formal modelling ...

  14. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Full Text Available Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes the model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.

  15. A Graph-Based Approach for 3D Building Model Reconstruction from Airborne LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2017-01-01

    Full Text Available 3D building model reconstruction is of great importance for environmental and urban applications. Airborne light detection and ranging (LiDAR) is a very useful data source for acquiring detailed geometric and topological information of building objects. In this study, we employed a graph-based method based on hierarchical structure analysis of building contours derived from LiDAR data to reconstruct urban building models. The proposed approach first uses a graph theory-based localized contour tree method to represent the topological structure of buildings, then separates the buildings into different parts by analyzing their topological relationships, and finally reconstructs the building model by integrating all the individual models established through the bipartite graph matching process. Our approach provides a more complete topological and geometrical description of building contours than existing approaches. We evaluated the proposed method by applying it to the Lujiazui region in Shanghai, China, a complex and large urban scene with various types of buildings. The results revealed that complex buildings could be reconstructed successfully with a mean modeling error of 0.32 m. Our proposed method offers a promising solution for 3D building model reconstruction from airborne LiDAR point clouds.

  16. A Model-free Approach to Fault Detection of Continuous-time Systems Based on Time Domain Data

    Institute of Scientific and Technical Information of China (English)

    Ping Zhang; Steven X. Ding

    2007-01-01

    In this paper, a model-free approach is presented to design an observer-based fault detection system of linear continuous-time systems based on input and output data in the time domain. The core of the approach is to directly identify parameters of the observer-based residual generator based on a numerically reliable data equation obtained by filtering and sampling the input and output signals.

  17. Economic-based design of engineering systems with degrading components using probabilistic loss of quality

    International Nuclear Information System (INIS)

    Son, Young Kap; Savage, Gordon J.; Chang, Seog Weon

    2007-01-01

    The allocation of means and tolerances to provide quality, functional reliability and performance reliability in engineering systems is a challenging problem. Traditional measures to help select the best means and tolerances include mean time to failure and its variance; however, they have some shortcomings. In this paper, a monetary measure based on present worth is invoked as a more inclusive metric. We consider the sum of the production cost and the expected loss of quality cost over a planned horizon at the customer's discount rates. Key to the approach is a probabilistic loss of quality cost that incorporates the cumulative distribution function that arises from time-variant distributions of system performance measures due to degrading components. The proposed design approach investigates both degradation and uncertainty in components. Moreover, it aims to obviate problems with current Taguchi loss-function-based design approaches. Case studies show the practicality and promise of the approach
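    As a purely numerical illustration of the metric, the sketch below lets a performance measure drift and spread over time (degrading components), charges an expected loss whenever it falls outside the specification limits, and discounts those losses to present worth alongside the production cost. The loss structure, spec limits, and rates are all invented and do not reproduce the paper's formulation.

```python
# Hedged sketch of a present-worth quality-loss metric: a performance measure
# degrades (mean drifts, spread grows), the yearly expected loss is the
# out-of-spec probability times a loss cost, and losses are discounted back
# to present worth and added to the production cost.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

PRODUCTION_COST = 120.0     # per unit
LOSS_COST = 300.0           # charged when the unit is out of spec in a year
SPEC_LO, SPEC_HI = 9.0, 11.0
DISCOUNT = 0.07             # customer's annual discount rate
HORIZON = 10                # planned horizon, years

def expected_present_cost(mu0, sigma0, drift, spread_rate):
    total = PRODUCTION_COST
    for year in range(1, HORIZON + 1):
        mu = mu0 + drift * year                 # degrading mean performance
        sigma = sigma0 + spread_rate * year     # growing variability
        p_in_spec = norm_cdf((SPEC_HI - mu) / sigma) - norm_cdf((SPEC_LO - mu) / sigma)
        expected_loss = LOSS_COST * (1.0 - p_in_spec)
        total += expected_loss / (1.0 + DISCOUNT) ** year
    return total

# Compare two candidate designs (different nominal means / degradation rates).
for name, args in {"design A": (10.0, 0.25, -0.05, 0.02),
                   "design B": (10.2, 0.20, -0.09, 0.03)}.items():
    print(f"{name}: present-worth cost = {expected_present_cost(*args):.1f}")
```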

  18. e-Government Maturity Model Based on Systematic Review and Meta-Ethnography Approach

    Directory of Open Access Journals (Sweden)

    Darmawan Napitupulu

    2016-11-01

    Full Text Available Maturity models based on e-Government portals have been developed by a number of researchers, both individually and institutionally, but they remain scattered across various journal and conference articles and differ from one another in focus, both in terms of stages and features. The aim of this research is to integrate a number of existing maturity models in order to build a generic maturity model for e-Government portals. The method used in this study is a systematic review with a meta-ethnography qualitative approach. Meta-ethnography, which is part of the systematic review method, is a technique for integrating data to obtain theories and concepts at a deeper and more thorough level of understanding. The result is an e-Government portal maturity model that consists of 7 (seven) stages, namely web presence, interaction, transaction, vertical integration, horizontal integration, full integration, and open participation. These seven stages are synthesized from 111 key concepts drawn from 25 studies of e-Government portal maturity models. The resulting maturity model is more comprehensive and generic because it integrates the models (best practices) that exist today.

  19. A Bayesian network based approach for integration of condition-based maintenance in strategic offshore wind farm O&M simulation models

    DEFF Research Database (Denmark)

    Nielsen, Jannie Sønderkær; Sørensen, John Dalsgaard; Sperstad, Iver Bakken

    2018-01-01

    In the overall decision problem regarding optimization of operation and maintenance (O&M) for offshore wind farms, there are many approaches for solving parts of the overall decision problem. Simulation-based strategy models accurately capture system effects related to logistics, but model...... to generate failures and CBM tasks. An example considering CBM for wind turbine blades demonstrates the feasibility of the approach....

  20. The development of component-based information systems

    CERN Document Server

    Cesare, Sergio de; Macredie, Robert

    2015-01-01

    This work provides a comprehensive overview of research and practical issues relating to component-based development information systems (CBIS). Spanning the organizational, developmental, and technical aspects of the subject, the original research included here provides fresh insights into successful CBIS technology and application. Part I covers component-based development methodologies and system architectures. Part II analyzes different aspects of managing component-based development. Part III investigates component-based development versus commercial off-the-shelf products (COTS), includi

  1. Ontologies to Support RFID-Based Link between Virtual Models and Construction Components

    DEFF Research Database (Denmark)

    Sørensen, Kristian Birch; Christiansson, Per; Svidt, Kjeld

    2010-01-01

    the virtual models and the physical components in the construction process can improve the information handling and sharing in construction and building operation management. Such a link can be created by means of Radio Frequency Identification (RFID) technology. Ontologies play an important role...

  2. Genetic programming based models in plant tissue culture: An addendum to traditional statistical approach.

    Science.gov (United States)

    Mridula, Meenu R; Nair, Ashalatha S; Kumar, K Satheesh

    2018-02-01

    In this paper, we compared the efficacy of observation based modeling approach using a genetic algorithm with the regular statistical analysis as an alternative methodology in plant research. Preliminary experimental data on in vitro rooting was taken for this study with an aim to understand the effect of charcoal and naphthalene acetic acid (NAA) on successful rooting and also to optimize the two variables for maximum result. Observation-based modelling, as well as traditional approach, could identify NAA as a critical factor in rooting of the plantlets under the experimental conditions employed. Symbolic regression analysis using the software deployed here optimised the treatments studied and was successful in identifying the complex non-linear interaction among the variables, with minimalistic preliminary data. The presence of charcoal in the culture medium has a significant impact on root generation by reducing basal callus mass formation. Such an approach is advantageous for establishing in vitro culture protocols as these models will have significant potential for saving time and expenditure in plant tissue culture laboratories, and it further reduces the need for specialised background.

  3. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    Science.gov (United States)

    Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
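    The article's own filter, component model, and data are not given in the abstract; the sketch below only illustrates the general pattern it discusses: a Kalman filter tracks a degradation indicator, the posterior state is propagated to a failure threshold, and the state uncertainty induces a distribution over remaining useful life (RUL). All matrices, noise levels, and the threshold are invented.

```python
# Hedged sketch: a 1D constant-degradation-rate Kalman filter tracks a health
# indicator; sampling the posterior and propagating each sample to a failure
# threshold illustrates an estimated RUL distribution.
import numpy as np

rng = np.random.default_rng(5)

dt, steps = 1.0, 60
F = np.array([[1.0, dt], [0.0, 1.0]])          # state: [health, degradation rate]
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-5])
R = np.array([[0.05 ** 2]])
THRESHOLD = 0.5                                 # failure when health < 0.5

true = np.array([1.0, -0.006])                  # simulated ground truth
x, P = np.array([1.0, 0.0]), np.diag([0.1, 1e-3])
for _ in range(steps):
    true = F @ true + rng.multivariate_normal([0, 0], Q)
    z = H @ true + rng.normal(0, 0.05, 1)       # noisy measurement
    x = F @ x                                   # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                         # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

# Sample the posterior and propagate each sample to the threshold crossing.
samples = rng.multivariate_normal(x, P, size=5000)
rul = np.full(len(samples), np.inf)
declining = samples[:, 1] < 0
rul[declining] = (THRESHOLD - samples[declining, 0]) / samples[declining, 1]
rul = rul[np.isfinite(rul) & (rul > 0)]
print(f"estimated state: health={x[0]:.3f}, rate={x[1]:.5f}")
print(f"RUL: median={np.median(rul):.1f} steps, "
      f"90% interval=({np.percentile(rul, 5):.1f}, {np.percentile(rul, 95):.1f})")
```

    The spread of the sampled crossing times is the estimated RUL distribution the article distinguishes from the true (and unknowable in advance) RUL distribution.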

  4. Computational neuroanatomy: ontology-based representation of neural components and connectivity.

    Science.gov (United States)

    Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron

    2009-02-05

    A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.

  5. Experiment planning using high-level component models at W7-X

    International Nuclear Information System (INIS)

    Lewerentz, Marc; Spring, Anett; Bluhm, Torsten; Heimann, Peter; Hennig, Christine; Kühner, Georg; Kroiss, Hugo; Krom, Johannes G.; Laqua, Heike; Maier, Josef; Riemann, Heike; Schacht, Jörg; Werner, Andreas; Zilker, Manfred

    2012-01-01

    Highlights: ► Introduction of models for an abstract description of fusion experiments. ► Component models support creating feasible experiment programs at planning time. ► Component models contain knowledge about physical and technical constraints. ► Generated views on models allow to present crucial information. - Abstract: The superconducting stellarator Wendelstein 7-X (W7-X) is a fusion device, which is capable of steady state operation. Furthermore W7-X is a very complex technical system. To cope with these requirements a modular and strongly hierarchical component-based control and data acquisition system has been designed. The behavior of W7-X is characterized by thousands of technical parameters of the participating components. The intended sequential change of those parameters during an experiment is defined in an experiment program. Planning such an experiment program is a crucial and complex task. To reduce the complexity an abstract, more physics-oriented high-level layer has been introduced earlier. The so-called high-level (physics) parameters are used to encapsulate technical details. This contribution will focus on the extension of this layer to a high-level component model. It completely describes the behavior of a component for a certain period of time. It allows not only defining simple value ranges but also complex dependencies between physics parameters. This can be: dependencies within components, dependencies between components or temporal dependencies. Component models can now be analyzed to generate various views of an experiment. A first implementation of such an analyze process is already finished. A graphical preview of a planned discharge can be generated from a chronological sequence of component models. This allows physicists to survey complex planned experiment programs at a glance.

  6. Model-Based Systems Engineering Approach to Managing Mass Margin

    Science.gov (United States)

    Chung, Seung H.; Bayer, Todd J.; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Christopher; Lam, Doris

    2012-01-01

    When designing a flight system from concept through implementation, one of the fundamental systems engineering tasks is managing the mass margin and a mass equipment list (MEL) of the flight system. While generating a MEL and computing a mass margin is conceptually a trivial task, maintaining consistent and correct MELs and mass margins can be challenging due to the current practices of maintaining duplicate information in various forms, such as diagrams and tables, and in various media, such as files and emails. We have overcome this challenge through a model-based systems engineering (MBSE) approach within which we allow only a single-source-of-truth. In this paper we describe the modeling patterns used to capture the single-source-of-truth and the views that have been developed for the Europa Habitability Mission (EHM) project, a mission concept study, at the Jet Propulsion Laboratory (JPL).
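    The arithmetic behind a MEL is simple, which is the abstract's point: the hard part is keeping one consistent source. The sketch below shows the bookkeeping only (current best estimate plus contingency giving the maximum expected value, and the margin against an allocation); in the MBSE approach these numbers would be derived from a single system model rather than a standalone table, and all components and masses here are invented.

```python
# Minimal mass-margin bookkeeping of the kind a MEL encodes: CBE mass plus a
# contingency fraction gives the maximum expected value (MEV); margin is what
# remains against the allocation.
mel = [
    # (component, current best estimate [kg], contingency fraction)
    ("structure",        48.0, 0.15),
    ("propulsion",       22.5, 0.20),
    ("avionics",         13.2, 0.25),
    ("instrument suite", 17.8, 0.30),
    ("harness",           6.4, 0.50),
]
ALLOCATION_KG = 130.0

cbe_total = sum(cbe for _, cbe, _ in mel)
mev_total = sum(cbe * (1.0 + cont) for _, cbe, cont in mel)
margin = ALLOCATION_KG - mev_total

print(f"{'component':18s} {'CBE':>7s} {'MEV':>7s}")
for name, cbe, cont in mel:
    print(f"{name:18s} {cbe:7.1f} {cbe * (1 + cont):7.1f}")
print(f"{'TOTAL':18s} {cbe_total:7.1f} {mev_total:7.1f}")
print(f"margin vs allocation: {margin:+.1f} kg ({margin / ALLOCATION_KG:.1%})")
```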

  7. Towards a resource-based habitat approach for spatial modelling of vector-borne disease risks.

    Science.gov (United States)

    Hartemink, Nienke; Vanwambeke, Sophie O; Purse, Bethan V; Gilbert, Marius; Van Dyck, Hans

    2015-11-01

    Given the veterinary and public health impact of vector-borne diseases, there is a clear need to assess the suitability of landscapes for the emergence and spread of these diseases. Current approaches for predicting disease risks neglect key features of the landscape as components of the functional habitat of vectors or hosts, and hence of the pathogen. Empirical-statistical methods do not explicitly incorporate biological mechanisms, whereas current mechanistic models are rarely spatially explicit; both methods ignore the way animals use the landscape (i.e. movement ecology). We argue that applying a functional concept for habitat, i.e. the resource-based habitat concept (RBHC), can solve these issues. The RBHC offers a framework to identify systematically the different ecological resources that are necessary for the completion of the transmission cycle and to relate these resources to (combinations of) landscape features and other environmental factors. The potential of the RBHC as a framework for identifying suitable habitats for vector-borne pathogens is explored and illustrated with the case of bluetongue virus, a midge-transmitted virus affecting ruminants. The concept facilitates the study of functional habitats of the interacting species (vectors as well as hosts) and provides new insight into spatial and temporal variation in transmission opportunities and exposure that ultimately determine disease risks. It may help to identify knowledge gaps and control options arising from changes in the spatial configuration of key resources across the landscape. The RBHC framework may act as a bridge between existing mechanistic and statistical modelling approaches. © 2014 The Authors. Biological Reviews published by John Wiley & Sons Ltd on behalf of Cambridge Philosophical Society.

  8. The Effect of Task Duration on Event-Based Prospective Memory: A Multinomial Modeling Approach

    Directory of Open Access Journals (Sweden)

    Hongxia Zhang

    2017-11-01

    Full Text Available Remembering to perform an action when a specific event occurs is referred to as Event-Based Prospective Memory (EBPM). This study investigated how EBPM performance is affected by task duration by having university students (n = 223) perform an EBPM task that was embedded within an ongoing computer-based color-matching task. For this experiment, we separated the overall task’s duration into the filler task duration and the ongoing task duration. The filler task duration is the length of time between the intention and the beginning of the ongoing task, and the ongoing task duration is the length of time between the beginning of the ongoing task and the appearance of the first Prospective Memory (PM) cue. The filler task duration and ongoing task duration were further divided into three levels: 3, 6, and 9 min. Two factors were then orthogonally manipulated between-subjects using a multinomial processing tree model to separate the effects of different task durations on the two EBPM components. A mediation model was then created to verify whether task duration influences EBPM via self-reminding or discrimination. The results reveal three points. (1) Lengthening the duration of ongoing tasks had a negative effect on EBPM performance while lengthening the duration of the filler task had no significant effect on it. (2) As the filler task was lengthened, both the prospective and retrospective components show a decreasing and then increasing trend. Also, when the ongoing task duration was lengthened, the prospective component decreased while the retrospective component significantly increased. (3) The mediating effect of discrimination between the task duration and EBPM performance was significant. We concluded that different task durations influence EBPM performance through different components with discrimination being the mediator between task duration and EBPM performance.

  9. Towards a traceable clinical guidelines application. A model-driven approach.

    Science.gov (United States)

    Domínguez, E; Pérez, B; Zapata, M

    2010-01-01

    The goal of this research is to provide an overall framework to enable model-based development of clinical guideline-based decision support systems (GBDSSs). The automatically generated GBDSSs are aimed at providing guided support to the physician during the application of guidelines and automatically storing guideline application data for traceability purposes. The development process of a GBDSS for a guideline is based on model-driven development (MDD) techniques which allow us to carry out such a process automatically, making development more agile and saving on human resource costs. We use UML Statecharts to represent the dynamics of guidelines and, based on this model, we use a MDD-based tool chain to generate the guideline-dependent components of each GBDSS in an automatic way. In particular, as for the traceability capabilities of each GBDSS, MDD techniques are combined with database schema mappings for metadata management in order to automatically generate the GBDSS-persistent component as one of the main contributions of this paper. The complete framework has been implemented as an Eclipse plug-in named GBDSSGenerator which, starting from the statechart representing a guideline, allows the development process to be carried out automatically by only selecting different menu options the plug-in provides. We have successfully validated our overall approach by generating the GBDSS for different types of clinical guidelines, even for laboratory guidelines. The proposed framework allows the development of clinical guideline-based decision support systems in an automatic way making this process more agile and saving on human resource costs.

  10. A component-based system for agricultural drought monitoring by remote sensing.

    Directory of Open Access Journals (Sweden)

    Heng Dong

    Full Text Available In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, the drought-related software and platform development lag behind the theoretical research. The current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially the models based on multi-dimensional feature space. In view of the above problems, this paper aims at fixing this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on Component Object Model (COM) to ensure the flexibility and extendibility of modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support for agricultural drought monitoring.

  11. Creep fatigue design of FBR components

    International Nuclear Information System (INIS)

    Bhoje, S.B.; Chellapandi, P.

    1997-01-01

    This paper deals with the characteristic features of the Fast Breeder Reactor (FBR) with reference to creep fatigue, the current creep fatigue design approach in compliance with the RCCMR (1987) design code, material data, effects of weldments and neutron irradiation, material constitutive models employed, structural analysis and further R and D required for achieving maturity in creep fatigue design of FBR components. For the analysis reported in this paper, material constitutive models developed based on ORNL (Oak Ridge National Laboratory) and Chaboche viscoplastic theories are employed to demonstrate the potential of FBR components for higher plant temperatures and/or longer life. The results are presented for the studies carried out towards life prediction of Prototype Fast Breeder Reactor (PFBR) components. (author). 24 refs, 8 figs, 5 tabs

  12. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
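    As a toy stand-in for the paper's workflow, the sketch below feeds Ising-like configurations (rather than Monte Carlo samples of the frustrated XY model) to PCA and checks that the leading component behaves like an order parameter separating ordered from disordered samples; the physics of the frustrated lattices is not reproduced.

```python
# Toy stand-in: feed spin configurations to PCA and check that the leading
# principal component acts like an order parameter (magnetization).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)
L, n_per_phase = 16, 200

# "Low temperature": mostly aligned spins with a few flips; sign chosen at random.
ordered = []
for _ in range(n_per_phase):
    s = np.full(L * L, rng.choice([-1, 1]), dtype=float)
    flip = rng.random(L * L) < 0.05
    s[flip] *= -1
    ordered.append(s)

# "High temperature": independent random spins.
disordered = [rng.choice([-1.0, 1.0], size=L * L) for _ in range(n_per_phase)]

X = np.vstack(ordered + disordered)
pc = PCA(n_components=2).fit_transform(X)

mag_ordered = np.abs(pc[:n_per_phase, 0]).mean()
mag_disordered = np.abs(pc[n_per_phase:, 0]).mean()
print(f"|PC1| ordered phase:    {mag_ordered:.1f}")
print(f"|PC1| disordered phase: {mag_disordered:.1f}")
# The first component essentially measures magnetization and separates the phases.
```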

  13. Map-Based Channel Model for Urban Macrocell Propagation Scenarios

    Directory of Open Access Journals (Sweden)

    Jose F. Monserrat

    2015-01-01

    Full Text Available The evolution of LTE towards 5G has started and different research projects and institutions are in the process of verifying new technology components through simulations. Coordination between groups is strongly recommended and, in this sense, a common definition of test cases and simulation models is needed. The scope of this paper is to present a realistic channel model for urban macrocell scenarios. This model is map-based and takes into account the layout of buildings situated in the area under study. A detailed description of the model is given together with a comparison with other widely used channel models. The benchmark includes a measurement campaign in which the proposed model is shown to be much closer to the actual behavior of a cellular system. Particular attention is given to the outdoor component of the model, since it is here where the proposed approach is showing main difference with other previous models.

  14. Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, T.J.; Long, K.S.; Sayre, J.A. [Sandia National Labs., Albuquerque, NM (United States); Hull, A.L. [Sandia National Labs., Livermore, CA (United States); Carey, D.A.; Sim, J.R.; Smith, M.G. [Allied-Signal Aerospace Co., Kansas City, MO (United States). Kansas City Div.

    1994-08-01

    The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.

  15. The n-component cubic model and flows: subgraph break-collapse method

    International Nuclear Information System (INIS)

    Essam, J.W.; Magalhaes, A.C.N. de.

    1988-01-01

    We generalise to the n-component cubic model the subgraph break-collapse method which we previously developed for the Potts model. The relations used are based on expressions which we recently derived for the Z(λ) model in terms of mod-λ flows. Our recursive algorithm is similar, for n = 2, to the break-collapse method for the Z(4) model proposed by Mariz and coworkers. It allows the exact calculation for the partition function and correlation functions for n-component cubic clusters with n as a variable, without the need to examine all of the spin configurations. (author) [pt

  16. Design of aseismic class components: measurement of frequency parameters and optimization of analytical models

    International Nuclear Information System (INIS)

    Panet, M.; Delmas, J.; Ballester, J.L.

    1993-04-01

    In each plant unit, there are about 250 earthquake-qualified safety related valves. Justifying their aseismic capacity has proved complex. The structures are so diversified that it is not easy for designers to determine a generic model. Generally speaking, the models tend to overestimate the resonance frequencies. An approach more representative of the actual structure of the component was consequently sought, on which qualification of technological options with respect to the safety authorities would be based, thereby optimizing vibrating table qualification test schedules. The paper describes application of the approximate spectral identification method from the OPTDIM system, which determines basic structure modal data to forecast the approximate eigenfrequencies of a sub-domain, materialized by the component. It is used for a posteriori justification of topworks in operating equipment (900 MWe series), with respect to the 33 Hz ≤ f condition, which guarantees zero amplification of seismic induced internal loads. In the seismic design context and supplementing the preliminary eigenfrequency studies, inverse method solution techniques are used to define the most representative model of the modal behaviour of an electrically controlled motor-operated valve. (authors). 6 figs., 6 tabs., 11 refs

  17. Components of Effective Cognitive-Behavioral Therapy for Pediatric Headache: A Mixed Methods Approach.

    Science.gov (United States)

    Law, Emily F; Beals-Erickson, Sarah E; Fisher, Emma; Lang, Emily A; Palermo, Tonya M

    2017-01-01

    Internet-delivered treatment has the potential to expand access to evidence-based cognitive-behavioral therapy (CBT) for pediatric headache, and has demonstrated efficacy in small trials for some youth with headache. We used a mixed methods approach to identify effective components of CBT for this population. In Study 1, component profile analysis identified common interventions delivered in published RCTs of effective CBT protocols for pediatric headache delivered face-to-face or via the Internet. We identified a core set of three treatment components that were common across face-to-face and Internet protocols: 1) headache education, 2) relaxation training, and 3) cognitive interventions. Biofeedback was identified as an additional core treatment component delivered in face-to-face protocols only. In Study 2, we conducted qualitative interviews to describe the perspectives of youth with headache and their parents on successful components of an Internet CBT intervention. Eleven themes emerged from the qualitative data analysis, which broadly focused on patient experiences using the treatment components and suggestions for new treatment components. In the Discussion, these mixed methods findings are integrated to inform the adaptation of an Internet CBT protocol for youth with headache.

  18. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems

    Science.gov (United States)

    Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.

    2013-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887
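    The following toy, written in plain Python rather than BioNetGen or Kappa syntax, only illustrates the core semantics described above: one local rule that ignores a molecule's other sites implies a reaction for every state of those unmentioned parts, and every implied reaction inherits the rule's rate law. The molecules, sites, and rate are invented.

```python
# Toy illustration of rule-based semantics: the rule "A binds B through site b,
# regardless of A's phosphorylation state" expands into one reaction per
# unmentioned state, all sharing the same rate law.
from itertools import product

# Molecule A has a phosphorylation site (U/P) and a binding site for B;
# molecule B has only its binding site.
a_states = [{"phos": p, "b_site": "free"} for p in ("U", "P")]
b_states = [{"a_site": "free"}]

RULE_RATE = 1.0e-3  # every reaction generated by the rule inherits this rate

def apply_binding_rule(a, b):
    """Rule: A(b_site=free) + B(a_site=free) -> A.B complex, any phos state."""
    if a["b_site"] == "free" and b["a_site"] == "free":
        return {"complex": f"A(phos={a['phos']}).B", "rate": RULE_RATE}
    return None

reactions = []
for a, b in product(a_states, b_states):
    r = apply_binding_rule(a, b)
    if r:
        lhs = f"A(phos={a['phos']}, b=free) + B(a=free)"
        reactions.append((lhs, r["complex"], r["rate"]))

for lhs, rhs, rate in reactions:
    print(f"{lhs}  ->  {rhs}   k = {rate}")
# With more unmentioned sites the implied network grows combinatorially,
# which is exactly the bookkeeping the rule notation keeps implicit.
```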

  19. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems.

    Science.gov (United States)

    Chylek, Lily A; Harris, Leonard A; Tung, Chang-Shung; Faeder, James R; Lopez, Carlos F; Hlavacek, William S

    2014-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and posttranslational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). © 2013 Wiley Periodicals, Inc.

  20. An individual-based modelling approach to estimate landscape connectivity for bighorn sheep (Ovis canadensis)

    Directory of Open Access Journals (Sweden)

    Corrie H. Allen

    2016-05-01

    Full Text Available Background. Preserving connectivity, or the ability of a landscape to support species movement, is among the most commonly recommended strategies to reduce the negative effects of climate change and human land use development on species. Connectivity analyses have traditionally used a corridor-based approach and rely heavily on least cost path modeling and circuit theory to delineate corridors. Individual-based models are gaining popularity as a potentially more ecologically realistic method of estimating landscape connectivity. However, this remains a relatively unexplored approach. We sought to explore the utility of a simple, individual-based model as a land-use management support tool in identifying and implementing landscape connectivity. Methods. We created an individual-based model of bighorn sheep (Ovis canadensis) that simulates a bighorn sheep traversing a landscape by following simple movement rules. The model was calibrated for bighorn sheep in the Okanagan Valley, British Columbia, Canada, a region containing isolated herds that are vital to conservation of the species in its northern range. Simulations were run to determine baseline connectivity between subpopulations in the study area. We then applied the model to explore two land management scenarios on simulated connectivity: restoring natural fire regimes and identifying appropriate sites for interventions that would increase road permeability for bighorn sheep. Results. This model suggests there are no continuous areas of good habitat between current subpopulations of sheep in the study area; however, a series of stepping-stones or circuitous routes could facilitate movement between subpopulations and into currently unoccupied, yet suitable, bighorn habitat. Restoring natural fire regimes or mimicking fire with prescribed burns and tree removal could considerably increase bighorn connectivity in this area. Moreover, several key road crossing sites that could benefit from
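    The sketch below is only a generic illustration of the individual-based idea, not the calibrated bighorn model: walkers leave a source patch, prefer higher-quality neighbouring cells on a synthetic habitat raster, and the fraction reaching a target patch serves as a crude connectivity estimate. The movement rules, landscape, and thresholds are all invented.

```python
# Toy individual-based connectivity estimate: habitat-biased random walkers
# between a source and a target patch on a synthetic habitat-quality raster.
import numpy as np

rng = np.random.default_rng(2)
SIZE, N_AGENTS, MAX_STEPS = 40, 500, 400

# Synthetic habitat-quality raster with a noisy corridor along the diagonal.
y, x = np.mgrid[0:SIZE, 0:SIZE]
habitat = np.exp(-((x - y) ** 2) / 60.0) + 0.1 * rng.random((SIZE, SIZE))

source, target = (2, 2), (SIZE - 3, SIZE - 3)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def walk():
    r, c = source
    for _ in range(MAX_STEPS):
        options, weights = [], []
        for dr, dc in MOVES:
            nr, nc = r + dr, c + dc
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                options.append((nr, nc))
                weights.append(habitat[nr, nc])
        w = np.array(weights)
        w /= w.sum()
        r, c = options[rng.choice(len(options), p=w)]   # prefer better habitat
        if abs(r - target[0]) <= 1 and abs(c - target[1]) <= 1:
            return True
    return False

reached = sum(walk() for _ in range(N_AGENTS))
print(f"{reached}/{N_AGENTS} simulated animals reached the target patch "
      f"({reached / N_AGENTS:.1%} connectivity estimate)")
```

    Re-running such a simulation after editing the raster (e.g. raising habitat quality where a prescribed burn or a road-crossing structure is planned) gives the kind of scenario comparison described in the study.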

  1. A Modeling Approach for Plastic-Metal Laser Direct Joining

    Science.gov (United States)

    Lutey, Adrian H. A.; Fortunato, Alessandro; Ascari, Alessandro; Romoli, Luca

    2017-09-01

    Laser processing has been identified as a feasible approach to direct joining of metal and plastic components without the need for adhesives or mechanical fasteners. The present work sees development of a modeling approach for conduction and transmission laser direct joining of these materials based on multi-layer optical propagation theory and numerical heat flow simulation. The scope of this methodology is to predict process outcomes based on the calculated joint interface and upper surface temperatures. Three representative cases are considered for model verification, including conduction joining of PBT and aluminum alloy, transmission joining of optically transparent PET and stainless steel, and transmission joining of semi-transparent PA 66 and stainless steel. Conduction direct laser joining experiments are performed on black PBT and 6082 anticorodal aluminum alloy, achieving shear loads of over 2000 N with specimens of 2 mm thickness and 25 mm width. Comparison with simulation results shows that consistently high strength is achieved where the peak interface temperature is above the plastic degradation temperature. Comparison of transmission joining simulations and published experimental results confirms these findings and highlights the influence of plastic layer optical absorption on process feasibility.

  2. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The various uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  3. Groundwater potentiality mapping using geoelectrical-based aquifer hydraulic parameters: A GIS-based multi-criteria decision analysis modeling approach

    Directory of Open Access Journals (Sweden)

    Kehinde Anthony Mogaji; Hwee San Lim

    2017-01-01

    Full Text Available This study conducted a robust analysis of acquired 2D resistivity imaging data and borehole pumping test (BPT) records to optimize groundwater potentiality mapping in Perak province, Malaysia, using derived aquifer hydraulic properties. The transverse resistance (TR) parameter was determined from the interpreted 2D resistivity imaging data by applying the Dar-Zarrouk parameter equation. Linear regression and GIS techniques were used to regress the estimated TR values against the aquifer transmissivity values extracted from the geospatially produced BPT-based aquifer transmissivity map, yielding the aquifer transmissivity parameter predictive (ATPP) model. The ATPP model, whose reliability was evaluated using the Theil inequality coefficient, was then used to establish geoelectrical-based hydraulic parameter (GHP) modeling equations for transmissivity (Tr), hydraulic conductivity (K), storativity (St), and hydraulic diffusivity (D). Applying the GHP modeling equations to the delineated aquifer media produced aquifer potential conditioning factor maps for Tr, K, St, and D. These maps were combined into an aquifer potential mapping index (APMI) model by applying the multi-criteria decision analysis-analytic hierarchy process principle. The area groundwater reservoir productivity potential map, produced from the processed APMI model estimates in the GIS environment, was found to be 71% accurate. This study establishes a good alternative approach for determining aquifer hydraulic parameters, even in areas where pumping test information is unavailable, using cost-effective geophysical data. The produced map can be explored for hydrological decision making.
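    Two of the quantitative steps named above are simple enough to sketch: computing the Dar-Zarrouk transverse resistance TR = Σ hᵢρᵢ from interpreted layer thicknesses and resistivities, and regressing pumping-test transmissivity on TR to obtain a predictive relation. The layer models, transmissivities, and the linear form of the relation below are invented for illustration and are not the study's data or calibrated equations.

```python
# Hedged sketch: Dar-Zarrouk transverse resistance from layer models, then a
# linear regression of pumping-test transmissivity on TR.
import numpy as np

def transverse_resistance(thicknesses_m, resistivities_ohm_m):
    return float(np.sum(np.asarray(thicknesses_m) * np.asarray(resistivities_ohm_m)))

# Sounding sites with aquifer layer models and co-located pumping-test results.
sites = [
    # ([layer thicknesses, m], [layer resistivities, ohm-m], transmissivity m^2/day)
    ([12.0, 20.0], [35.0, 60.0],  95.0),
    ([ 8.0, 30.0], [25.0, 80.0], 160.0),
    ([15.0, 18.0], [40.0, 55.0], 100.0),
    ([10.0, 40.0], [30.0, 90.0], 230.0),
    ([ 9.0, 25.0], [20.0, 70.0], 120.0),
]

tr = np.array([transverse_resistance(h, rho) for h, rho, _ in sites])
t_obs = np.array([t for _, _, t in sites])

slope, intercept = np.polyfit(tr, t_obs, 1)        # predictive model T ~ a*TR + b
t_pred = slope * tr + intercept
r2 = 1 - np.sum((t_obs - t_pred) ** 2) / np.sum((t_obs - t_obs.mean()) ** 2)

print(f"T = {slope:.3f} * TR + {intercept:.1f}   (R^2 = {r2:.2f})")
new_tr = transverse_resistance([11.0, 28.0], [28.0, 75.0])
print(f"predicted transmissivity at an unpumped site: {slope * new_tr + intercept:.0f} m^2/day")
```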

  4. Modelling the Load Curve of Aggregate Electricity Consumption Using Principal Components

    OpenAIRE

    Matteo Manera; Angelo Marzullo

    2003-01-01

    Since oil is a non-renewable resource with a high environmental impact, and its most common use is to produce combustibles for electricity, reliable methods for modelling electricity consumption can contribute to a more rational employment of this hydrocarbon fuel. In this paper we apply the Principal Components (PC) method to modelling the load curves of Italy, France and Greece on hourly data of aggregate electricity consumption. The empirical results obtained with the PC approach are compa...
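
    As a rough illustration of the principal components idea applied to load curves, the sketch below decomposes a matrix of daily 24-hour consumption profiles via SVD and reconstructs each day from a few components; the data are synthetic placeholders, not the Italian, French or Greek series used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24)
base = 100 + 30 * np.sin((hours - 6) / 24 * 2 * np.pi)   # stylized daily shape
load = base + rng.normal(0, 5, size=(365, 24))            # 365 days x 24 hours

# Principal components via SVD of the centered data matrix.
X = load - load.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Reconstruct each daily curve from the first k components.
k = 3
scores = U[:, :k] * s[:k]
approx = load.mean(axis=0) + scores @ Vt[:k]
print("variance explained by first 3 PCs:", explained[:3].round(3))
```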

  5. A cost-based empirical model of the aggregate price determination for the Turkish economy: A multivariate cointegration approach

    Directory of Open Access Journals (Sweden)

    Zeren Fatma

    2010-01-01

    Full Text Available This paper tries to examine the long run relationships between the aggregate consumer prices and some cost-based components for the Turkish economy. Based on a simple economic model of the macro-scaled price formation, multivariate cointegration techniques have been applied to test whether the real data support the a priori model construction. The results reveal that all of the factors, related to the price determination, have a positive impact on the consumer prices as expected. We find that the most significant component contributing to the price setting is the nominal exchange rate depreciation. We also cannot reject the linear homogeneity of the sum of all the price data as to the domestic inflation. The paper concludes that the Turkish consumer prices have in fact a strong cost-push component that contributes to the aggregate pricing.

  6. Application of a multi-component mean field model to the coarsening behaviour of a nickel-based superalloy

    International Nuclear Information System (INIS)

    Anderson, M.J.; Rowe, A.; Wells, J.; Basoalto, H.C.

    2016-01-01

    A multi-component mean field model has been applied to predict the evolution of the γ′ particles in the nickel-based superalloy IN738LC, capturing the transition from an initial multimodal particle distribution towards a unimodal distribution. Experiments have been performed to measure the coarsening behaviour during isothermal heat treatments using quantitative analysis of micrographs. The three-dimensional size of the γ′ particles has been approximated for use in simulation. A coupled thermodynamic/mean field modelling framework is presented and applied to describe the particle size evolution. A robust numerical implementation of the model is detailed that makes use of surrogate models to capture the thermodynamics. Different descriptions of the particle growth rate of non-dilute particle systems have been explored. A numerical investigation of the influence of scatter in chemical composition upon the particle size distribution evolution has been carried out. It is shown how the tolerance in chemical composition of a given alloy can impact particle coarsening behaviour. Such predictive capability is of interest in understanding variation in component performance and the refinement of chemical composition tolerances. It has been found that the inclusion of misfit strain within the current model formulation does not have a significant effect upon predicted long-term particle coarsening behaviour. Model predictions show good agreement with experimental data. In particular, the model predicts a reduced growth rate of the mean particle size during the transition from bimodal to unimodal distributions.

  7. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    Science.gov (United States)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.

  8. Data base formation for important components of reactor TRIGA MARK II

    International Nuclear Information System (INIS)

    Jordan, R.; Mavko, B.; Kozuh, M.

    1992-01-01

    The paper presents the formation of a specific data base for the TRIGA MARK II reactor in Podgorica. Reactor operation data from 1985 to 1990 were collected and divided into two groups: the first group includes component data and the second covers data on reactor scrams. Time-related and demand-related models were used for data evaluation. Parameters were estimated by the classical method. Similar data bases are useful wherever component unavailabilities may have severe drawbacks. (author) [sl

  9. Formal approach to modeling of modern Information Systems

    Directory of Open Access Journals (Sweden)

    Bálint Molnár

    2016-01-01

    Full Text Available Most recently, the concept of business documents has started to play a double role. On the one hand, a business document (a word-processing text or calculation sheet) can be used as a specification tool; on the other hand, the business document is an immanent constituent of business processes and thereby an essential component of business Information Systems. The recent tendency is that the majority of documents and their contents within business Information Systems remain in semi-structured format, while a lesser part of documents is transformed into schemas of structured databases. In order to keep the emerging situation in hand, we suggest the creation of (1) a theoretical framework for modeling business Information Systems and (2) a design method for practical application based on the theoretical model that provides the structuring principles. The modeling approach, which focuses on documents and their interrelationships with business processes, assists in perceiving the activities of modern Information Systems.

  10. The common component architecture for particle accelerator simulations

    International Nuclear Information System (INIS)

    Dechow, D.R.; Norris, B.; Amundson, J.

    2007-01-01

    Synergia2 is a beam dynamics modeling and simulation application for high-energy accelerators such as the Tevatron at Fermilab and the International Linear Collider, which is now under planning and development. Synergia2 is a hybrid, multilanguage software package comprised of two separate accelerator physics packages (Synergia and MaryLie/Impact) and one high-performance computer science package (PETSc). We describe our approach to producing a set of beam dynamics-specific software components based on the Common Component Architecture specification. Among other topics, we describe particular experiences with the following tasks: using Python steering to guide the creation of interfaces and to prototype components; working with legacy Fortran codes; and an example component-based, beam dynamics simulation.

  11. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    Science.gov (United States)

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-05

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model that consists of hot-spot ignition, low-pressure slow-burning and high-pressure fast-reaction terms for shock initiation of multi-component Plastic Bonded Explosives (PBX). First, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term as well as its reaction rate is obtained through a "mixing rule" of the explosive components; new expressions for both the low-pressure slow burning term and the high-pressure fast reaction term are also obtained by establishing the relationships between the reaction rate of the multi-component PBX explosive and that of its explosive components, based on the low-pressure slow burning term and the high-pressure fast reaction term of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to simulate numerically the shock initiation process of the PBXC03 and the PBXC10 multi-component PBX explosives, and the numerical results of the pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improve quality through specialization, and enable rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed

  13. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. However, adaptive behaviour towards flood risk reduction and the interaction between the government, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution towards overcoming the limitations of traditional large-scale flood risk models in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  14. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Science.gov (United States)

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Models following this approach will provide insights into behaviors (including diversity) that take place at the ecosystem scale.
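
    For readers unfamiliar with the underlying machinery, single-strain flux balance analysis (FBA) is a linear program: maximize a growth objective subject to the steady-state constraint S v = 0 and flux bounds. The sketch below solves a toy FBA problem; the stoichiometric matrix and bounds are illustrative, and the multi-objective extensions (MO-FBA, MO-FVA) described in the abstract are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions); columns: uptake,
# internal conversion, biomass. Values are placeholders.
S = np.array([[1, -1,  0],
              [0,  1, -1]])
bounds = [(0, 10), (0, 10), (0, 10)]

# FBA: maximize biomass flux subject to S v = 0 (linprog minimizes, so negate).
c = np.zeros(S.shape[1])
c[-1] = -1.0
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, "growth:", -res.fun)
```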

  15. Image edge detection based tool condition monitoring with morphological component analysis.

    Science.gov (United States)

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

    The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet the need, this study proposes a novel tool wear monitoring approach based on the monitored image edge detection. Image edge detection has been a fundamental tool to obtain features of images. This approach extracts the tool edge with morphological component analysis. Through the decomposition of the original tool wear image, the approach reduces the influence of texture and noise for edge measurement. Based on the target image sparse representation and edge detection, the approach could accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to the celebrated algorithms developed in the literature, this approach improves the integrity and connectivity of edges, and the results have shown that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Thermodynamically consistent modeling and simulation of multi-component two-phase flow model with partial miscibility

    KAUST Repository

    Kou, Jisheng; Sun, Shuyu

    2016-01-01

    A general diffuse interface model with a realistic equation of state (e.g. Peng-Robinson equation of state) is proposed to describe the multi-component two-phase fluid flow based on the principles of the NVT-based framework which is a latest

  17. Fast simulation approaches for power fluctuation model of wind farm based on frequency domain

    DEFF Research Database (Denmark)

    Lin, Jin; Gao, Wen-zhong; Sun, Yuan-zhang

    2012-01-01

    This paper discusses one model developed by Riso, DTU, which is capable of simulating the power fluctuation of large wind farms in frequency domain. In the original design, the “frequency-time” transformations are time-consuming and might limit the computation speed for a wind farm of large size....... Under this background, this paper proposes four efficient approaches to accelerate the simulation speed. Two of them are based on physical model simplifications, and the other two improve the numerical computation. The case study demonstrates the efficiency of these approaches. The acceleration ratio...... is more than 300 times if all these approaches are adopted, in any low, medium and high wind speed test scenarios....

  18. An approach for characterizing the distribution of shrubland ecosystem components as continuous fields as part of NLCD

    Science.gov (United States)

    Xian, George Z.; Homer, Collin G.; Meyer, Debbie; Granneman, Brian J.

    2013-01-01

    Characterizing and quantifying distributions of shrubland ecosystem components is one of the major challenges for monitoring shrubland vegetation cover change across the United States. A new approach has been developed to quantify shrubland components as fractional products within National Land Cover Database (NLCD). This approach uses remote sensing data and regression tree models to estimate the fractional cover of shrubland ecosystem components. The approach consists of three major steps: field data collection, high resolution estimates of shrubland ecosystem components using WorldView-2 imagery, and coarse resolution estimates of these components across larger areas using Landsat imagery. This research seeks to explore this method to quantify shrubland ecosystem components as continuous fields in regions that contain wide-ranging shrubland ecosystems. Fractional cover of four shrubland ecosystem components, including bare ground, herbaceous, litter, and shrub, as well as shrub heights, were delineated in three ecological regions in Arizona, Florida, and Texas. Results show that estimates for most components have relatively small normalized root mean square errors and significant correlations with validation data in both Arizona and Texas. The distribution patterns of shrub height also show relatively high accuracies in these two areas. The fractional cover estimates of shrubland components, except for litter, are not well represented in the Florida site. The research results suggest that this method provides good potential to effectively characterize shrubland ecosystem conditions over perennial shrubland although it is less effective in transitional shrubland. The fractional cover of shrub components as continuous elements could offer valuable information to quantify biomass and help improve thematic land cover classification in arid and semiarid areas.

  19. User Preference-Based Dual-Memory Neural Model With Memory Consolidation Approach.

    Science.gov (United States)

    Nasir, Jauwairia; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan

    2018-06-01

    Memory modeling has been a popular topic of research for improving the performance of autonomous agents in cognition related problems. Apart from learning distinct experiences correctly, significant or recurring experiences are expected to be learned better and be retrieved easier. In order to achieve this objective, this paper proposes a user preference-based dual-memory adaptive resonance theory network model, which makes use of a user preference to encode memories with various strengths and to learn and forget at various rates. Over a period of time, memories undergo a consolidation-like process at a rate proportional to the user preference at the time of encoding and the frequency of recall of a particular memory. Consolidated memories are easier to recall and are more stable. This dual-memory neural model generates distinct episodic memories and a flexible semantic-like memory component. This leads to an enhanced retrieval mechanism of experiences through two routes. The simulation results are presented to evaluate the proposed memory model based on various kinds of cues over a number of trials. The experimental results on Mybot are also presented. The results verify that not only are distinct experiences learned correctly but also that experiences associated with higher user preference and recall frequency are consolidated earlier. Thus, these experiences are recalled more easily relative to the unconsolidated experiences.

  20. Tweaking the Four-Component Model

    Science.gov (United States)

    Curzer, Howard J.

    2014-01-01

    By maintaining that moral functioning depends upon four components (sensitivity, judgment, motivation, and character), the Neo-Kohlbergian account of moral functioning allows for uneven moral development within individuals. However, I argue that the four-component model does not go far enough. I offer a more accurate account of moral functioning…

  1. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    Science.gov (United States)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
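
    The core of the residual-monitoring idea is simple to sketch: compare sensed outputs against model-predicted outputs and flag samples whose normalized residual exceeds a threshold. The example below uses synthetic data and a hypothetical threshold, and omits the piecewise-linear engine model and trim-point update described in the abstract.

```python
import numpy as np

def detect_anomalies(sensed, predicted, nominal_std, threshold=3.0):
    """Flag samples whose normalized residual exceeds the threshold."""
    z = (sensed - predicted) / nominal_std
    return np.abs(z) > threshold

rng = np.random.default_rng(1)
predicted = np.full(500, 1000.0)               # model output (placeholder channel)
sensed = predicted + rng.normal(0, 5, 500)     # nominal sensor noise
sensed[300:320] += 40                          # seeded fault
flags = detect_anomalies(sensed, predicted, nominal_std=5.0)
print("first anomalous samples:", np.flatnonzero(flags)[:5])
```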

  2. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event......-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms...... of information structures. The general event concept can be used to guide systems analysis and design and to improve modeling approaches....

  3. Modelling of ductile and cleavage fracture by local approach

    International Nuclear Information System (INIS)

    Samal, M.K.; Dutta, B.K.; Kushwaha, H.S.

    2000-08-01

    This report describes the modelling of ductile and cleavage fracture processes by the local approach. It is now well known that the conventional fracture mechanics method based on single parameter criteria is not adequate to model the fracture processes. This is because of the effect of flaw size and geometry, loading type and loading rate on the fracture resistance behaviour of any structure. Hence, it is questionable to use the same fracture resistance curves as determined from standard tests in the analysis of real-life components because of all the above effects. So, there is a need for a method in which the parameters used for the analysis will be true material properties, i.e. independent of geometry and size. One of the solutions to the above problem is the use of local approaches. These approaches have been extensively studied and applied to different materials (including SA33 Gr.6) in this report. Each method has been studied and reported in a separate section. This report has been divided into five sections. Section-I gives a brief review of the fundamentals of the fracture process. Section-II deals with modelling of ductile fracture by locally uncoupled models. In this section, the critical cavity growth parameters of the different models have been determined for the primary heat transport (PHT) piping material of the Indian pressurised heavy water reactor (PHWR). A comparative study has been done among different models. The dependency of the critical parameters on the stress triaxiality factor has also been studied. It is observed that Rice and Tracey's model is the most suitable one. However, its parameters are not fully independent of the triaxiality factor. For this purpose, a modification to Rice and Tracey's model is suggested in Section-III. Section-IV deals with modelling of the ductile fracture process by locally coupled models. Section-V deals with the modelling of the cleavage fracture process by Beremin's model, which is based on Weibull's
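
    Among the locally uncoupled models discussed, the Rice and Tracey description is often written as d(ln R/R0) = 0.283 exp(1.5 sigma_m/sigma_eq) d(eps_p). A minimal sketch of integrating that law along a prescribed loading history is given below; the triaxiality and strain increments are hypothetical, and the report's modified formulation is not reproduced.

```python
import numpy as np

def rice_tracey_growth(triaxiality, dep, alpha=0.283):
    """Integrate the Rice-Tracey void growth law:
    d(ln R/R0) = alpha * exp(1.5 * T) * d(eps_p), with T = sigma_m / sigma_eq."""
    dlog = alpha * np.exp(1.5 * np.asarray(triaxiality)) * np.asarray(dep)
    return np.exp(np.cumsum(dlog))              # R/R0 history

steps = 50
T = np.full(steps, 1.0)                         # constant triaxiality (placeholder)
dep = np.full(steps, 0.01)                      # equal plastic strain increments
ratio = rice_tracey_growth(T, dep)
# Ductile failure is commonly assumed when R/R0 reaches a critical material value.
print("final R/R0:", round(ratio[-1], 3))
```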

  4. Modeling money demand components in Lebanon using autoregressive models

    International Nuclear Information System (INIS)

    Mourad, M.

    2008-01-01

    This paper analyses the monetary aggregate in Lebanon and its different components using the AR model methodology. Thirteen variables in monthly data have been studied for the period January 1990 through December 2005. Using the Augmented Dickey-Fuller (ADF) procedure, twelve variables are integrated of order 1 and thus need the filter (1-B) to become stationary, whereas the variable X13,t (claims on private sector) becomes stationary with the filter (1-B)(1-B^12). The ex-post forecasts have been calculated for twelve horizons and for one horizon (one-step-ahead forecast). The quality of the forecasts has been measured using the MAPE criterion, for which the forecasts are good because the MAPE values are low. Finally, a continuation of this research using the cointegration approach is proposed. (author)
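
    A hedged sketch of the stationarity workflow described here, using statsmodels' ADF test on a synthetic monthly series and then applying the (1-B) and (1-B^12) filters; the data are placeholders, not the Lebanese series.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
t = np.arange(192)   # 16 years of monthly observations (synthetic)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)

stat, pvalue = adfuller(y)[:2]
print(f"ADF on levels: stat={stat:.2f}, p={pvalue:.3f}")

dy = np.diff(y)                    # (1-B) y_t
dy_seasonal = dy[12:] - dy[:-12]   # (1-B)(1-B^12) y_t
stat2, pvalue2 = adfuller(dy_seasonal)[:2]
print(f"ADF after (1-B)(1-B^12): stat={stat2:.2f}, p={pvalue2:.3f}")
```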

  5. Three-dimensional NDE of VHTR core components via simulation-based testing. Final report

    International Nuclear Information System (INIS)

    Guzina, Bojan; Kunerth, Dennis

    2014-01-01

    A next generation, simulation-driven-and-enabled testing platform is developed for the 3D detection and characterization of defects and damage in nuclear graphite and composite structures in Very High Temperature Reactors (VHTRs). The proposed work addresses the critical need for the development of high-fidelity Non-Destructive Examination (NDE) technologies for as-manufactured and replaceable in-service VHTR components. Centered around the novel use of elastic (sonic and ultrasonic) waves, this project deploys a robust, non-iterative inverse solution for the 3D defect reconstruction together with a non-contact, laser-based approach to the measurement of experimental waveforms in VHTR core components. In particular, this research (1) deploys three-dimensional Scanning Laser Doppler Vibrometry (3D SLDV) as a means to accurately and remotely measure 3D displacement waveforms over the accessible surface of a VHTR core component excited by mechanical vibratory source; (2) implements a powerful new inverse technique, based on the concept of Topological Sensitivity (TS), for non-iterative elastic waveform tomography of internal defects - that permits robust 3D detection, reconstruction and characterization of discrete damage (e.g. holes and fractures) in nuclear graphite from limited-aperture NDE measurements; (3) implements state-of-the art computational (finite element) model that caters for accurately simulating elastic wave propagation in 3D blocks of nuclear graphite; (4) integrates the SLDV testing methodology with the TS imaging algorithm into a non-contact, high-fidelity NDE platform for the 3D reconstruction and characterization of defects and damage in VHTR core components; and (5) applies the proposed methodology to VHTR core component samples (both two- and three-dimensional) with a priori induced, discrete damage in the form of holes and fractures. Overall, the newly established SLDV-TS testing platform represents a next-generation NDE tool that surpasses

  6. Three-dimensional NDE of VHTR core components via simulation-based testing. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Guzina, Bojan [Univ. of Minnesota, Minneapolis, MN (United States); Kunerth, Dennis [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-09-30

    A next generation, simulation-driven-and-enabled testing platform is developed for the 3D detection and characterization of defects and damage in nuclear graphite and composite structures in Very High Temperature Reactors (VHTRs). The proposed work addresses the critical need for the development of high-fidelity Non-Destructive Examination (NDE) technologies for as-manufactured and replaceable in-service VHTR components. Centered around the novel use of elastic (sonic and ultrasonic) waves, this project deploys a robust, non-iterative inverse solution for the 3D defect reconstruction together with a non-contact, laser-based approach to the measurement of experimental waveforms in VHTR core components. In particular, this research (1) deploys three-dimensional Scanning Laser Doppler Vibrometry (3D SLDV) as a means to accurately and remotely measure 3D displacement waveforms over the accessible surface of a VHTR core component excited by mechanical vibratory source; (2) implements a powerful new inverse technique, based on the concept of Topological Sensitivity (TS), for non-iterative elastic waveform tomography of internal defects - that permits robust 3D detection, reconstruction and characterization of discrete damage (e.g. holes and fractures) in nuclear graphite from limited-aperture NDE measurements; (3) implements state-of-the art computational (finite element) model that caters for accurately simulating elastic wave propagation in 3D blocks of nuclear graphite; (4) integrates the SLDV testing methodology with the TS imaging algorithm into a non-contact, high-fidelity NDE platform for the 3D reconstruction and characterization of defects and damage in VHTR core components; and (5) applies the proposed methodology to VHTR core component samples (both two- and three-dimensional) with a priori induced, discrete damage in the form of holes and fractures. Overall, the newly established SLDV-TS testing platform represents a next-generation NDE tool that surpasses

  7. Sediment transport modelling in a distributed physically based hydrological catchment model

    Directory of Open Access Journals (Sweden)

    M. Konz

    2011-09-01

    Full Text Available Bedload sediment transport and erosion processes in channels are important components of water induced natural hazards in alpine environments. A raster based distributed hydrological model, TOPKAPI, has been further developed to support continuous simulations of river bed erosion and deposition processes. The hydrological model simulates all relevant components of the water cycle and non-linear reservoir methods are applied for water fluxes in the soil, on the ground surface and in the channel. The sediment transport simulations are performed on a sub-grid level, which allows for a better discretization of the channel geometry, whereas water fluxes are calculated on the grid level in order to be CPU efficient. Several transport equations as well as the effects of an armour layer on the transport threshold discharge are considered. Flow resistance due to macro roughness is also considered. The advantage of this approach is the integrated simulation of the entire basin runoff response combined with hillslope-channel coupled erosion and transport simulation. The comparison with the modelling tool SETRAC demonstrates the reliability of the modelling concept. The devised technique is very fast and of comparable accuracy to the more specialised sediment transport model SETRAC.

  8. A Short Interspersed Nuclear Element (SINE)-Based Real-Time PCR Approach to Detect and Quantify Porcine Component in Meat Products.

    Science.gov (United States)

    Zhang, Chi; Fang, Xin; Qiu, Haopu; Li, Ning

    2015-01-01

    Real-time PCR amplification of mitochondrial genes could not be used for DNA quantification, and that of single-copy DNA did not allow ideal sensitivity. Moreover, cross-reactions among similar species were commonly observed in the published methods amplifying repetitive sequences, which hindered their further application. The purpose of this study was to establish a short interspersed nuclear element (SINE)-based real-time PCR approach having high specificity for species detection that could be used in DNA quantification. After massive screening of candidate Sus scrofa SINEs, one optimal combination of primers and probe was selected, which had no cross-reaction with other common meat species. The LOD of the method was 44 fg DNA/reaction. Further, quantification tests showed this approach was practical in DNA estimation without tissue variance. Thus, this study provided a new tool for qualitative detection of the porcine component, which could be promising in the QC of meat products.

  9. An Integrated Model of Co-ordinated Community-Based Care.

    Science.gov (United States)

    Scharlach, Andrew E; Graham, Carrie L; Berridge, Clara

    2015-08-01

    Co-ordinated approaches to community-based care are a central component of current and proposed efforts to help vulnerable older adults obtain needed services and supports and reduce unnecessary use of health care resources. This study examines ElderHelp Concierge Club, an integrated community-based care model that includes comprehensive personal and environmental assessment, multilevel care co-ordination, a mix of professional and volunteer service providers, and a capitated, income-adjusted fee model. Evaluation includes a retrospective study (n = 96) of service use and perceived program impact, and a prospective study (n = 21) of changes in participant physical and social well-being and health services utilization. Over the period of this study, participants showed greater mobility, greater ability to meet household needs, greater access to health care, reduced social isolation, reduced home hazards, fewer falls, and greater perceived ability to obtain assistance needed to age in place. This study provides preliminary evidence that an integrated multilevel care co-ordination approach may be an effective and efficient model for serving vulnerable community-based elders, especially low and moderate-income elders who otherwise could not afford the cost of care. The findings suggest the need for multisite controlled studies to more rigorously evaluate program impacts and the optimal mix of various program components. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Bank lending, expenditure components and inflation in South Africa: assessment from bounds testing approach

    Directory of Open Access Journals (Sweden)

    Emmanuel Ziramba

    2011-09-01

    Full Text Available This empirical study examines the long-run relationship between inflation and its determinants in South Africa. Three models of inflation involving money supply, bank credit and expenditure components are tested using the unrestricted error correction models of Pesaran et al. (2001). Unlike other existing studies on the subject, one of the models in the present study considers various components of real income as determinants. The disaggregated components are final consumption expenditure, expenditure on investment goods and exports. Based on ‘bounds’ testing, the presence of a long-run equilibrium relationship between inflation and its determinants is confirmed for all three models. The study found that the major causes of inflation in South Africa are import prices, real income, and final consumption expenditure. The relationship is elastic for import prices and final consumption expenditure. Monetary variables, money supply and bank credit are found to have an indirect effect on inflation.

  11. Robust and Effective Component-based Banknote Recognition by SURF Features.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, YingLi

    2011-01-01

    Camera-based computer vision technology is able to assist visually impaired people to automatically recognize banknotes. A good banknote recognition algorithm for blind or visually impaired people should have the following features: 1) 100% accuracy, and 2) robustness to various conditions in different environments and occlusions. Most existing algorithms of banknote recognition are limited to work for restricted conditions. In this paper we propose a component-based framework for banknote recognition by using Speeded Up Robust Features (SURF). The component-based framework is effective in collecting more class-specific information and robust in dealing with partial occlusion and viewpoint changes. Furthermore, the evaluation of SURF demonstrates its effectiveness in handling background noise, image rotation, scale, and illumination changes. To authenticate the robustness and generalizability of the proposed approach, we have collected a large dataset of banknotes from a variety of conditions including occlusion, cluttered background, rotation, and changes of illumination, scaling, and viewpoints. The proposed algorithm achieves 100% recognition rate on our challenging dataset.

  12. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    Science.gov (United States)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a sensitivity analysis method applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence in
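
    The process-level idea can be illustrated with a one-at-a-time perturbation: scale the output of each component process and record the relative change in model output. The two-process toy model below is purely illustrative and is not the authors' implementation.

```python
import numpy as np

def toy_model(scale):
    """Stand-in process-based model with two coupled processes; `scale`
    multiplies the output of each process."""
    evapotranspiration = scale["et"] * 2.0                  # process 1 (arbitrary units)
    drainage = scale["drain"] * (5.0 - evapotranspiration)  # process 2 depends on process 1
    return drainage

def process_sensitivity(model, processes, delta=0.1):
    """Relative change in output per relative perturbation of each process."""
    base = model({p: 1.0 for p in processes})
    out = {}
    for p in processes:
        scales = {q: 1.0 for q in processes}
        scales[p] = 1.0 + delta
        out[p] = (model(scales) - base) / (base * delta)
    return out

print(process_sensitivity(toy_model, ["et", "drain"]))
```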

  13. Environmental risk assessment of biocidal products: identification of relevant components and reliability of a component-based mixture assessment.

    Science.gov (United States)

    Coors, Anja; Vollmar, Pia; Heim, Jennifer; Sacher, Frank; Kehrer, Anja

    2018-01-01

    Biocidal products are mixtures of one or more active substances (a.s.) and a broad range of formulation additives. There is regulatory guidance currently under development that will specify how the combined effects of the a.s. and any relevant formulation additives shall be considered in the environmental risk assessment of biocidal products. The default option is a component-based approach (CBA) by which the toxicity of the product is predicted from the toxicity of 'relevant' components using concentration addition. Hence, unequivocal and practicable criteria are required for identifying the 'relevant' components to ensure protectiveness of the CBA, while avoiding unnecessary workload resulting from including by default components that do not significantly contribute to the product toxicity. The present study evaluated a set of different criteria for identifying 'relevant' components using confidential information on the composition of 21 wood preservative products. Theoretical approaches were complemented by experimentally testing the aquatic toxicity of seven selected products. For three of the seven tested products, the toxicity was underestimated for the most sensitive endpoint (green algae) by more than a factor of 2 if only the a.s. were considered in the CBA. This illustrated the necessity of including at least some additives along with the a.s. Considering additives that were deemed 'relevant' by the tentatively established criteria reduced the underestimation of toxicity for two of the three products. A lack of data for one specific additive was identified as the most likely reason for the remaining toxicity underestimation of the third product. In three other products, toxicity was overestimated by more than a factor of 2, while prediction and observation fitted well for the seventh product. Considering all additives in the prediction only increased the degree of overestimation. Supported by theoretical calculations and experimental verifications, the present
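
    The component-based prediction rests on concentration addition, which for a fixed mixture ratio gives EC50_mix = 1 / sum_i(p_i / EC50_i). A minimal sketch with placeholder fractions and effect concentrations (not data from the study):

```python
def ca_mixture_ec50(fractions, ec50s):
    """Concentration-addition estimate of a mixture EC50 from component
    fractions (summing to 1) and individual EC50 values."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

# Hypothetical product: one active substance plus two formulation additives;
# fractions and algal EC50 values (mg/L) are placeholders.
fractions = [0.10, 0.30, 0.60]
ec50s = [0.5, 20.0, 200.0]
print(f"predicted mixture EC50: {ca_mixture_ec50(fractions, ec50s):.2f} mg/L")
```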

  14. A general mixed boundary model reduction method for component mode synthesis

    International Nuclear Information System (INIS)

    Voormeeren, S N; Van der Valk, P L C; Rixen, D J

    2010-01-01

    A classic issue in component mode synthesis (CMS) methods is the choice of fixed or free boundary conditions at the interface degrees of freedom (DoF) and the associated vibration modes in the component's reduction basis. In this paper, a novel mixed boundary CMS method called the 'Mixed Craig-Bampton' method is proposed. The method is derived by dividing the substructure DoF into a set of internal DoF, free interface DoF and fixed interface DoF. To this end a simple but effective scheme is introduced that, for every pair of interface DoF, selects a free or fixed boundary condition for each DoF individually. Based on this selection a reduction basis is computed consisting of vibration modes, static constraint modes and static residual flexibility modes. In order to assemble the reduced substructures a novel mixed assembly procedure is developed. It is shown that this approach leads to relatively sparse reduced matrices, whereas other mixed boundary methods often lead to full matrices. As such, the Mixed Craig-Bampton method forms a natural generalization of the classic Craig-Bampton and more recent Dual Craig-Bampton methods. Finally, the method is applied to a finite element test model. Analysis reveals that the proposed method has comparable or better accuracy and superior versatility with respect to the existing methods.
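
    For orientation, the classic fixed-interface Craig-Bampton basis (which the proposed method generalizes) consists of static constraint modes plus a truncated set of fixed-interface normal modes. The sketch below builds such a basis for a small random test system; the matrices are placeholders, and the mixed free/fixed selection scheme of the paper is not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton_basis(K, M, boundary, n_modes):
    """Fixed-interface Craig-Bampton reduction basis:
    static constraint modes plus the first fixed-interface normal modes."""
    n = K.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)                  # internal DoF
    Kii, Kib, Mii = K[np.ix_(i, i)], K[np.ix_(i, b)], M[np.ix_(i, i)]

    psi = -np.linalg.solve(Kii, Kib)                   # static constraint modes
    _, phi = eigh(Kii, Mii)                            # fixed-interface modes
    phi = phi[:, :n_modes]

    nb = len(b)
    T = np.zeros((n, nb + n_modes))                    # basis [constraint | modes]
    T[np.ix_(b, np.arange(nb))] = np.eye(nb)
    T[np.ix_(i, np.arange(nb))] = psi
    T[np.ix_(i, np.arange(nb, nb + n_modes))] = phi
    return T

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 8))
K = A @ A.T + 8 * np.eye(8)                            # SPD stand-in stiffness
M = np.eye(8)                                          # stand-in mass
T = craig_bampton_basis(K, M, boundary=[0, 1], n_modes=3)
K_red, M_red = T.T @ K @ T, T.T @ M @ T                # reduced matrices
print(K_red.shape)
```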

  15. Probabilistic fatigue life prediction methodology for notched components based on simple smooth fatigue tests

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Z. R.; Li, Z. X. [Dept.of Engineering Mechanics, Jiangsu Key Laboratory of Engineering Mechanics, Southeast University, Nanjing (China); Hu, X. T.; Xin, P. P.; Song, Y. D. [State Key Laboratory of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing (China)

    2017-01-15

    The methodology of probabilistic fatigue life prediction for notched components based on smooth specimens is presented. Weakest-link theory incorporating the Walker strain model has been utilized in this approach. The effects of stress ratio and stress gradient have been considered. The Weibull distribution and the median rank estimator are used to describe fatigue statistics. Fatigue tests under different stress ratios were conducted on smooth and notched specimens of titanium alloy TC-1-1. The proposed procedures were checked against the test data of TC-1-1 notched specimens. Prediction results at a 50% survival rate are all within a factor-of-two scatter band of the test results.
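
    The weakest-link part of such a methodology treats the notched component as a chain of volume elements whose survival probabilities multiply under a Weibull model, P_f = 1 - exp(-sum_i (V_i/V0)(sigma_i/sigma0)^m). The sketch below evaluates that expression for a hypothetical stress field; the parameters are placeholders and the Walker strain correction is not included.

```python
import numpy as np

def weakest_link_pof(stresses, volumes, sigma0, m, v0=1.0):
    """Weakest-link failure probability under a two-parameter Weibull model:
    P_f = 1 - exp(-sum_i (V_i/V0) * (sigma_i/sigma0)**m)."""
    s = np.maximum(np.asarray(stresses, dtype=float), 0.0)
    risk = np.sum(np.asarray(volumes) / v0 * (s / sigma0) ** m)
    return 1.0 - np.exp(-risk)

# Hypothetical element stresses (MPa) decaying away from the notch root and
# element volumes (mm^3); Weibull parameters are placeholders.
stresses = np.linspace(600, 300, 20)
volumes = np.full(20, 0.5)
print(f"P_f = {weakest_link_pof(stresses, volumes, sigma0=800.0, m=12):.4f}")
```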

  16. Bounded Rational Managers Struggle with Talent Management - An Agent-based Modelling Approach

    DEFF Research Database (Denmark)

    Adamsen, Billy; Thomsen, Svend Erik

    This study applies an agent-based modeling approach to explore some aspects of an important managerial task: finding and cultivating talented individuals capable of creating value for their organization at some future state. Given that the term talent in talent management is an empty signifier...... and its denotative meaning floating, we propose that bounded rational managers base their decisions on a simple heuristic, i.e. selecting and cultivating individuals so that their capabilities resemble their own capabilities the most (Adamsen 2015). We model the consequences of this talent management...... heuristic by varying the capabilities of today’s managers, which in turn impact which individuals will be selected as talent. We model the average level of capabilities and the distribution thereof in the sample where managers identify and select individuals from. We consider varying degrees of path...

  17. Independent component and pathway-based analysis of miRNA-regulated gene expression in a model of type 1 diabetes

    Directory of Open Access Journals (Sweden)

    Hagedorn Peter H

    2011-02-01

    Full Text Available Abstract Background Several approaches have been developed for miRNA target prediction, including methods that incorporate expression profiling. However, the methods are still in need of improvements due to a high false discovery rate. So far, none of the methods have used independent component analysis (ICA). Here, we developed a novel target prediction method based on ICA that incorporates both seed matching and expression profiling of miRNA and mRNA expressions. The method was applied on a cellular model of type 1 diabetes. Results Microarray profiling identified eight miRNAs (miR-124/128/192/194/204/375/672/708) with differential expression. Applying ICA on the mRNA profiling data revealed five significant independent components (ICs) correlating to the experimental conditions. The five ICs also captured the miRNA expressions by explaining >97% of their variance. By using ICA, seven of the eight miRNAs showed significant enrichment of sequence-predicted targets, compared to only four miRNAs when using simple negative correlation. The ICs were enriched for miRNA targets that function in diabetes-relevant pathways, e.g. type 1 and type 2 diabetes and maturity onset diabetes of the young (MODY). Conclusions In this study, ICA was applied as an attempt to separate the various factors that influence the mRNA expression in order to identify miRNA targets. The results suggest that ICA is better at identifying miRNA targets than negative correlation. Additionally, combining ICA and pathway analysis constitutes a means for prioritizing between the predicted miRNA targets. Applying the method on a model of type 1 diabetes resulted in identification of eight miRNAs that appear to affect pathways of relevance to disease mechanisms in diabetes.
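
    A hedged sketch of the ICA step only, using scikit-learn's FastICA on a placeholder expression matrix; selecting genes with extreme loadings on a component mimics the idea of candidate target sets, while the correlation with miRNA expression and the pathway analysis are not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
expression = rng.normal(size=(12, 2000))       # placeholder: 12 samples x 2000 genes

ica = FastICA(n_components=5, random_state=0, max_iter=1000)
sample_modes = ica.fit_transform(expression)   # per-sample activation of each IC
gene_loadings = ica.components_                # (5, 2000) gene weights per IC

# Genes with extreme loadings on an IC form a candidate co-regulated set;
# a simple z-score cut-off is used here purely for illustration.
z = (gene_loadings - gene_loadings.mean(axis=1, keepdims=True)) \
    / gene_loadings.std(axis=1, keepdims=True)
candidates_ic0 = np.flatnonzero(np.abs(z[0]) > 2.5)
print(len(candidates_ic0), "candidate genes on the first IC")
```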

  18. ROSMOD: A Toolsuite for Modeling, Generating, Deploying, and Managing Distributed Real-time Component-based Software using ROS

    Directory of Open Access Journals (Sweden)

    Pranav Srinivas Kumar

    2016-09-01

    Full Text Available This paper presents the Robot Operating System Model-driven development tool suite (ROSMOD), an integrated development environment for rapidly prototyping component-based software for the Robot Operating System (ROS) middleware. ROSMOD is well suited for the design, development and deployment of large-scale distributed applications on embedded devices. We present the various features of ROSMOD including the modeling language, the graphical user interface, code generators, and deployment infrastructure. We demonstrate the utility of this tool with a real-world case study: an Autonomous Ground Support Equipment (AGSE) robot that was designed and prototyped using ROSMOD for the NASA Student Launch competition, 2014–2015.

  19. Prospective memory after moderate-to-severe traumatic brain injury: a multinomial modeling approach.

    Science.gov (United States)

    Pavawalla, Shital P; Schmitter-Edgecombe, Maureen; Smith, Rebekah E

    2012-01-01

    Prospective memory (PM), which can be understood as the processes involved in realizing a delayed intention, is consistently found to be impaired after a traumatic brain injury (TBI). Although PM can be empirically dissociated from retrospective memory, it inherently involves both a prospective component (i.e., remembering that an action needs to be carried out) and retrospective components (i.e., remembering what action needs to be executed and when). This study utilized a multinomial processing tree model to disentangle the prospective (that) and retrospective recognition (when) components underlying PM after moderate-to-severe TBI. Seventeen participants with moderate to severe TBI and 17 age- and education-matched control participants completed an event-based PM task that was embedded within an ongoing computer-based color-matching task. The multinomial processing tree modeling approach revealed a significant group difference in the prospective component, indicating that the control participants allocated greater preparatory attentional resources to the PM task compared to the TBI participants. Participants in the TBI group were also found to be significantly more impaired than controls in the when aspect of the retrospective component. These findings indicated that the TBI participants had greater difficulty allocating the necessary preparatory attentional resources to the PM task and greater difficulty discriminating between PM targets and nontargets during task execution, despite demonstrating intact posttest recall and/or recognition of the PM tasks and targets.

  20. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restricting assumption that all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems, which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In the previous models, there is a restricting assumption that all components of a subsystem must be homogeneous. • The presented model provides a possibility for the subsystems’ components to be non-homogeneous in the required conditions. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
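
    The objective function behind such a model is the reliability of a series arrangement of parallel subsystems, which is easy to evaluate even when the components within a subsystem are non-homogeneous. A minimal sketch with illustrative reliabilities (the GA search itself is not shown):

```python
import numpy as np

def series_parallel_reliability(subsystems):
    """Reliability of a series system of parallel subsystems; components within
    a subsystem may be non-homogeneous (different reliabilities)."""
    r_sys = 1.0
    for comps in subsystems:
        r_sub = 1.0 - np.prod([1.0 - r for r in comps])   # parallel redundancy
        r_sys *= r_sub                                     # series combination
    return r_sys

# Illustrative design: subsystem 1 mixes two component types, subsystem 2 uses
# three identical components, subsystem 3 mixes three types.
design = [[0.90, 0.85],
          [0.80, 0.80, 0.80],
          [0.95, 0.70, 0.75]]
print(f"system reliability: {series_parallel_reliability(design):.4f}")
```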

  1. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-01-01

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  2. Semi-mechanistic partial buffer approach to modeling pH, the buffer properties, and the distribution of ionic species in complex solutions.

    Science.gov (United States)

    Dougherty, Daniel P; Da Conceicao Neta, Edith Ramos; McFeeters, Roger F; Lubkin, Sharon R; Breidt, Frederick

    2006-08-09

    In many biological science and food processing applications, it is very important to control or modify pH. However, the complex, unknown composition of biological media and foods often limits the utility of purely theoretical approaches to modeling pH and calculating the distributions of ionizable species. This paper provides general formulas and efficient algorithms for predicting the pH, titration, ionic species concentrations, buffer capacity, and ionic strength of buffer solutions containing both defined and undefined components. A flexible, semi-mechanistic, partial buffering (SMPB) approach is presented that uses local polynomial regression to model the buffering influence of complex or undefined components in a solution, while identified components of known concentration are modeled using expressions based on extensions of the standard acid-base theory. The SMPB method is implemented in a freeware package, (pH)Tools, for use with Matlab. We validated the predictive accuracy of these methods by using strong acid titrations of cucumber slurries to predict the amount of a weak acid required to adjust pH to selected target values.
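
    The fully mechanistic part of such an approach can be sketched for a defined monoprotic buffer by solving the charge balance with a root finder; the semi-mechanistic local-regression treatment of undefined components is not reproduced here, and activity corrections are ignored.

```python
import numpy as np
from scipy.optimize import brentq

def buffer_ph(c_acid, c_salt, pka, pkw=14.0):
    """pH of a weak-acid/conjugate-base buffer from the charge balance
    [H+] + [Na+] = [OH-] + [A-], ignoring activity corrections."""
    ka, kw = 10.0 ** -pka, 10.0 ** -pkw
    c_total, na = c_acid + c_salt, c_salt

    def charge_balance(h):
        a_minus = c_total * ka / (h + ka)
        return h + na - kw / h - a_minus

    h = brentq(charge_balance, 1e-14, 1.0)
    return -np.log10(h)

# 0.1 M acetic acid + 0.1 M sodium acetate (pKa ~ 4.76): pH should be near the pKa.
print(f"pH = {buffer_ph(0.1, 0.1, pka=4.76):.2f}")
```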

  3. Evaluation of cutting force uncertainty components in turning

    DEFF Research Database (Denmark)

    Axinte, Dragos Aurelian; Belluco, Walter; De Chiffre, Leonardo

    2000-01-01

    A procedure is proposed for the evaluation of those uncertainty components of a single cutting force measurement in turning that are related to the contributions of the dynamometer calibration and the cutting process itself. Based on an empirical model including errors from both sources, the uncertainty for a single measurement of cutting force is presented, and expressions for the expected uncertainty vs. cutting parameters are proposed. This approach gives the possibility of evaluating cutting force uncertainty components in turning, for a defined range of cutting parameters, based on few...

  4. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in buildings is one of the key issues from an environmental point of view, as it is in the industrial, transportation and residential sectors. Half of the total energy consumption in a building is accounted for by HVAC (Heating, Ventilating and Air Conditioning) systems. In order to realize energy conservation of HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach with a physical and a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for load prediction model learning is poor, (2) it has a self-checking function, which always supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect if something is wrong in the load prediction procedure, and (3) it is able to adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement of load prediction performance is illustrated.
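
    The sketch below is a loose illustration of the physical/JIT hybrid idea only: a toy steady-state physical load model, a lazy (just-in-time) k-nearest-neighbour regressor trained on past operation data, a consistency check between the two predictions, and a simple blend. The model forms, the tolerance and all numbers are hypothetical and not taken from the record above.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(1)

        def physical_load(t_out, t_set=22.0, ua=1.5):
            """Toy steady-state physical model: load ~ UA * (setpoint - outdoor temp)."""
            return ua * (t_set - t_out)

        # Historical operation data (hypothetical): outdoor temperature -> measured load
        t_hist = rng.uniform(-5, 15, 300)
        load_hist = physical_load(t_hist) + rng.normal(0, 0.8, 300) + 0.3 * np.sin(t_hist)

        # Just-in-Time (lazy, local) model: no global fit, queries answered from neighbours
        jit = KNeighborsRegressor(n_neighbors=15, weights="distance").fit(
            t_hist.reshape(-1, 1), load_hist)

        t_query = np.array([[2.0]])
        jit_pred = jit.predict(t_query)[0]
        phys_pred = physical_load(t_query[0, 0])

        # Self-check: flag inconsistency between data-driven and physics-based predictions
        if abs(jit_pred - phys_pred) > 3.0:   # hypothetical tolerance
            print("warning: JIT and physical predictions disagree; inspect inputs/model")
        hybrid = 0.5 * jit_pred + 0.5 * phys_pred   # one possible blending choice
        print(f"JIT={jit_pred:.2f}, physical={phys_pred:.2f}, hybrid={hybrid:.2f}")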

  5. Going Multi-viral: Synthedemic Modelling of Internet-based Spreading Phenomena

    Directory of Open Access Journals (Sweden)

    Marily Nika

    2015-02-01

    Full Text Available Epidemics of a biological and technological nature pervade modern life. For centuries, scientific research focused on biological epidemics, with simple compartmental epidemiological models emerging as the dominant explanatory paradigm. Yet there has been limited translation of this effort to explain internet-based spreading phenomena. Indeed, single-epidemic models are inadequate to explain the multimodal nature of complex phenomena. In this paper we propose a novel paradigm for modelling internet-based spreading phenomena based on the composition of multiple compartmental epidemiological models. Our approach is inspired by Fourier analysis, but rather than trigonometric wave forms, our components are compartmental epidemiological models. We show results on simulated multiple epidemic data, swine flu data and BitTorrent downloads of a popular music artist. Our technique can characterise these multimodal data sets utilising a parsimonious number of subepidemic models.
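
    To illustrate the idea of composing sub-epidemic components, the sketch below fits a superposition of two logistic "subepidemic" waves to a synthetic multimodal cumulative series. The wave form, the choice of two components and all parameter values are illustrative assumptions, not the compartmental models used in the cited work.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, k, r, t0):
            """One subepidemic wave in cumulative form."""
            return k / (1.0 + np.exp(-r * (t - t0)))

        def two_waves(t, k1, r1, t01, k2, r2, t02):
            """Synthedemic-style composition: superposition of two subepidemic waves."""
            return logistic(t, k1, r1, t01) + logistic(t, k2, r2, t02)

        t = np.linspace(0, 100, 200)
        rng = np.random.default_rng(2)
        data = two_waves(t, 1000, 0.25, 30, 600, 0.2, 70) + rng.normal(0, 20, t.size)

        p0 = [800, 0.1, 25, 800, 0.1, 75]     # rough initial guesses
        popt, _ = curve_fit(two_waves, t, data, p0=p0, maxfev=20000)
        print("fitted subepidemic sizes:", popt[0], popt[3])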

  6. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.

  7. Modelling requirements for future assessments based on FEP analysis

    International Nuclear Information System (INIS)

    Locke, J.; Bailey, L.

    1998-01-01

    This report forms part of a suite of documents describing the Nirex model development programme. The programme is designed to provide a clear audit trail from the identification of significant features, events and processes (FEPs) to the models and modelling processes employed within a detailed safety assessment. A scenario approach to performance assessment has been adopted. It is proposed that potential evolutions of a deep geological radioactive waste repository can be represented by a base scenario and a number of variant scenarios. The base scenario is chosen to be broad-ranging and to represent the natural evolution of the repository system and its surrounding environment. The base scenario is defined to include all those FEPs that are certain to occur and those which are judged likely to occur for a significant period of the assessment timescale. The structuring of FEPs on a Master Directed Diagram (MDD) provides a systematic framework for identifying those FEPs that form part of the natural evolution of the system and those, which may define alternative potential evolutions of the repository system. In order to construct a description of the base scenario, FEPs have been grouped into a series of conceptual models. Conceptual models are groups of FEPs, identified from the MDD, representing a specific component or process within the disposal system. It has been found appropriate to define conceptual models in terms of the three main components of the disposal system: the repository engineered system, the surrounding geosphere and the biosphere. For each of these components, conceptual models provide a description of the relevant subsystem in terms of its initial characteristics, subsequent evolution and the processes affecting radionuclide transport for the groundwater and gas pathways. The aim of this document is to present the methodology that has been developed for deriving modelling requirements and to illustrate the application of the methodology by

  8. Modelling temporal variance of component temperatures and directional anisotropy over vegetated canopy

    Science.gov (United States)

    Bian, Zunjian; du, yongming; li, hua

    2016-04-01

    Land surface temperature (LST) is a key variable that plays an important role in hydrological, meteorological and climatological studies. Thermal infrared directional anisotropy is one of the essential factors in LST retrieval and in the estimation of longwave radiance. Many approaches have been proposed to estimate directional brightness temperatures (DBT) over natural and urban surfaces, but less effort has been devoted to 3-D scenes, and the surface component temperatures used in DBT models are quite difficult to acquire. Therefore, a combination of the 3-D TRGM (Thermal-region Radiosity-Graphics combined Model) and an energy-balance method is proposed in this paper to simulate component temperatures and DBT simultaneously in a row-planted canopy. The surface thermodynamic equilibrium is finally determined by iterating between TRGM and the energy-balance method. The combined model was validated against top-of-canopy DBTs from airborne observations. The results indicated that the proposed model performs well in simulating directional anisotropy, especially the hotspot effect. Although the model overestimates the DBT with a bias of 1.2 K, it can serve as a data reference for studying the temporal variance of component temperatures and DBTs when field measurements are inaccessible.

  9. An agent-based approach to model land-use change at a regional scale

    NARCIS (Netherlands)

    Valbuena, D.F.; Verburg, P.H.; Bregt, A.K.; Ligtenberg, A.

    2010-01-01

    Land-use/cover change (LUCC) is a complex process that includes actors and factors at different social and spatial levels. A common approach to analyse and simulate LUCC as the result of individual decisions is agent-based modelling (ABM). However, ABM is often applied to simulate processes at local

  10. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    International Nuclear Information System (INIS)

    Yu, P.

    2008-01-01

    More recently, the advanced synchrotron radiation-based bioanalytical technique (SRFTIRM) has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (which is millions of times brighter than sunlight), is capable of exploring biomaterials at the molecular and cellular levels. However, with the synchrotron SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques: (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multicomponent modeling methods: (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to create molecular spectral corrections by including not just one intensity or frequency point of a molecular spectrum, but by utilizing the entire spectral information. Gaussian and Lorentzian modeling techniques are able to quantify spectral component peaks of molecular structure, functional group and biopolymer. By applying these four methods (the two multivariate techniques and Gaussian and Lorentzian modeling), inherent molecular structures, functional group and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.
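
    As a minimal illustration of multi-component peak modeling of a spectrum, the sketch below fits the sum of one Gaussian and one Lorentzian band to a synthetic spectrum with a nonlinear least-squares routine. The band positions, widths and the wavenumber axis are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(x, a, mu, sigma):
            return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

        def lorentzian(x, a, mu, gamma):
            return a * gamma**2 / ((x - mu) ** 2 + gamma**2)

        def spectrum(x, a1, mu1, s1, a2, mu2, g2):
            """Two-component peak model: one Gaussian plus one Lorentzian band."""
            return gaussian(x, a1, mu1, s1) + lorentzian(x, a2, mu2, g2)

        # Synthetic spectrum over an arbitrary wavenumber axis (illustrative only)
        x = np.linspace(1500, 1700, 400)
        rng = np.random.default_rng(3)
        y = spectrum(x, 1.0, 1550, 10, 0.6, 1650, 8) + rng.normal(0, 0.02, x.size)

        p0 = [0.8, 1545, 12, 0.5, 1645, 10]   # rough initial guesses for the peak parameters
        popt, _ = curve_fit(spectrum, x, y, p0=p0)
        print("fitted peak centres:", popt[1], popt[4])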

  11. Model-based approach for cyber-physical attack detection in water distribution systems.

    Science.gov (United States)

    Housh, Mashor; Ohar, Ziv

    2018-08-01

    Modern Water Distribution Systems (WDSs) are often controlled by Supervisory Control and Data Acquisition (SCADA) systems and Programmable Logic Controllers (PLCs) which manage their operation and maintain a reliable water supply. As such, and with the cyber layer becoming a central component of WDS operations, these systems are at a greater risk of being subjected to cyberattacks. This paper offers a model-based methodology based on a detailed hydraulic understanding of WDSs combined with an anomaly detection algorithm for the identification of complex cyberattacks that cannot be fully identified by hydraulically based rules alone. The results show that the proposed algorithm is capable of achieving the best-known performance when tested on the data published in the BATtle of the Attack Detection ALgorithms (BATADAL) competition (http://www.batadal.net). Copyright © 2018. Published by Elsevier Ltd.

  12. Using A Model-Based Systems Engineering Approach For Exploration Medical System Development

    Science.gov (United States)

    Hanson, A.; Mindock, J.; McGuire, K.; Reilly, J.; Cerro, J.; Othon, W.; Rubin, D.; Urbina, M.; Canga, M.

    2017-01-01

    NASA's Human Research Program's Exploration Medical Capabilities (ExMC) element is defining the medical system needs for exploration class missions. ExMC's Systems Engineering (SE) team will play a critical role in successful design and implementation of the medical system into exploration vehicles. The team's mission is to "Define, develop, validate, and manage the technical system design needed to implement exploration medical capabilities for Mars and test the design in a progression of proving grounds." Development of the medical system is being conducted in parallel with exploration mission architecture and vehicle design development. Successful implementation of the medical system in this environment will require a robust systems engineering approach to enable technical communication across communities to create a common mental model of the emergent engineering and medical systems. Model-Based Systems Engineering (MBSE) improves shared understanding of system needs and constraints between stakeholders and offers a common language for analysis. The ExMC SE team is using MBSE techniques to define operational needs, decompose requirements and architecture, and identify medical capabilities needed to support human exploration. Systems Modeling Language (SysML) is the specific language the SE team is utilizing, within an MBSE approach, to model the medical system functional needs, requirements, and architecture. Modeling methods are being developed through the practice of MBSE within the team, and tools are being selected to support meta-data exchange as integration points to other system models are identified. Use of MBSE is supporting the development of relationships across disciplines and NASA Centers to build trust and enable teamwork, enhance visibility of team goals, foster a culture of unbiased learning and serving, and be responsive to customer needs. The MBSE approach to medical system design offers a paradigm shift toward greater integration between

  13. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Directory of Open Access Journals (Sweden)

    Marko Budinich

    Full Text Available Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Following models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.

  14. A service based component model for composing and exploring MPSoC platforms

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan

    2008-01-01

    This paper presents an abstract service based modelling method for use in performance estimation and design space exploration of Multi Processor System On Chip (MPSoC) based systems. The method provides the infrastructure for composing abstract hardware and software models of stream based systems, which can be used to produce detailed quantitative information regarding runtime properties of a given system through simulations. The method is based on a service oriented model of computation which is a modified version of Hierarchical Coloured Petri Nets.

  15. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  16. Modeling spatial navigation in the presence of dynamic obstacles: a differential games approach.

    Science.gov (United States)

    Darekar, Anuja; Goussev, Valery; McFadyen, Bradford J; Lamontagne, Anouk; Fung, Joyce

    2018-03-01

    Obstacle circumvention strategies can be shaped by the dynamic interaction of an individual (evader) and an obstacle (pursuer). We have developed a mathematical model with predictive and emergent components, using experimental data from seven healthy young adults walking toward a target while avoiding collision with a stationary or moving obstacle (approaching head-on, or diagonally 30° left or right) in a virtual environment. Two linear properties from the predictive component enable the evader to predict the minimum distance between itself and the obstacle at all times, including the future intersection of trajectories. The emergent component uses the classical differential games model to solve for an optimal circumvention while reaching the target, wherein the locomotor strategy is influenced by the obstacle, target, and the evader velocity. Both model components were fitted to a different set of experimental data obtained from five poststroke and healthy participants to derive the minimum predicted distance (predictive component) and obstacle influence dimensions (emergent component) during circumvention. Minimum predicted distance between evader and pursuer was kept constant when the evader was closest to the obstacle in all participants. Obstacle influence dimensions varied depending on obstacle approach condition and preferred side of circumvention, reflecting differences in locomotor strategies between poststroke and healthy individuals. Additionally, important associations between model outputs and observed experimental outcomes were found. The model, supported by experimental data, suggests that both predictive and emergent processes can shape obstacle circumvention strategies in healthy and poststroke individuals. NEW & NOTEWORTHY Obstacle circumvention during goal-directed locomotion is modeled with a new mathematical approach comprising both predictive and emergent elements. The major novelty is using differential games solutions to illustrate the

  17. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  18. Rule-based approach to cognitive modeling of real-time decision making

    International Nuclear Information System (INIS)

    Thorndyke, P.W.

    1982-01-01

    Recent developments in the fields of cognitive science and artificial intelligence have made possible the creation of a new class of models of complex human behavior. These models, referred to as either expert or knowledge-based systems, describe the high-level cognitive processing undertaken by a skilled human to perform a complex, largely mental, task. Expert systems have been developed to provide simulations of skilled performance of a variety of tasks. These include problems of data interpretation, system monitoring and fault isolation, prediction, planning, diagnosis, and design. In general, such systems strive to produce prescriptive (error-free) behavior, rather than model descriptively the typical human's errorful behavior. However, some research has sought to develop descriptive models of human behavior using the same theoretical frameworks adopted by expert systems builders. This paper presents an overview of this theoretical framework and modeling approach, and indicates the applicability of such models to the development of a model of control room operators in a nuclear power plant. Such a model could serve several beneficial functions in plant design, licensing, and operation

  19. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    International Nuclear Information System (INIS)

    Carl Stern; Martin Lee

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models

  20. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    CERN Document Server

    Carl-Stern

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models.

  1. Two-component network model in voice identification technologies

    Directory of Open Access Journals (Sweden)

    Edita K. Kuular

    2018-03-01

    Full Text Available Among the parameters of biometric systems with voice modalities that determine their effectiveness, the speed of identification and verification of a person has been emphasized, along with reliability and noise immunity. This parameter is especially sensitive when processing large-scale voice databases in real time. Many research studies in this area are aimed at developing new and improving existing algorithms for representing and processing voice records to ensure high performance of voice biometric systems. Here it seems promising to apply a modern approach based on the complex-network platform for solving massive problems with a large number of elements, taking their interrelationships into account. Some known works, for example, solve problems of analysis and recognition of faces from photographs by transforming the images into complex networks for subsequent processing by standard techniques. One of the first applications of complex networks to sound series (musical and speech analysis) is the description of frequency characteristics by constructing network models, i.e., converting the series into networks. On the network ontology platform, a previously proposed technique for representing audio information, aimed at its automatic analysis and speaker recognition, has been developed. This implies converting the information into an associative semantic (cognitive) network structure with both amplitude and frequency components. Two speaker exemplars have been recorded and transformed into the corresponding networks, and their topological metrics have then been compared. The set of topological metrics for each of the network models (amplitude and frequency) forms a vector, and together these vectors combine into a matrix serving as a digital "network" voiceprint. The proposed network approach, with its sensitivity to personal conditions (physiological, psychological, emotional), might be useful not only for person identification

  2. QAM: PROPOSED MODEL FOR QUALITY ASSURANCE IN CBSS

    Directory of Open Access Journals (Sweden)

    Latika Kharb

    2015-08-01

    Full Text Available Component-based software engineering (CBSE)/Component-Based Development (CBD) lays emphasis on decomposition of the engineered systems into functional or logical components with well-defined interfaces used for communication across the components. The component-based software development approach is based on the idea of developing software systems by selecting appropriate off-the-shelf components and then assembling them with a well-defined software architecture. Because the new software development paradigm is much different from the traditional approach, quality assurance for component-based software development is a new topic in the software engineering research community. Because component-based software systems are developed on an underlying process different from that of traditional software, their quality assurance model should address both the process of the components and the process of the overall system. Quality assurance for component-based software systems during the life cycle is used to analyze the components for achievement of high-quality component-based software systems. Although some quality assurance techniques and the component-based approach to software engineering have been studied, there is still no clear and well-defined standard or guidelines for component-based software systems. Therefore, identification of quality assurance characteristics, quality assurance models, quality assurance tools and quality assurance metrics is urgently needed. As a major contribution of this paper, I have proposed QAM: a Quality Assurance Model for component-based software development, which covers component requirement analysis, component development, component certification, component architecture design, integration, testing, and maintenance.

  3. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).

  4. A social marketing approach to implementing evidence-based practice in VHA QUERI: the TIDES depression collaborative care model

    Science.gov (United States)

    2009-01-01

    Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. Described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. TIDES social marketing approach: The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Results: Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Discussion and conclusion: Development, execution and evaluation of the TIDES marketing effort shows that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems. PMID:19785754

  5. Structural Reliability Methods for Wind Power Converter System Component Reliability Assessment

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Wind power converter systems are essential subsystems in both off-shore and on-shore wind turbines. It is the main interface between generator and grid connection. This system is affected by numerous stresses where the main contributors might be defined as vibration and temperature loadings. The temperature variations induce time-varying stresses and thereby fatigue loads. A probabilistic model is used to model fatigue failure for an electrical component in the power converter system. This model is based on a linear damage accumulation and physics of failure approaches, where a failure criterion is defined by the threshold model. The attention is focused on crack propagation in solder joints of electrical components due to the temperature loadings. Structural Reliability approaches are used to incorporate model, physical and statistical uncertainties. Reliability estimation by means of structural...
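
    The sketch below illustrates only the generic ingredients named above (linear damage accumulation with a threshold failure criterion, evaluated under uncertainty by Monte Carlo), not the structural reliability formulation of the cited work. The fatigue-curve form, the scatter distributions and all numbers are hypothetical.

        import numpy as np

        rng = np.random.default_rng(4)
        n_samples = 5_000                    # Monte Carlo samples over load and material scatter
        years, cycles_per_year = 20, 365     # one dominant thermal cycle per day (assumption)
        damage_threshold = 1.0               # Miner-type threshold failure criterion

        failures = 0
        for _ in range(n_samples):
            # Unit-to-unit scatter of the fatigue-curve coefficient (hypothetical lognormal)
            a = rng.lognormal(mean=np.log(1.6e7), sigma=0.4)
            # Random daily solder-joint temperature swings (K); parameters are illustrative
            swings = rng.lognormal(mean=np.log(40.0), sigma=0.25,
                                   size=years * cycles_per_year)
            n_f = a * swings ** -2.0                 # cycles to failure per swing amplitude
            damage = np.sum(1.0 / n_f)               # linear (Miner) damage accumulation
            failures += damage >= damage_threshold

        print(f"estimated {years}-year failure probability: {failures / n_samples:.3f}")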

  6. Temporal gravity field modeling based on least square collocation with short-arc approach

    Science.gov (United States)

    ran, jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to derive the Earth's gravity field, and two main objectives are discussed. Firstly, we seek the optimal method to estimate the accelerometer parameters, and secondly, we intend to recover the monthly gravity model based on the least-squares collocation method. This method has received less attention than the least-squares adjustment method because of its massive computational requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters provide valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced based on the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed from the Geoforschungszentrum (GFZ), the Center for Space Research (CSR) and the NASA Jet Propulsion Laboratory (JPL). In terms of equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.

  7. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

    Full Text Available Because of the interlinking of process equipment in the process industry, event information may propagate through the plant and affect many downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They are not only helpful for finding the root cause when a plant-wide disturbance occurs, but also for revealing the evolution of an abnormal event propagating through the plant. This paper addresses the information-flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. Then, a general fault prognosis strategy is developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and has achieved satisfying results in predicting the frequently occurring “nitrogen-block” fault.
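
    As a rough illustration of time-delayed mutual information for delay estimation, the sketch below scans candidate lags and picks the one that maximizes a histogram-based mutual-information estimate between two signals. The signal model, bin count and lag range are illustrative assumptions, not the estimator used in the cited work.

        import numpy as np

        def mutual_information(x, y, bins=16):
            """Histogram-based mutual information estimate (in nats)."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        def estimate_delay(x, y, max_lag=40):
            """Lag of y relative to x that maximizes time-delayed mutual information."""
            tdmi = [mutual_information(x[:-k], y[k:]) for k in range(1, max_lag + 1)]
            return int(np.argmax(tdmi)) + 1

        # Synthetic example: downstream variable y responds nonlinearly to upstream x
        # with a 12-sample propagation delay (all signal parameters are illustrative)
        rng = np.random.default_rng(5)
        x = np.convolve(rng.normal(size=2100), np.ones(15) / 15, mode="valid")[:2000]
        y = np.roll(x, 12) ** 2 + rng.normal(0, 0.002, 2000)
        print("estimated propagation delay:", estimate_delay(x, y))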

  8. The "proactive" model of learning: Integrative framework for model-free and model-based reinforcement learning utilizing the associative learning-based proactive brain concept.

    Science.gov (United States)

    Zsuga, Judit; Biro, Klara; Papp, Csaba; Tajti, Gabor; Gesztelyi, Rudolf

    2016-02-01

    Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by the use of a scalar reward signal, with learning taking place if expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC). The model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA) and the ventral striatum (VS). Based on the functional connectivity of the VS, model-free and model-based RL systems center on the VS, which computes value by integrating model-free signals (received as reward prediction error) and model-based reward-related input. Using the concept of the reinforcement learning agent, we propose that the VS serves as the value function component of the RL agent. Regarding the model utilized for model-based computations, we turn to the proactive brain concept, which offers a ubiquitous function for the default network based on its great functional overlap with contextual associative areas. Hence, by means of the default network the brain continuously organizes its environment into context frames, enabling the formulation of analogy-based associations that are turned into predictions of what to expect. The OFC integrates reward-related information into context frames upon computing reward expectation by compiling stimulus-reward and context-reward information offered by the amygdala and hippocampus, respectively. Furthermore, we suggest that the integration of model-based expectations regarding reward into the value signal is further supported by the efferents of the OFC that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS). (c) 2016 APA, all rights reserved).

  9. A model-based approach to identify binding sites in CLIP-Seq data.

    Directory of Open Access Journals (Sweden)

    Tao Wang

    Full Text Available Cross-linking immunoprecipitation coupled with high-throughput sequencing (CLIP-Seq) has made it possible to identify the targeting sites of RNA-binding proteins in various cell culture systems and tissue types on a genome-wide scale. Here we present a novel model-based approach (MiClip) to identify high-confidence protein-RNA binding sites from CLIP-seq datasets. This approach assigns a probability score for each potential binding site to help prioritize subsequent validation experiments. The MiClip algorithm has been tested in both HITS-CLIP and PAR-CLIP datasets. In the HITS-CLIP dataset, the signal/noise ratios of miRNA seed motif enrichment produced by the MiClip approach are between 17% and 301% higher than those by the ad hoc method for the top 10 most enriched miRNAs. In the PAR-CLIP dataset, the MiClip approach can identify ∼50% more validated binding targets than the original ad hoc method and two recently published methods. To facilitate the application of the algorithm, we have released an R package, MiClip (http://cran.r-project.org/web/packages/MiClip/index.html), and a public web-based graphical user interface software (http://galaxy.qbrc.org/tool_runner?tool_id=mi_clip) for customized analysis.

  10. An open, component-based information infrastructure for integrated health information networks.

    Science.gov (United States)

    Tsiknakis, Manolis; Katehakis, Dimitrios G; Orphanoudakis, Stelios C

    2002-12-18

    A fundamental requirement for achieving continuity of care is the seamless sharing of multimedia clinical information. Different technological approaches can be adopted for enabling the communication and sharing of health record segments. In the context of the emerging global information society, the creation of and access to the integrated electronic health record (I-EHR) of a citizen has been assigned high priority in many countries. This requirement is complementary to an overall requirement for the creation of a health information infrastructure (HII) to support the provision of a variety of health telematics and e-health services. In developing a regional or national HII, the components or building blocks that make up the overall information system ought to be defined and an appropriate component architecture specified. This paper discusses current international priorities and trends in developing the HII. It presents technological challenges and alternative approaches towards the creation of an I-EHR, being the aggregation of health data created during all interactions of an individual with the healthcare system. It also presents results from an ongoing Research and Development (R&D) effort towards the implementation of the HII in HYGEIAnet, the regional health information network of Crete, Greece, using a component-based software engineering approach. Critical design decisions and related trade-offs, involved in the process of component specification and development, are also discussed and the current state of development of an I-EHR service is presented. Finally, Human Computer Interaction (HCI) and security issues, which are important for the deployment and use of any I-EHR service, are considered.

  11. A Knowledge Model Sharing Based Approach to Privacy-Preserving Data Mining

    OpenAIRE

    Hongwei Tian; Weining Zhang; Shouhuai Xu; Patrick Sharkey

    2012-01-01

    Privacy-preserving data mining (PPDM) is an important problem and is currently studied in three approaches: the cryptographic approach, the data publishing, and the model publishing. However, each of these approaches has some problems. The cryptographic approach does not protect privacy of learned knowledge models and may have performance and scalability issues. The data publishing, although is popular, may suffer from too much utility loss for certain types of data mining applications. The m...

  12. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    Directory of Open Access Journals (Sweden)

    Samreen Laghari

    Full Text Available Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a Complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: Primarily CABC-based modeling approach such as using Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT. Secondly, the specific problem of managing the Carbon footprint can be solved using a multiagent system approach.

  13. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    Science.gov (United States)

    Laghari, Samreen; Niazi, Muaz A

    2016-01-01

    Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a Complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: Primarily CABC-based modeling approach such as using Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT. Secondly, the specific problem of managing the Carbon footprint can be solved using a multiagent system approach.

  14. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy

    International Nuclear Information System (INIS)

    Jesse, Stephen; Kalinin, Sergei V

    2009-01-01

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
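
    The following sketch illustrates the basic PCA step described above for a spectroscopic-imaging datacube: unfold the cube into a pixels-by-spectrum matrix, compute components via SVD, and reshape the per-pixel scores into component maps. The synthetic cube and all dimensions are illustrative.

        import numpy as np

        def pca_spectral_imaging(cube, n_components=3):
            """PCA of a spectroscopic-imaging datacube of shape (ny, nx, n_spectrum).
            Returns component maps, eigenspectra and explained-variance ratios."""
            ny, nx, ns = cube.shape
            X = cube.reshape(ny * nx, ns)
            X = X - X.mean(axis=0)                           # centre each spectral channel
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            scores = U[:, :n_components] * s[:n_components]  # per-pixel component weights
            maps = scores.reshape(ny, nx, n_components)
            explained = s**2 / np.sum(s**2)
            return maps, Vt[:n_components], explained[:n_components]

        # Synthetic datacube: two spatial patterns mixing two spectral responses plus noise
        rng = np.random.default_rng(6)
        ny, nx, ns = 32, 32, 128
        yy, xx = np.mgrid[0:ny, 0:nx]
        spec1 = np.exp(-0.5 * ((np.arange(ns) - 40) / 6.0) ** 2)
        spec2 = np.exp(-0.5 * ((np.arange(ns) - 90) / 10.0) ** 2)
        cube = (np.sin(xx / 5.0)[..., None] * spec1 + (yy / ny)[..., None] * spec2
                + rng.normal(0, 0.05, (ny, nx, ns)))
        maps, eigenspectra, ratio = pca_spectral_imaging(cube)
        print("explained variance ratios:", np.round(ratio, 3))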

  15. Statistical approach for uncertainty quantification of experimental modal model parameters

    DEFF Research Database (Denmark)

    Luczak, M.; Peeters, B.; Kahsin, M.

    2014-01-01

    Composite materials are widely used in the manufacture of aerospace and wind energy structural components. These load-carrying structures are subjected to dynamic time-varying loading conditions. Robust structural dynamics identification procedures impose tight constraints on the quality of modal models. This paper aims at a systematic approach for uncertainty quantification of the parameters of the modal models estimated from experimentally obtained data. Statistical analysis of modal parameters is implemented to derive an assessment of the entire modal model uncertainty measure. Investigated structures represent different complexity levels ranging from coupon, through sub-component, up to fully assembled aerospace and wind energy structural components made of composite materials. The proposed method is demonstrated on two application cases of a small and a large wind turbine blade.

  16. Model-based magnetization retrieval from holographic phase images

    Energy Technology Data Exchange (ETDEWEB)

    Röder, Falk, E-mail: f.roeder@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Vogel, Karin [Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Wolf, Daniel [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Hellwig, Olav [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); AG Magnetische Funktionsmaterialien, Institut für Physik, Technische Universität Chemnitz, D-09126 Chemnitz (Germany); HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wee, Sung Hun [HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wicht, Sebastian; Rellinghaus, Bernd [IFW Dresden, Institute for Metallic Materials, P.O. Box 270116, D-01171 Dresden (Germany)

    2017-05-15

    The phase shift of the electron wave is a useful measure for the projected magnetic flux density of magnetic objects at the nanometer scale. More important for materials science, however, is the knowledge about the magnetization in a magnetic nano-structure. As demonstrated here, a dominating presence of stray fields prohibits a direct interpretation of the phase in terms of magnetization modulus and direction. We therefore present a model-based approach for retrieving the magnetization by considering the projected shape of the nano-structure and assuming a homogeneous magnetization therein. We apply this method to FePt nano-islands epitaxially grown on a SrTiO3 substrate, which indicates an inclination of their magnetization direction relative to the structural easy magnetic [001] axis. By means of this real-world example, we discuss prospects and limits of this approach. - Highlights: • Retrieval of the magnetization from holographic phase images. • Magnetostatic model constructed for a magnetic nano-structure. • Decomposition into homogeneously magnetized components. • Discretization of each component by elementary cuboids. • Analytic solution for the phase of a magnetized cuboid considered. • Fitting a set of magnetization vectors to experimental phase images.

  17. Power Transformer Differential Protection Based on Neural Network Principal Component Analysis, Harmonic Restraint and Park's Plots

    OpenAIRE

    Tripathy, Manoj

    2012-01-01

    This paper describes a new approach for power transformer differential protection which is based on the wave-shape recognition technique. An algorithm based on neural network principal component analysis (NNPCA) with back-propagation learning is proposed for digital differential protection of power transformer. The principal component analysis is used to preprocess the data from power system in order to eliminate redundant information and enhance hidden pattern of differential current to disc...

  18. Cyclic plasticity and lifetime of the nickel-based Alloy C-263: Experiments, models and component simulation

    Directory of Open Access Journals (Sweden)

    Maier G.

    2014-01-01

    Full Text Available The present work deals with the thermomechanical fatigue and low-cycle fatigue behavior of C-263 in two different material conditions. Microstructural characteristics and fracture modes are investigated with light and electron microscopy. The experimental results indicate that viscoplastic deformations depend on the heat treatment or rather on the current state of the microstructure. The measured data are used to adjust the parameters of a Chaboche type model and a fracture-mechanics based model for fatigue lifetime prediction. The Chaboche model is able to describe the essential phenomena of time and temperature dependent cyclic plasticity including the complex cyclic hardening during thermo-cyclic loading of both material conditions with a unique set of material parameters. This could be achieved by including an additional internal variable into the Chaboche model which accounts for changes in the precipitation microstructure during high temperature loading. Furthermore, the proposed lifetime model is well suited for a common fatigue life prediction of both investigated heats. The deformation and lifetime models are implemented into a user defined material routine. In this work, the material routine is applied for the lifetime prediction of a critical power plant component using the finite element method.

  19. A Hybrid PCA-CART-MARS-Based Prognostic Approach of the Remaining Useful Life for Aircraft Engines

    Directory of Open Access Journals (Sweden)

    Fernando Sánchez Lasheras

    2015-03-01

    Full Text Available Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines.
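
    The sketch below shows only the PCA-plus-regression-tree portion of such a pipeline on synthetic degradation data; the MARS stage is omitted because it is not part of scikit-learn (it would require an extra package such as py-earth). All features, dimensions and values are hypothetical.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(7)

        # Synthetic multi-sensor snapshots (temperature, pressure, speed, ...) with a hidden
        # degradation level that determines the remaining useful life (RUL), in cycles
        n, n_sensors = 2000, 12
        degradation = rng.uniform(0, 1, n)
        sensors = (np.outer(degradation, rng.normal(1.0, 0.3, n_sensors))
                   + rng.normal(0, 0.05, (n, n_sensors)))
        rul = 300 * (1 - degradation) + rng.normal(0, 5, n)

        X_tr, X_te, y_tr, y_te = train_test_split(sensors, rul, random_state=0)

        # PCA compresses the correlated sensor channels; a CART tree maps the scores to RUL
        model = make_pipeline(PCA(n_components=3), DecisionTreeRegressor(max_depth=6))
        model.fit(X_tr, y_tr)
        print("R^2 on held-out engines:", round(model.score(X_te, y_te), 3))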

  20. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as in the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses an approach which is a wavelet decomposition based principal component analysis for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his or her facial gestures, and the method bears some resemblance to factor analysis, i.e., the extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains, where frequency and time are used interchangeably. The experimental results suggest that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
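
    A loose sketch of the combination described above follows: a 2-D wavelet decomposition reduces each image to its low-frequency approximation subband, PCA is applied to those features, and a probe is matched by nearest neighbour in the reduced space. PyWavelets (pywt) is assumed to be available, and a random stand-in "gallery" replaces a real face database.

        import numpy as np
        import pywt  # PyWavelets, assumed available for the 2-D wavelet decomposition

        def wavelet_features(img, wavelet="haar", level=2):
            """Keep only the low-frequency approximation subband as the feature vector."""
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            return coeffs[0].ravel()

        def fit_pca(features, n_components=20):
            """Plain PCA via SVD of the centred feature matrix."""
            mean = features.mean(axis=0)
            U, s, Vt = np.linalg.svd(features - mean, full_matrices=False)
            return mean, Vt[:n_components]

        # Stand-in "gallery" of 64x64 images (random data; a real run would load a face set)
        rng = np.random.default_rng(8)
        gallery = rng.normal(size=(40, 64, 64))
        feats = np.array([wavelet_features(im) for im in gallery])
        mean, components = fit_pca(feats)
        gallery_proj = (feats - mean) @ components.T

        # Identify a noisy probe image by nearest neighbour in the reduced PCA space
        probe = gallery[7] + rng.normal(0, 0.1, (64, 64))
        probe_proj = (wavelet_features(probe) - mean) @ components.T
        match = int(np.argmin(np.linalg.norm(gallery_proj - probe_proj, axis=1)))
        print("matched identity:", match)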