WorldWideScience

Sample records for objects components models

  1. Connected Component Model for Multi-Object Tracking.

    Science.gov (United States)

    He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan

    2016-08-01

    In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from only two adjacent frames. Since straightforwardly obtaining data associations from multiple frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem either by developing complicated approximate algorithms or by simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is an equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
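
    The partitioning idea in this abstract can be illustrated with a small, hypothetical sketch (Python, not the authors' code): treat observations as nodes, candidate associations as edges, and use a union-find structure to split the association graph into independent connected components, each of which becomes a separate assignment subproblem.

```python
# Sketch: partition pairwise association candidates into independent
# subproblems via connected components (union-find). The observation IDs
# and candidate pairs below are purely illustrative.

def find(parent, x):
    # Path-compressing find.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def connected_components(observations, candidate_pairs):
    """Group observations that are linked by any association candidate."""
    parent = {o: o for o in observations}
    for a, b in candidate_pairs:
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb                  # union
    groups = {}
    for o in observations:
        groups.setdefault(find(parent, o), []).append(o)
    return list(groups.values())

if __name__ == "__main__":
    obs = ["f1_a", "f1_b", "f2_a", "f2_b", "f3_a"]
    pairs = [("f1_a", "f2_a"), ("f2_a", "f3_a"), ("f1_b", "f2_b")]
    # Each component can now be solved as a small, independent assignment
    # problem instead of one large multi-dimensional assignment problem.
    print(connected_components(obs, pairs))
```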

  2. Repurposing learning object components

    NARCIS (Netherlands)

    Verbert, K.; Jovanovic, J.; Gasevic, D.; Duval, E.; Meersman, R.

    2005-01-01

    This paper presents an ontology-based framework for repurposing learning object components. Unlike the usual practice where learning object components are assembled manually, the proposed framework enables on-the-fly access and repurposing of learning object components. The framework supports two

  3. Modelling with Relational Calculus of Object and Component Systems - rCOS

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Hannousse, Abdel Hakim; Hung, Dang Van

    2008-01-01

    This chapter presents a formalization of functional and behavioural requirements, and a refinement of requirements to a design for CoCoME using the Relational Calculus of Object and Component Systems (rCOS). We give a model of requirements based on an abstraction of the use cases described in Chapter 3.2. Then the refinement calculus of rCOS is used to derive design models corresponding to the top-level designs of Chapter 3.4. We demonstrate how rCOS supports modelling different views of the system and their relationships, and the separation of concerns in the development.

  4. A new multi-objective optimization model for preventive maintenance and replacement scheduling of multi-component systems

    Science.gov (United States)

    Moghaddam, Kamran S.; Usher, John S.

    2011-07-01

    In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally-sized periods in which three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system that simultaneously minimizes the total cost and maximizes the overall system reliability over the planning horizon. Because of the complex, combinatorial and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The Pareto optimal solutions that provide good tradeoffs between the total cost and the overall reliability of the system can be obtained by the solution approach. Such a modeling approach should be useful for maintenance planners and engineers tasked with the problem of developing recommended maintenance plans for complex systems of components.
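
    As a rough illustration of the two objectives, and not the authors' GA/SA implementation, the sketch below scores randomly generated maintenance plans on total cost and system reliability and keeps the non-dominated (Pareto) ones; the action costs, hazard model, and series-system assumption are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_COMPONENTS, N_PERIODS = 3, 4
ACTIONS = ("nothing", "maintain", "replace")
COST = {"nothing": 0.0, "maintain": 2.0, "replace": 5.0}   # illustrative costs

def evaluate(plan, base_hazard=0.05):
    """Return (total cost, system reliability) for one plan, where
    plan[c][t] is the action taken on component c in period t."""
    total_cost, system_rel = 0.0, 1.0
    for comp_actions in plan:
        age, comp_rel = 0, 1.0
        for action in comp_actions:
            total_cost += COST[action]
            if action == "replace":
                age = 0                        # good-as-new
            elif action == "maintain":
                age = max(0, age - 1)          # imperfect maintenance
            comp_rel *= np.exp(-base_hazard * (1 + age))   # survive this period
            age += 1
        system_rel *= comp_rel                 # series-system assumption
    return total_cost, system_rel

def pareto(points):
    """Keep the non-dominated (cost, reliability) pairs."""
    keep = []
    for cost, rel in points:
        dominated = any(c <= cost and r >= rel and (c, r) != (cost, rel)
                        for c, r in points)
        if not dominated:
            keep.append((cost, rel))
    return sorted(set(keep))

plans = [tuple(tuple(rng.choice(ACTIONS) for _ in range(N_PERIODS))
               for _ in range(N_COMPONENTS)) for _ in range(500)]
print(pareto([evaluate(p) for p in plans])[:5])
```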

  5. Object-oriented biomedical system modelling--the language.

    Science.gov (United States)

    Hakman, M; Groth, T

    1999-11-01

    The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input, output and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal language syntax and semantic description.

  6. Layout design of user interface components with multiple objectives

    Directory of Open Access Journals (Sweden)

    Peer S.K.

    2004-01-01

    A multi-goal layout problem may be formulated as a Quadratic Assignment model, considering multiple goals (or factors), both qualitative and quantitative, in the objective function. The facilities layout problem, in general, ranges from the location and layout of facilities in a manufacturing plant to the location and layout of textual and graphical user interface components in the human–computer interface. In this paper, we propose two alternative mathematical approaches to the single-objective layout model. The first presents a multi-goal user interface component layout problem, considering the distance-weighted sum of congruent objectives of closeness relationships and interactions. The second considers the distance-weighted sum of congruent objectives of normalized weighted closeness relationships and normalized weighted interactions. The results of the first approach are compared with those of an existing single-objective model for the example task under consideration. Then, the results of the first and second approaches of the proposed model are compared for the example task under consideration.
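
    A minimal sketch of the kind of objective described here, assuming invented closeness, interaction, and distance matrices: the distance-weighted sum is evaluated for each assignment of components to locations, and the best permutation is kept by brute force since the example is tiny.

```python
import itertools
import numpy as np

# Illustrative data for 4 UI components and 4 candidate screen locations.
closeness = np.array([[0, 3, 1, 0],
                      [3, 0, 2, 1],
                      [1, 2, 0, 2],
                      [0, 1, 2, 0]], dtype=float)    # desired closeness
interaction = np.array([[0, 5, 0, 1],
                        [5, 0, 2, 0],
                        [0, 2, 0, 4],
                        [1, 0, 4, 0]], dtype=float)  # observed interactions
dist = np.array([[0, 1, 2, 3],
                 [1, 0, 1, 2],
                 [2, 1, 0, 1],
                 [3, 2, 1, 0]], dtype=float)         # distances between locations

def layout_cost(perm, w1=0.5, w2=0.5):
    """Distance-weighted sum of closeness and interaction objectives;
    perm[i] is the location assigned to component i."""
    cost = 0.0
    for i, j in itertools.product(range(len(perm)), repeat=2):
        cost += dist[perm[i], perm[j]] * (w1 * closeness[i, j] + w2 * interaction[i, j])
    return cost

best = min(itertools.permutations(range(4)), key=layout_cost)
print("best layout:", best, "cost:", layout_cost(best))
```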

  7. Reusing balanced power flow object components for developing harmonic power flow

    Energy Technology Data Exchange (ETDEWEB)

    Nadarajah, S. [Peninsular Malaysia Electric Utility Co., Kuala Lumpur (Malaysia). Tenaga Nasional Berhad; Nor, K.M.; Abdel-Akher, M. [Malaysia Univ., Kuala Lumpur (Malaysia). Dept. of Electrical Engineering

    2005-07-01

    Harmonic power flows are used to examine the effects of nonlinear loads on power systems. In this paper, component technology was re-used for the development of a harmonic power flow. The object-oriented power system model (OO-PSM) was developed separately from a solution algorithm. Nodes, lines, and transformers were modelled as entity objects by classes. Power flow solution algorithms were modelled as control objects and encapsulated inside independent software components within the power system component software architecture (PS-COM). Both the OO-PSM and the PS-COM of the balanced power flow were re-used for developing the proposed harmonic power flow. A no-interaction hypothesis was used to consider both fundamental voltages and nonlinear device data dependence. A direct solution voltage node method was also used. The accuracy of the method was demonstrated using IEEE 14 bus and 30 bus test systems. It was concluded that component technology can be used to develop harmonic power flow programs. 7 refs., 2 tabs., 9 figs.

  8. Applying of component system development in object methodology

    Directory of Open Access Journals (Sweden)

    Milan Mišovič

    2013-01-01

    software system and referred to as a software alliance. Both of the mentioned publications deliver a deep treatment of the issues relating to SWC/SWA, such as creating copies of components (cloning), the establishment and destruction of components at software run-time (dynamic reconfiguration), cooperation of autonomous components, and programmable management of component interfaces depending on internal component functionality and customer requirements (functionality, security, versioning). Nevertheless, even today we can find numerous cases of SWC/SWA with a highly developed architecture that accepts the vast majority of these requests. On the other hand, the development practice of component-based systems with a dynamic architecture (i.e. architecture with dynamic reconfiguration) and, finally, with a mobile architecture (i.e. architecture with dynamic component mobility) confirms the inadequacy of the design methods contained in UML 2.0; this is shown in particular by the dissertation thesis (Rych, Weis, 2008). Software engineering currently has two different approaches to SWC/SWA systems. The first approach is known as component-oriented software development, CBD (Component-Based Development). According to (Szyper, 2002) it is a collection of CBD methodologies that are heavily focused on the construction and reusability of software components within the architecture. Although CBD does not represent a highly theoretical approach, it is nevertheless classified within the general evolution of the SDP (Software Development Process, see (Sommer, 2010)) as one of its two dominant directions. From a structural point of view, a software system consists of self-contained, interoperable architectural units, components, based on well-defined interfaces. Classical procedural object-oriented methodologies largely do not use the component meta-models on which the target component systems are then formed. Component meta-models describe the syntax, semantics of

  9. SU-E-I-58: Objective Models of Breast Shape Undergoing Mammography and Tomosynthesis Using Principal Component Analysis.

    Science.gov (United States)

    Feng, Ssj; Sechopoulos, I

    2012-06-01

    To develop an objective model of the shape of the compressed breast undergoing mammographic or tomosynthesis acquisition. Automated thresholding and edge detection were performed on 984 anonymized digital mammograms (492 craniocaudal (CC) view mammograms and 492 medial lateral oblique (MLO) view mammograms) to extract the edge of each breast. Principal Component Analysis (PCA) was performed on these edge vectors to identify a limited set of parameters and eigenvectors that describe the variation in breast shape. These parameters and eigenvectors comprise a model that can be used to describe the breast shapes present in acquired mammograms and to generate realistic models of breasts undergoing acquisition. Sample breast shapes were then generated from this model and evaluated. The mammograms in the database were previously acquired for a separate study and authorized for use in further research. The PCA successfully identified two principal components and their corresponding eigenvectors, forming the basis for the breast shape model. The simulated breast shapes generated from the model are reasonable approximations of clinically acquired mammograms. Using PCA, we have obtained models of the compressed breast undergoing mammographic or tomosynthesis acquisition based on objective analysis of a large image database. Up to now, the breast in the CC view has been approximated as a semi-circular tube, while there has been no objectively-obtained model for the MLO view breast shape. Such models can be used for various breast imaging research applications, such as x-ray scatter estimation and correction, dosimetry estimates, and computer-aided detection and diagnosis. © 2012 American Association of Physicists in Medicine.
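
    The pipeline sketched in the abstract (edge vectors, then PCA, then a few shape parameters) can be approximated as follows with scikit-learn; the synthetic half-ellipse contours merely stand in for the extracted mammogram edges.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Stand-in for 492 extracted breast-edge contours, each resampled to
# 100 (x, y) points and flattened to a 200-element vector.
n_shapes, n_points = 492, 100
t = np.linspace(0, np.pi, n_points)
edges = []
for _ in range(n_shapes):
    a = 1.0 + 0.2 * rng.standard_normal()      # random half-ellipse axes
    b = 0.6 + 0.1 * rng.standard_normal()
    edges.append(np.concatenate([a * np.sin(t), b * np.cos(t)]))
edges = np.asarray(edges)

# Two principal components, as in the abstract above.
pca = PCA(n_components=2)
params = pca.fit_transform(edges)              # per-image shape parameters
print("explained variance:", pca.explained_variance_ratio_.sum())

# Reconstruct (or synthesize) a shape from a small parameter vector.
reconstructed = pca.inverse_transform(params[:1])[0]
print(reconstructed.shape)
```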

  10. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    Science.gov (United States)

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

    NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that the NCWin can easily extend the functions of some current GIS software and the Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.
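
    NCWin itself is a Windows COM component written in C++ Builder, but the NetCDF operations it wraps, reading a gridded variable, stepping through the map series, and pulling a per-pixel time trend, look roughly like this with the Python netCDF4 library; the file name and variable names are placeholders, not part of NCWin.

```python
from netCDF4 import Dataset
import numpy as np

# Hypothetical file and variable names, used only to illustrate the calls.
with Dataset("npp_monthly.nc") as ds:
    lats = ds.variables["lat"][:]
    lons = ds.variables["lon"][:]
    npp = ds.variables["npp"][:]               # assumed shape: (time, lat, lon)

# Grid-map series (what NCWin "plays" forward or backward): one 2-D frame per
# time step, here reduced to its spatial mean as a stand-in for display.
series_means = [np.asarray(npp[t]).mean() for t in range(npp.shape[0])]

# Temporal trend curve for a single map pixel, as in NCWin's pixel plot.
i, j = len(lats) // 2, len(lons) // 2
pixel_trend = np.asarray(npp[:, i, j])
print(len(series_means), pixel_trend.mean())
```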

  11. An object model for beamline descriptions

    International Nuclear Information System (INIS)

    Hill, B.W.; Martono, H.; Gillespie, J.S.

    1997-01-01

    Translation of beamline model descriptions between different accelerator codes presents a unique challenge due to the different representations used for various elements and subsystems. These differences range from simple units conversions to more complex translations involving multiple beamline components. A representation of basic accelerator components is being developed in order to define a meta-structure from which beamline models, in different codes, can be described and to facilitate the translation of models between these codes. Sublines of basic components will be used to represent more complex beamline descriptions and bridge the gap between codes which may represent a beamline element as a single entity, and those which use multiple elements to describe the same physical device. A C++ object model for supporting this beamline description and a grammar for describing beamlines in terms of these components is being developed. The object model will support a common graphic user interface and translation filters for representing native beamline descriptions for a variety of accelerator codes. An overview of our work on the object model for beamline descriptions is presented here. copyright 1997 American Institute of Physics

  12. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    Science.gov (United States)

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code.

  13. Hybrid PolyLingual Object Model: An Efficient and Seamless Integration of Java and Native Components on the Dalvik Virtual Machine

    Directory of Open Access Journals (Sweden)

    Yukun Huang

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code.

  14. Monitoring distributed object and component communication

    NARCIS (Netherlands)

    Diakov, N.K.

    2004-01-01

    This thesis presents our work in the area of monitoring distributed software applications (DSAs). We produce three main results: (1) a design approach for building monitoring systems, (2) a design of a system for MOnitoring Distributed Object and Component Communication (MODOCC) behavior in

  15. Thermoluminescence dosimetry of electronic components from personal objects

    International Nuclear Information System (INIS)

    Beerten, Koen; Woda, Clemens; Vanhavere, Filip

    2009-01-01

    Owing to the existence of ceramic materials inside common personal objects such as cellular phones and USB flash drives, these objects may be very useful in emergency (accident) dosimetry. Here we will present initial results regarding the dosimetric properties as determined by thermoluminescence (TL) from two alumina-rich electronic components from a USB flash drive. The TL method was applied in order to investigate the potential of conventional TL equipment for such purposes. For comparison, the optically stimulated luminescence (OSL) of the components was investigated as well. The studied components are ceramic resonators and alumina-based substrates from electrical resistors. The results show that various TL-related properties such as fading, optical stability and zero-dose response are different for the two investigated components. On the basis of these properties, the ceramic resonator was selected for dose recovery tests using TL and OSL. The given dose could reliably be determined using both methods, assuming that prompt measurement and/or fading correction is possible.

  16. Color Independent Components Based SIFT Descriptors for Object/Scene Classification

    Science.gov (United States)

    Ai, Dan-Ni; Han, Xian-Hua; Ruan, Xiang; Chen, Yen-Wei

    In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.
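
    A rough sketch of the CIC-SIFT idea, assuming scikit-learn's FastICA and OpenCV's SIFT (opencv-python 4.4 or later); the per-category learning of the transform and the classification stage from the paper are not reproduced, and the image path in the usage comment is a placeholder.

```python
import cv2
import numpy as np
from sklearn.decomposition import FastICA

def cic_sift_descriptors(bgr_image):
    """Sketch: learn a 3x3 ICA colour transform from the image's own pixels,
    then extract SIFT descriptors on each independent colour component."""
    pixels = bgr_image.reshape(-1, 3).astype(np.float64)
    ica = FastICA(n_components=3, random_state=0)
    components = ica.fit_transform(pixels).reshape(bgr_image.shape)

    sift = cv2.SIFT_create()
    descriptors = []
    for c in range(3):
        # Rescale each independent component to an 8-bit grayscale image.
        channel = cv2.normalize(components[..., c], None, 0, 255,
                                cv2.NORM_MINMAX).astype(np.uint8)
        _, desc = sift.detectAndCompute(channel, None)
        if desc is not None:
            descriptors.append(desc)
    return np.vstack(descriptors) if descriptors else np.empty((0, 128))

# Usage (path is a placeholder):
# img = cv2.imread("scene.jpg")
# desc = cic_sift_descriptors(img)
```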

  17. Component Composability Issues in Object-Oriented Programming

    NARCIS (Netherlands)

    Aksit, Mehmet; Tekinerdogan, B.

    1997-01-01

    Building software from reusable components is considered important in reducing development costs. Object-oriented languages such as C++, Smalltalk and Java, however, are not capable of expressing certain aspects of applications in a composable way. Software engineers may experience difficulties in

  18. Refinement and verification in component-based model-driven design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter

    2009-01-01

    Modern software development is complex as it has to deal with many different and yet related aspects of applications. In practical software engineering this is now handled by a UML-like modelling approach in which different aspects are modelled by different notations. Component-based and object-oriented ... be integrated in computer-aided software engineering (CASE) tools for adding formally supported checking, transformation and generation facilities.

  19. Applying of component system development in object methodology, case study

    Directory of Open Access Journals (Sweden)

    Milan Mišovič

    2013-01-01

    Creating computerization target software as a component system has been a very strong requirement for the last 20 years of software development. Architectural components are self-contained units, presenting not only partial and overall system behavior, but also cooperating with each other on the basis of their interfaces. Among other things, components have allowed flexible modification of the processes whose behavior is the foundation of component behavior, without changing the life of the component system. On the other hand, the component system makes it possible, at design time, to create numerous new connections between components and thus to create modified system behaviors. All of this enables company management to perform, at design time, the required behavioral changes of processes in accordance with the requirements of changing production and markets. The development of software, generally referred to as SDP (Software Development Process), contains two directions. The first one, called CBD (Component-Based Development), is dedicated to the development of component-based systems, CBS (Component-Based Systems); the second targets the development of software under the influence of SOA (Service-Oriented Architecture). Both directions are equipped with their own development methodologies. The subject of this paper is only the first direction and the application of the development of component-based systems in its object-oriented methodologies. The requirement of today is to carry out the development of component-based systems within established object-oriented methodologies, precisely as a dominant style. In some of the known methodologies, however, this development is not completely transparent and is not even recognized as dominant. In some cases, it is corrected by special meta-integration models of component system development into an object methodology. This paper presents a case study

  20. Principles of object-oriented modeling and simulation with Modelica 2.1

    CERN Document Server

    Fritzson, Peter

    2004-01-01

    A timely introduction to the latest modeling and simulation techniques. Object-oriented modeling is a fast-growing area of modeling and simulation that provides a structured, computer-supported way of doing mathematical and equation-based modeling. Modelica is today's most promising modeling language in that it effectively unifies and generalizes previous object-oriented modeling languages and provides a sound basis for the basic concepts. Principles of Object-Oriented Modeling and Simulation with Modelica 2.1 introduces the latest methods of object-oriented component-based system modeling and

  1. Apricot - An Object-Oriented Modeling Language for Hybrid Systems

    OpenAIRE

    Fang, Huixing; Zhu, Huibiao; Shi, Jianqi

    2013-01-01

    We propose Apricot as an object-oriented language for modeling hybrid systems. The language combines features of domain-specific and object-oriented languages and fills the gap between design and implementation; as a result, we put forward a modeling language with simple and distinct syntax, structure and semantics. In addition, we introduce the concept of design by convention into Apricot. As the characteristic of object-oriented and the component architecture in Apricot, we c...

  2. Using the object modeling system for hydrological model development and application

    Directory of Open Access Journals (Sweden)

    S. Kralisch

    2005-01-01

    State of the art challenges in sustainable management of water resources have created demand for integrated, flexible and easy to use hydrological models which are able to simulate the quantitative and qualitative aspects of the hydrological cycle with a sufficient degree of certainty. Existing models which have been developed to fit these needs are often constrained to specific scales or purposes and thus cannot be easily adapted to meet different challenges. As a solution for flexible and modularised model development and application, the Object Modeling System (OMS) has been developed in a joint approach by the USDA-ARS GPSRU (Fort Collins, CO, USA), USGS (Denver, CO, USA), and the FSU (Jena, Germany). The OMS provides a modern modelling framework which allows the implementation of single process components to be compiled and applied as custom tailored model assemblies. This paper describes basic principles of the OMS and its main components and explains in more detail how the problems during coupling of models or model components are solved inside the system. It highlights the integration of different spatial and temporal scales by their representation as spatial modelling entities embedded into time compound components. As an example the implementation of the hydrological model J2000 is discussed.

  3. Forging of metallic nano-objects for the fabrication of submicron-size components

    International Nuclear Information System (INIS)

    Roesler, J; Mukherji, D; Schock, K; Kleindiek, S

    2007-01-01

    In recent years, nanoscale fabrication has developed considerably, but the fabrication of free-standing nanosize components is still a great challenge. The fabrication of metallic nanocomponents utilizing three basic steps is demonstrated here. First, metallic alloys are used as factories to produce a metallic raw stock of nano-objects/nanoparticles in large numbers. These objects are then isolated from the powder containing thousands of such objects inside a scanning electron microscope using manipulators, and placed on a micro-anvil or a die. Finally, the shape of the individual nano-object is changed by nanoforging using a microhammer. In this way free-standing, high-strength, metallic nano-objects may be shaped into components with dimensions in the 100 nm range. By assembling such nanocomponents, high-performance microsystems can be fabricated, which are truly in the micrometre scale (the size ratio of a system to its component is typically 10:1)

  4. Independent Component Analysis in Multimedia Modeling

    DEFF Research Database (Denmark)

    Larsen, Jan

    2003-01-01

    Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling...

  5. Feature-based component model for design of embedded systems

    Science.gov (United States)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, which combines software's flexibility and hardware's real-time performance. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variation of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  6. An ontology for component-based models of water resource systems

    Science.gov (United States)

    Elag, Mostafa; Goodall, Jonathan L.

    2013-08-01

    Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
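
    To make the idea concrete, here is a toy OWL fragment built with rdflib; the namespace, class names, and properties are illustrative stand-ins, not the actual terms of the WRC ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

# Illustrative namespace; not the published WRC ontology IRI.
WRC = Namespace("http://example.org/wrc#")

g = Graph()
g.bind("wrc", WRC)
g.bind("owl", OWL)

# Core concepts: a model component and the variables it exchanges.
for cls in (WRC.ModelComponent, WRC.ExchangeItem, WRC.ModelingObjective):
    g.add((cls, RDF.type, OWL.Class))

g.add((WRC.hasInput, RDF.type, OWL.ObjectProperty))
g.add((WRC.hasInput, RDFS.domain, WRC.ModelComponent))
g.add((WRC.hasInput, RDFS.range, WRC.ExchangeItem))

# One individual: a runoff component that consumes precipitation.
g.add((WRC.RunoffComponent, RDF.type, WRC.ModelComponent))
g.add((WRC.Precipitation, RDF.type, WRC.ExchangeItem))
g.add((WRC.RunoffComponent, WRC.hasInput, WRC.Precipitation))
g.add((WRC.RunoffComponent, RDFS.label, Literal("Runoff component", lang="en")))

print(g.serialize(format="turtle"))
```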

  7. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects

    Science.gov (United States)

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

    The category of an image containing a visual object can be successfully recognized from single-trial electroencephalography (EEG) measured while subjects view images. Previous studies have shown that task-related information contained in event-related potential (ERP) components could discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and then Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from ERP components. Firstly, we compared the classification results using features from single ERP components, and identified that the N1 component achieved the highest classification accuracies. Secondly, we discriminated four categories of objects by combining features from multiple ERP components, and showed that the combination of ERP components improved four-category classification accuracies by utilizing the complementarity of discriminative information in ERP components. These findings confirmed that four categories of object images could be discriminated with single-trial EEG and could direct us to select effective EEG features for classifying visual objects.
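
    A hedged sketch of the feature-extraction-plus-LDA step using scikit-learn; the epoch array is random noise and the component time windows and sampling rate are assumptions, so the printed accuracies sit at chance level rather than reproducing the paper's results.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250                                        # Hz, illustrative sampling rate
epochs = rng.standard_normal((400, 32, 175))    # trials x channels x samples
labels = rng.integers(0, 4, size=400)           # four object categories

# Illustrative post-stimulus windows (seconds) for P1, N1, P2a, P2b.
windows = {"P1": (0.08, 0.13), "N1": (0.14, 0.20),
           "P2a": (0.21, 0.27), "P2b": (0.28, 0.35)}

def window_features(epochs, names):
    """Mean amplitude per channel within each named ERP window."""
    feats = []
    for name in names:
        t0, t1 = windows[name]
        s0, s1 = int(t0 * fs), int(t1 * fs)
        feats.append(epochs[:, :, s0:s1].mean(axis=2))
    return np.concatenate(feats, axis=1)

# Single-component features vs. the combined feature vector.
for chosen in (["N1"], list(windows)):
    X = window_features(epochs, chosen)
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
    print(chosen, f"accuracy={acc:.2f}")        # chance is 0.25 on random data
```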

  8. Components of foreign object management as

    Directory of Open Access Journals (Sweden)

    V.Yu. Gordopolov

    2016-12-01

    The article considers the components of foreign economic activity as objects of accounting and management and analyses the current legislation and scientific literature, which allowed the classification of the forms and types of foreign economic activity; this classification supports the planning process in the enterprise as well as the building of an effective system of management, accounting, economic analysis and internal control. As part of the classification, the basic forms and types of foreign trade, the specifics of their implementation and their legal regulation are described. On the basis of the basic forms and types of foreign trade, a number of problems in the conceptual-categorical apparatus of the applicable law are identified, including nine different treatments of the category "operations of foreign economic activity" fixed in specific legal acts. Operations of foreign economic activity (import transactions, export transactions, international transactions, securities, credit and foreign payment transactions, foreign rental operations, international leasing, foreign exchange transactions, foreign investments and operations associated with joint activity) are identified as accounting objects in foreign trade and are divided according to the types of activity of business entities (operating, financial and investment).

  9. Aquarius' Object-Oriented, Plug and Play Component-Based Flight Software

    Science.gov (United States)

    Murray, Alexander; Shahabuddin, Mohammad

    2013-01-01

    The Aquarius mission involves a combined radiometer and radar instrument in low-Earth orbit, providing monthly global maps of Sea Surface Salinity. Operating successfully in orbit since June, 2011, the spacecraft bus was furnished by the Argentine space agency, Comision Nacional de Actividades Espaciales (CONAE). The instrument, built jointly by NASA's Caltech/JPL and Goddard Space Flight Center, has been successfully producing expectation-exceeding data since it was powered on in August of 2011. In addition to the radiometer and scatterometer, the instrument contains a command & data-handling subsystem with a computer and flight software (FSW) that is responsible for managing the instrument, its operation, and its data. Aquarius' FSW is conceived and architected as a Component-based system, in which the running software consists of a set of Components, each playing a distinctive role in the subsystem, instantiated and connected together at runtime. Component architectures feature a well-defined set of interfaces between the Components, visible and analyzable at the architectural level (see [1]). As we will describe, this kind of an architecture offers significant advantages over more traditional FSW architectures, which often feature a monolithic runtime structure. Component-based software is enabled by Object-Oriented (OO) techniques and languages, the use of which again is not typical in space mission FSW. We will argue in this paper that the use of OO design methods and tools (especially the Unified Modeling Language), as well as the judicious usage of C++, are very well suited to FSW applications, and we will present Aquarius FSW, describing our methods, processes, and design, as a successful case in point.

  10. Fast grasping of unknown objects using principal component analysis

    Science.gov (United States)

    Lei, Qujiang; Chen, Guangming; Wisse, Martijn

    2017-09-01

    Fast grasping of unknown objects has a crucial impact on the efficiency of robot manipulation, especially in unfamiliar environments. In order to accelerate the grasping of unknown objects, principal component analysis is utilized to direct the grasping process. In particular, a single-view partial point cloud is constructed and grasp candidates are allocated along the principal axis. Force balance optimization is employed to analyze possible graspable areas. The obtained graspable area with the minimal resultant force is the best zone for the final grasping execution. It is shown that an unknown object can be more quickly grasped provided that the principal axis from the component analysis is determined using the single-view partial point cloud. To cope with the grasp uncertainty, robot motion is used to obtain a new viewpoint. Virtual exploration and experimental tests are carried out to verify this fast grasping algorithm. Both simulation and experimental tests demonstrated excellent performance based on the results of grasping a series of unknown objects. To minimize the grasping uncertainty, the merits of the robot hardware with two 3D cameras can be utilized to complete the partial point cloud. As a result of utilizing the robot hardware, the grasping reliability is greatly enhanced. Therefore, this research demonstrates practical significance for increasing grasping speed and thus increasing robot efficiency under unpredictable environments.
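
    The role of the principal axis can be sketched as follows (not the authors' implementation; the force-balance optimization is omitted and the partial point cloud is synthetic): the largest-eigenvalue direction of the cloud's covariance defines the axis along which grasp candidates are placed.

```python
import numpy as np

def principal_axis(points):
    """Principal axis of a (partial) point cloud via eigen-decomposition of
    its covariance matrix; points is an (N, 3) array."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, -1]                      # eigenvector of largest eigenvalue

def grasp_candidates(points, n=5):
    """Place candidate grasp centres along the principal axis (sketch only)."""
    axis = principal_axis(points)
    centre = points.mean(axis=0)
    span = np.ptp(points @ axis)               # extent of the cloud along the axis
    return [centre + o * span * axis for o in np.linspace(-0.5, 0.5, n)]

# Illustrative single-view partial point cloud: half of a cylinder surface.
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi, 2000)
z = rng.uniform(0, 0.2, 2000)
cloud = np.column_stack([0.03 * np.cos(theta), 0.03 * np.sin(theta), z])
print(principal_axis(cloud), len(grasp_candidates(cloud)))
```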

  11. An ODP computational model of a cooperative binding object

    Science.gov (United States)

    Logé, Christophe; Najm, Elie; Chen, Ken

    1997-12-01

    The next generation of systems will have to simultaneously manage several geographically distributed users. These systems belong to the class of computer-supported cooperative work systems (CSCW). The development of such complex systems requires rigorous development methods and flexible open architectures. Open distributed processing (ODP) is a standardization effort that aims at providing such architectures. ODP features appropriate abstraction levels and a clear articulation between requirements, programming and infrastructure support. ODP advocates the use of formal methods for the specification of systems and components. The computational model, an object-based model, one of the abstraction levels identified within ODP, plays a central role in the global architecture. In this model, basic objects can be composed with communication and distribution abstractions (called binding objects) to form a computational specification of distributed systems, or applications. Computational specifications can then be mapped (in a mechanism akin to compilation) onto an engineering solution. We use an ODP-inspired method to computationally specify a cooperative system. We start from a general purpose component that we progressively refine into a collection of basic and binding objects. We focus on two issues of a co-authoring application, namely, dynamic reconfiguration and multiview synchronization. We discuss solutions for these issues and formalize them using the MT-LOTOS specification language that is currently studied in the ISO standardization formal description techniques group.

  12. Development of a cultural heritage object BIM model

    Science.gov (United States)

    Braila, Natalya; Vakhrusheva, Svetlana; Martynenko, Elena; Kisel, Tatyana

    2017-10-01

    BIM technology was originally aimed at the design and construction industry, but its application to the study and operation of architectural heritage can substantially change this kind of activity and move it to a new qualitative level. The article considers the effective introduction of BIM technologies for solving administrative questions in the operation and development of architectural monuments. The creation of an information model of a cultural heritage building that includes a full set of information about the object is proposed: historical and archival, legal, technical, administrative, etc. One component of the model will be a 3D model of the cultural heritage object with color marking of elements according to their degree of wear and the priority of repair. This model will allow the technical condition of the building as a whole to be assessed visually and give a general idea of the scale of the necessary repair and construction actions, which improves the quality of operation of the object and also simplifies and accelerates the processing of information when a memorial building needs to be assessed as a subject of investment.

  13. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component’s interface and measured in terms of adaptability, degree of composability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP’04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  14. The main objectives of lifetime management of reactor unit components

    International Nuclear Information System (INIS)

    Dragunov, Y.; Kurakov, Y.

    1998-01-01

    The main objectives of the work concerned with the life management of reactor components in the Russian Federation are as follows: development of regulations in the field of NPP component ageing and lifetime management; investigations of ageing processes; residual life evaluation taking into account the actual state of NPP systems, real loading conditions and number of load cycles, and results of in-service inspections; and development and implementation of measures for maintaining/enhancing NPP safety.

  15. Feedback loops and temporal misalignment in component-based hydrologic modeling

    Science.gov (United States)

    Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.

    2011-12-01

    In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
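
    A toy version of the coupling problem described above, assuming invented exchange rates and time steps rather than the paper's OpenMI configuration: two single-compartment components with different time steps exchange an interpolated boundary concentration at each step, forming a simple feedback loop across the water-sediment interface.

```python
import numpy as np

class Component:
    """Toy model component: one well-mixed compartment whose concentration
    relaxes toward the boundary value it receives from its neighbour."""
    def __init__(self, conc, rate, dt):
        self.conc, self.rate, self.dt = conc, rate, dt
        self.t = 0.0
        self.history = [(0.0, conc)]

    def boundary_value(self, t):
        # Linear interpolation when the requesting component's time step
        # does not line up with this component's own time grid.
        times, values = zip(*self.history)
        return float(np.interp(t, times, values))

    def step(self, boundary_conc):
        self.conc += self.rate * (boundary_conc - self.conc) * self.dt
        self.t += self.dt
        self.history.append((self.t, self.conc))

water = Component(conc=10.0, rate=0.2, dt=0.5)      # coarser time step
sediment = Component(conc=0.0, rate=0.2, dt=0.2)    # finer time step

t_end = 20.0
while sediment.t < t_end:
    # Each component asks the other for the interpolated boundary value at
    # its own current time, despite the temporal misalignment.
    while water.t < sediment.t + sediment.dt:
        water.step(sediment.boundary_value(water.t))
    sediment.step(water.boundary_value(sediment.t))

print(f"water={water.conc:.3f}, sediment={sediment.conc:.3f}")
```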

  16. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components, while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
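
    The Monte Carlo validation step can be illustrated with a much-reduced sketch: a single repairable component alternating Weibull-distributed up times and exponential repair times, with the unavailability at a mission time estimated by simulation. The shape, scale, and repair parameters are invented, and the real model is a full reliability block diagram rather than one component.

```python
import numpy as np

rng = np.random.default_rng(42)

def prob_unavailable(mission_time, shape, scale, mean_repair, n_runs=20000):
    """Monte Carlo estimate of the probability that a single repairable
    component is in the failed state at `mission_time`, with Weibull up times
    (numpy draws use scale 1, so multiply by `scale`) and exponential repairs."""
    down = 0
    for _ in range(n_runs):
        t, up = 0.0, True
        while True:
            sojourn = scale * rng.weibull(shape) if up else rng.exponential(mean_repair)
            if t + sojourn >= mission_time:
                break                     # mission time falls inside this sojourn
            t += sojourn
            up = not up
        down += not up                    # failed when the mission time was reached
    return down / n_runs

# Example: aging component (shape > 1), 5000 h characteristic life,
# 24 h mean repair time, evaluated at a one-year (8760 h) mission time.
print(prob_unavailable(8760.0, shape=1.5, scale=5000.0, mean_repair=24.0))
```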

  17. PCA: Principal Component Analysis for spectra modeling

    Science.gov (United States)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra of the four classifications used to classify objects are made available with the code.
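
    The same two steps, PCA followed by a Gaussian mixture in the reduced space, can be reproduced in outline with scikit-learn on synthetic spectra; the spectral profiles and object counts are invented, and this is not the IDL code released with the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for mid-infrared spectra: 300 objects x 200 wavelengths,
# built from a few latent emission profiles plus noise.
wav = np.linspace(5, 35, 200)
profiles = np.stack([np.exp(-0.5 * ((wav - c) / 1.5) ** 2) for c in (7.7, 11.3, 18.0)])
weights = rng.gamma(2.0, 1.0, size=(300, 3))
spectra = weights @ profiles + 0.05 * rng.standard_normal((300, 200))

# Five principal components, then a 4-class Gaussian mixture in PC space,
# mirroring the classification scheme described above.
pcs = PCA(n_components=5).fit_transform(spectra)
gmm = GaussianMixture(n_components=4, random_state=0).fit(pcs)
classes = gmm.predict(pcs)
print(np.bincount(classes))
```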

  18. GPR Detection of Buried Symmetrically Shaped Mine-like Objects using Selective Independent Component Analysis

    DEFF Research Database (Denmark)

    Karlsen, Brian; Sørensen, Helge Bjarup Dissing; Larsen, Jan

    2003-01-01

    This paper addresses the detection of mine-like objects in stepped-frequency ground penetrating radar (SF-GPR) data as a function of object size, object content, and burial depth. The detection approach is based on a Selective Independent Component Analysis (SICA). SICA provides an automatic ranking of components, which enables the suppression of clutter, hence extraction of components carrying mine information. The goal of the investigation is to evaluate various time and frequency domain ICA approaches based on SICA. Performance comparison is based on a series of mine-like objects ranging from small-scale anti-personnel (AP) mines to large-scale anti-tank (AT) mines, which were designed for the study. Large-scale SF-GPR measurements on this series of mine-like objects buried in soil were performed. The SF-GPR data was acquired using a wideband monostatic bow-tie antenna operating in the frequency range 750 ...
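
    A generic illustration of ICA-based clutter suppression on a synthetic B-scan (this is ordinary FastICA with a hand-rolled selection rule, not the SICA algorithm itself): components whose antenna-position weights are nearly constant are treated as ground-bounce clutter and removed before reconstruction.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)

# Synthetic SF-GPR B-scan: 64 antenna positions x 256 time samples.
n_traces, n_samples = 64, 256
t = np.arange(n_samples)
clutter = np.outer(np.ones(n_traces), np.exp(-t / 40.0))   # same in every trace
target = np.array([np.exp(-0.5 * ((t - (120 + 0.02 * (x - 32) ** 2)) / 3) ** 2)
                   for x in range(n_traces)])               # buried-object hyperbola
bscan = 3.0 * clutter + target + 0.05 * rng.standard_normal((n_traces, n_samples))

ica = FastICA(n_components=8, random_state=0, max_iter=1000)
sources = ica.fit_transform(bscan.T)          # time samples x components
mixing = ica.mixing_                          # how each trace weights each component

# Selection step (hand-rolled): drop components whose trace weights are nearly
# constant across antenna positions, i.e. the clutter-like ones.
variation = mixing.std(axis=0) / (np.abs(mixing.mean(axis=0)) + 1e-9)
keep = variation > np.median(variation)
cleaned = (sources[:, keep] @ mixing[:, keep].T).T + ica.mean_
print("kept components:", int(keep.sum()), "cleaned shape:", cleaned.shape)
```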

  19. An object-oriented framework for magnetic-fusion modeling and analysis codes

    International Nuclear Information System (INIS)

    Cohen, R H; Yang, T Y Brian.

    1999-01-01

    The magnetic-fusion energy (MFE) program, like many other scientific and engineering activities, has a need to efficiently develop complex modeling codes which combine detailed models of components to make an integrated model of a device, as well as a rich supply of legacy code that could provide the component models. There is also growing recognition in many technical fields of the desirability of steerable software: computer programs whose functionality can be changed by the user as it is run. This project had as its goals the development of two key pieces of infrastructure that are needed to combine existing code modules, written mainly in Fortran, into flexible, steerable, object-oriented integrated modeling codes for magnetic-fusion applications. These two pieces are (1) a set of tools to facilitate the interfacing of Fortran code with a steerable object-oriented framework (which we have chosen to be based on Python, an object-oriented interpreted language), and (2) a skeleton for the integrated modeling code which defines the relationships between the modules. The first of these activities obviously has immediate applicability to a spectrum of projects; the second is more focussed on the MFE application, but may be of value as an example for other applications

  20. Heterogeneous Deformable Modeling of Bio-Tissues and Haptic Force Rendering for Bio-Object Modeling

    Science.gov (United States)

    Lin, Shiyong; Lee, Yuan-Shin; Narayan, Roger J.

    This paper presents a novel technique for modeling soft biological tissues as well as the development of an innovative interface for bio-manufacturing and medical applications. Heterogeneous deformable models may be used to represent the actual internal structures of deformable biological objects, which possess multiple components and nonuniform material properties. Both heterogeneous deformable object modeling and accurate haptic rendering can greatly enhance the realism and fidelity of virtual reality environments. In this paper, a tri-ray node snapping algorithm is proposed to generate a volumetric heterogeneous deformable model from a set of object interface surfaces between different materials. A constrained local static integration method is presented for simulating deformation and accurate force feedback based on the material properties of a heterogeneous structure. Biological soft tissue modeling is used as an example to demonstrate the proposed techniques. By integrating the heterogeneous deformable model into a virtual environment, users can both observe different materials inside a deformable object as well as interact with it by touching the deformable object using a haptic device. The presented techniques can be used for surgical simulation, bio-product design, bio-manufacturing, and medical applications.

  1. Objective models of compressed breast shapes undergoing mammography

    Science.gov (United States)

    Feng, Steve Si Jia; Patel, Bhavika; Sechopoulos, Ioannis

    2013-01-01

    Purpose: To develop models of compressed breasts undergoing mammography based on objective analysis that are capable of accurately representing breast shapes in acquired clinical images and generating new, clinically realistic shapes. Methods: An automated edge detection algorithm was used to catalogue the breast shapes of clinically acquired cranio-caudal (CC) and medio-lateral oblique (MLO) view mammograms from a large database of digital mammography images. Principal component analysis (PCA) was performed on these shapes to reduce the information contained within the shapes to a small number of linearly independent variables. The breast shape models, one of each view, were developed from the identified principal components, and their ability to reproduce the shape of breasts from an independent set of mammograms not used in the PCA was assessed both visually and quantitatively by calculating the average distance error (ADE). Results: The PCA breast shape models of the CC and MLO mammographic views based on six principal components, in which 99.2% and 98.0%, respectively, of the total variance of the dataset is contained, were found to be able to reproduce breast shapes with strong fidelity (CC view mean ADE = 0.90 mm, MLO view mean ADE = 1.43 mm) and to generate new clinically realistic shapes. The PCA models based on fewer principal components were also successful, but to a lesser degree, as the two-component model exhibited a mean ADE = 2.99 mm for the CC view, and a mean ADE = 4.63 mm for the MLO view. The four-component models exhibited a mean ADE = 1.47 mm for the CC view and a mean ADE = 2.14 mm for the MLO view. Paired t-tests of the ADE values of each image between models showed that these differences were statistically significant (max p-value = 0.0247). Visual examination of modeled breast shapes confirmed these results. Histograms of the PCA parameters associated with the six principal components were fitted with Gaussian distributions. The six-component
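
    Two of the quantitative steps named in this abstract can be sketched as follows: an average distance error between a modeled and an acquired edge, and the generation of new shapes by sampling the Gaussian-fitted PCA parameters. The arrays are placeholders, and the ADE definition here is one plausible reading rather than necessarily the paper's exact formula.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

def average_distance_error(model_edge, acquired_edge):
    """Mean distance from each modeled edge point to its nearest point on the
    acquired edge; edges are (N, 2) arrays of (x, y) coordinates in mm."""
    return cdist(model_edge, acquired_edge).min(axis=1).mean()

def sample_new_shape(pca_mean, pca_components, param_means, param_stds, rng):
    """Draw the six shape parameters from their fitted Gaussians and map them
    back through the PCA basis to a new, synthetic breast edge."""
    params = rng.normal(param_means, param_stds)
    flat = pca_mean + params @ pca_components        # flattened (x, y) edge
    return flat.reshape(-1, 2)

# Placeholder quantities standing in for the fitted six-component model.
n_points = 100
pca_mean = rng.standard_normal(2 * n_points)
pca_components = rng.standard_normal((6, 2 * n_points))

new_edge = sample_new_shape(pca_mean, pca_components,
                            param_means=np.zeros(6), param_stds=np.ones(6), rng=rng)
print(new_edge.shape,
      average_distance_error(new_edge, new_edge + rng.normal(0, 0.5, new_edge.shape)))
```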

  2. Objective models of compressed breast shapes undergoing mammography

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Steve Si Jia [Department of Biomedical Engineering, Georgia Institute of Technology and Emory University and Department of Radiology and Imaging Sciences, Emory University, 1701 Uppergate Drive Northeast, Suite 5018, Atlanta, Georgia 30322 (United States); Patel, Bhavika [Department of Radiology and Imaging Sciences, Emory University, 1701 Uppergate Drive Northeast, Suite 5018, Atlanta, Georgia 30322 (United States); Sechopoulos, Ioannis [Departments of Radiology and Imaging Sciences, Hematology and Medical Oncology and Winship Cancer Institute, Emory University, 1701 Uppergate Drive Northeast, Suite 5018, Atlanta, Georgia 30322 (United States)

    2013-03-15

    Purpose: To develop models of compressed breasts undergoing mammography based on objective analysis, that are capable of accurately representing breast shapes in acquired clinical images and generating new, clinically realistic shapes. Methods: An automated edge detection algorithm was used to catalogue the breast shapes of clinically acquired cranio-caudal (CC) and medio-lateral oblique (MLO) view mammograms from a large database of digital mammography images. Principal component analysis (PCA) was performed on these shapes to reduce the information contained within the shapes to a small number of linearly independent variables. The breast shape models, one of each view, were developed from the identified principal components, and their ability to reproduce the shape of breasts from an independent set of mammograms not used in the PCA, was assessed both visually and quantitatively by calculating the average distance error (ADE). Results: The PCA breast shape models of the CC and MLO mammographic views based on six principal components, in which 99.2% and 98.0%, respectively, of the total variance of the dataset is contained, were found to be able to reproduce breast shapes with strong fidelity (CC view mean ADE = 0.90 mm, MLO view mean ADE = 1.43 mm) and to generate new clinically realistic shapes. The PCA models based on fewer principal components were also successful, but to a lesser degree, as the two-component model exhibited a mean ADE = 2.99 mm for the CC view, and a mean ADE = 4.63 mm for the MLO view. The four-component models exhibited a mean ADE = 1.47 mm for the CC view and a mean ADE = 2.14 mm for the MLO view. Paired t-tests of the ADE values of each image between models showed that these differences were statistically significant (max p-value = 0.0247). Visual examination of modeled breast shapes confirmed these results. Histograms of the PCA parameters associated with the six principal components were fitted with Gaussian distributions. The six-component

  3. Objective models of compressed breast shapes undergoing mammography

    International Nuclear Information System (INIS)

    Feng, Steve Si Jia; Patel, Bhavika; Sechopoulos, Ioannis

    2013-01-01

    Purpose: To develop models of compressed breasts undergoing mammography based on objective analysis, that are capable of accurately representing breast shapes in acquired clinical images and generating new, clinically realistic shapes. Methods: An automated edge detection algorithm was used to catalogue the breast shapes of clinically acquired cranio-caudal (CC) and medio-lateral oblique (MLO) view mammograms from a large database of digital mammography images. Principal component analysis (PCA) was performed on these shapes to reduce the information contained within the shapes to a small number of linearly independent variables. The breast shape models, one of each view, were developed from the identified principal components, and their ability to reproduce the shape of breasts from an independent set of mammograms not used in the PCA, was assessed both visually and quantitatively by calculating the average distance error (ADE). Results: The PCA breast shape models of the CC and MLO mammographic views based on six principal components, in which 99.2% and 98.0%, respectively, of the total variance of the dataset is contained, were found to be able to reproduce breast shapes with strong fidelity (CC view mean ADE = 0.90 mm, MLO view mean ADE = 1.43 mm) and to generate new clinically realistic shapes. The PCA models based on fewer principal components were also successful, but to a lesser degree, as the two-component model exhibited a mean ADE = 2.99 mm for the CC view, and a mean ADE = 4.63 mm for the MLO view. The four-component models exhibited a mean ADE = 1.47 mm for the CC view and a mean ADE = 2.14 mm for the MLO view. Paired t-tests of the ADE values of each image between models showed that these differences were statistically significant (max p-value = 0.0247). Visual examination of modeled breast shapes confirmed these results. Histograms of the PCA parameters associated with the six principal components were fitted with Gaussian distributions. The six-component

  4. Experimental Object-Oriented Modelling

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius

    This thesis examines object-oriented modelling in experimental system development. Object-oriented modelling aims at representing concepts and phenomena of a problem domain in terms of classes and objects. Experimental system development seeks active experimentation in a system development project through, e.g., technical prototyping and active user involvement. We introduce and examine “experimental object-oriented modelling” as the intersection of these practices. The contributions of this thesis are expected to be within three perspectives on models and modelling in experimental system development. Grounding: We develop an empirically based conceptualization of modelling and use of models in system development projects characterized by a high degree of uncertainty in requirements and point to implications for tools and techniques for modelling in such a setting. Techniques: We introduce ...

  5. A theory of distributed objects asynchrony, mobility, groups, components

    CERN Document Server

    Caromel, Denis; Henrio, Ludovic

    2005-01-01

    Distributed and communicating objects are becoming ubiquitous. In global, Grid and Peer-to-Peer computing environments, extensive use is made of objects interacting through method calls. So far, no general formalism has been proposed for the foundation of such systems. Caromel and Henrio are the first to define a calculus for distributed objects interacting using asynchronous method calls with generalized futures, i.e., wait-by-necessity -- a must in large-scale systems, providing both high structuring and low coupling, and thus scalability. The authors provide very generic results on expressiveness and determinism, and the potential of their approach is further demonstrated by its capacity to cope with advanced issues such as mobility, groups, and components. Researchers and graduate students will find here an extensive review of concurrent languages and calculi, with comprehensive figures and summaries. Developers of distributed systems can adopt the many implementation strategies that are presented and ana...

  6. The IRMIS object model and services API

    International Nuclear Information System (INIS)

    Saunders, C.; Dohan, D.A.; Arnold, N.D.

    2005-01-01

    The relational model developed for the Integrated Relational Model of Installed Systems (IRMIS) toolkit has been successfully used to capture the Advanced Photon Source (APS) control system software (EPICS process variables and their definitions). The relational tables are populated by a crawler script that parses each Input/Output Controller (IOC) start-up file when an IOC reboot is detected. User interaction is provided by a Java Swing application that acts as a desktop for viewing the process variable information. Mapping between the display objects and the relational tables was carried out with the Hibernate Object Relational Modeling (ORM) framework. Work is well underway at the APS to extend the relational modeling to include control system hardware. For this work, due in part to the complex user interaction required, the primary application development environment has shifted from the relational database view to the object oriented (Java) perspective. With this approach, the business logic is executed in Java rather than in SQL stored procedures. This paper describes the object model used to represent control system software, hardware, and interconnects in IRMIS. We also describe the services API used to encapsulate the required behaviors for creating and maintaining the complex data. In addition to the core schema and object model, many important concepts in IRMIS are captured by the services API. IRMIS is an ambitious collaborative effort for defining and developing a relational database and associated applications to comprehensively document the large and complex EPICS-based control systems of today's accelerators. The documentation effort includes process variables, control system hardware, and interconnections. The approach could also be used to document all components of the accelerator, including mechanical, vacuum, power supplies, etc. One key aspect of IRMIS is that it is a documentation framework, not a design and development tool. We do not

  7. Component Reification in Systems Modelling

    DEFF Research Database (Denmark)

    Bendisposto, Jens; Hallerstede, Stefan

    When modelling concurrent or distributed systems in Event-B, we often obtain models where the structure of the connected components is specified by constants. Their behaviour is specified by the non-deterministic choice of event parameters for events that operate on shared variables. From a certain......? These components may still refer to shared variables. Events of these components should not refer to the constants specifying the structure. The non-deterministic choice between these components should not be via parameters. We say the components are reified. We need to address how the reified components get...... reflected into the original model. This reflection should indicate the constraints on how to connect the components....

  8. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improve quality through specialization, and enable rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (Component Ware). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed

  9. Whole object surface area and volume of partial-view 3D models

    International Nuclear Information System (INIS)

    Mulukutla, Gopal K; Proussevitch, Alexander A; Genareau, Kimberly D; Durant, Adam J

    2017-01-01

    Micro-scale 3D models, important components of many studies in science and engineering, are often used to determine morphological characteristics such as shape, surface area and volume. The application of techniques such as stereoscopic scanning electron microscopy on whole objects often results in ‘partial-view’ models with a portion of object not within the field of view thus not captured in the 3D model. The nature and extent of the surface not captured is dependent on the complex interaction of imaging system attributes (e.g. working distance, viewing angle) with object size, shape and morphology. As a result, any simplistic assumptions in estimating whole object surface area or volume can lead to significant errors. In this study, we report on a novel technique to estimate the physical fraction of an object captured in a partial-view 3D model of an otherwise whole object. This allows a more accurate estimate of surface area and volume. Using 3D models, we demonstrate the robustness of this method and the accuracy of surface area and volume estimates relative to true values. (paper)

  10. NASA JPL Distributed Systems Technology (DST) Object-Oriented Component Approach for Software Inter-Operability and Reuse

    Science.gov (United States)

    Hall, Laverne; Hung, Chaw-Kwei; Lin, Imin

    2000-01-01

    The purpose of this paper is to provide a description of NASA JPL Distributed Systems Technology (DST) Section's object-oriented component approach to open inter-operable systems software development and software reuse. It will address what is meant by the term 'object component software', give an overview of the component-based development approach and how it relates to infrastructure support of software architectures and promotes reuse, enumerate the benefits of this approach, and give examples of application prototypes demonstrating its usage and advantages. Utilization of the object-oriented component technology approach for system development and software reuse will apply to several areas within JPL, and possibly across other NASA Centers.

  11. Single-objective vs. multi-objective autocalibration in modelling total suspended solids and phosphorus in a small agricultural watershed with SWAT.

    Science.gov (United States)

    Rasolomanana, Santatriniaina Denise; Lessard, Paul; Vanrolleghem, Peter A

    2012-01-01

    To obtain greater precision in modelling small agricultural watersheds, a shorter simulation time step is beneficial. A daily time step better represents the dynamics of pollutants in the river and provides more realistic simulation results. However, with a daily evaluation performance, good fits are rarely obtained. With the Shuffled Complex Evolution (SCE) method embedded in the Soil and Water Assessment Tool (SWAT), two calibration approaches are available, single-objective or multi-objective optimization. The goal of the present study is to evaluate which approach can improve the daily performance with SWAT, in modelling flow (Q), total suspended solids (TSS) and total phosphorus (TP). The influence of weights assigned to the different variables included in the objective function has also been tested. The results showed that: (i) the model performance depends not only on the choice of calibration approach, but essentially on the influential parameters; (ii) the multi-objective calibration estimating at once all parameters related to all measured variables is the best approach to model Q, TSS and TP; (iii) changing weights does not improve model performance; and (iv) with a single-objective optimization, an excellent water quality modelling performance may hide a loss of performance of predicting flows and unbalanced internal model components.
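
    The weighting question examined in the study can be illustrated with a toy aggregated objective built from Nash-Sutcliffe efficiencies for Q, TSS and TP. The sketch below is a generic illustration under assumed weights and made-up series, not the SCE/SWAT implementation used by the authors.

      import numpy as np

      def nse(obs, sim):
          """Nash-Sutcliffe efficiency; 1.0 is a perfect fit."""
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def aggregated_objective(obs, sim, weights=(1.0, 1.0, 1.0)):
          """Weighted sum of the per-variable efficiencies; higher is better."""
          parts = [nse(obs[k], sim[k]) for k in ("Q", "TSS", "TP")]
          return sum(w * p for w, p in zip(weights, parts))

      obs = {"Q": [1.0, 2.0, 3.0], "TSS": [10.0, 12.0, 9.0], "TP": [0.10, 0.20, 0.15]}
      sim = {"Q": [1.1, 1.9, 3.2], "TSS": [11.0, 11.0, 10.0], "TP": [0.12, 0.18, 0.14]}
      print(aggregated_objective(obs, sim))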

  12. Aerodynamic multi-objective integrated optimization based on principal component analysis

    Directory of Open Access Journals (Sweden)

    Jiangtao HUANG

    2017-08-01

    Full Text Available Based on improved multi-objective particle swarm optimization (MOPSO) algorithm with principal component analysis (PCA) methodology, an efficient high-dimension multi-objective optimization method is proposed, which, as the purpose of this paper, aims to improve the convergence of Pareto front in multi-objective optimization design. The mathematical efficiency, the physical reasonableness and the reliability in dealing with redundant objectives of PCA are verified by typical DTLZ5 test function and multi-objective correlation analysis of supercritical airfoil, and the proposed method is integrated into aircraft multi-disciplinary design (AMDEsign) platform, which contains aerodynamics, stealth and structure weight analysis and optimization module. Then the proposed method is used for the multi-point integrated aerodynamic optimization of a wide-body passenger aircraft, in which the redundant objectives identified by PCA are transformed to optimization constraints, and several design methods are compared. The design results illustrate that the strategy used in this paper is sufficient and multi-point design requirements of the passenger aircraft are reached. The visualization level of non-dominant Pareto set is improved by effectively reducing the dimension without losing the primary feature of the problem.
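
    How PCA can flag redundant objectives can be sketched as follows (a generic illustration with synthetic design evaluations, not the MOPSO/AMDEsign code): stack the objective values of the evaluated designs and check how much variance a few components already explain.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(1)
      drag = rng.normal(size=200)                        # synthetic objective values
      lift = -0.9 * drag + 0.1 * rng.normal(size=200)    # strongly correlated with drag
      mass = rng.normal(size=200)                        # independent objective
      F = np.column_stack([drag, lift, mass])            # rows = designs, columns = objectives

      pca = PCA().fit(F)
      print("variance per component:", np.round(pca.explained_variance_ratio_, 3))
      # If the leading components explain nearly all the variance, the remaining
      # objectives are largely redundant and can be dropped or turned into constraints.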

  13. A Component Analysis of the Impact of Evaluative and Objective Feedback on Performance

    Science.gov (United States)

    Johnson, Douglas A.

    2013-01-01

    Despite the frequency with which performance feedback interventions are used in organizational behavior management, component analyses of such feedback are rare. It has been suggested that evaluation of performance and objective details about performance are two necessary components for performance feedback. The present study was designed to help…

  14. Integration of a Three-Dimensional Process-Based Hydrological Model into the Object Modeling System

    Directory of Open Access Journals (Sweden)

    Giuseppe Formetta

    2016-01-01

    Full Text Available The integration of a spatial process model into an environmental modeling framework can enhance the model’s capabilities. This paper describes a general methodology for integrating environmental models into the Object Modeling System (OMS) regardless of the model’s complexity, the programming language, and the operating system used. We present the integration of the GEOtop model into the OMS version 3.0 and illustrate its application in a small watershed. OMS is an environmental modeling framework that facilitates model development, calibration, evaluation, and maintenance. It provides innovative techniques in software design such as multithreading, implicit parallelism, calibration and sensitivity analysis algorithms, and cloud-services. GEOtop is a physically based, spatially distributed rainfall-runoff model that performs three-dimensional finite volume calculations of water and energy budgets. Executing GEOtop as an OMS model component allows it to: (1) interact directly with the open-source geographical information system (GIS) uDig-JGrass to access geo-processing, visualization, and other modeling components; and (2) use OMS components for automatic calibration, sensitivity analysis, or meteorological data interpolation. A case study of the model in a semi-arid agricultural catchment is presented for illustration and proof-of-concept. Simulated soil water content and soil temperature results are compared with measured data, and model performance is evaluated using goodness-of-fit indices. This study serves as a template for future integration of process models into OMS.

  15. Component Composition Using Feature Models

    DEFF Research Database (Denmark)

    Eichberg, Michael; Klose, Karl; Mitschke, Ralf

    2010-01-01

    interface description languages. If this variability is relevant when selecting a matching component then human interaction is required to decide which components can be bound. We propose to use feature models for making this variability explicit and (re-)enabling automatic component binding. In our approach, feature models are one part of service specifications. This enables declarative specification of which service variant is provided by a component. By referring to a service's variation points, a component that requires a specific service can list the requirements on the desired variant. Using these specifications, a component environment can then determine if a binding of the components exists that satisfies all requirements. The prototypical environment Columbus demonstrates the feasibility of the approach.

  16. How to constrain multi-objective calibrations using water balance components for an improved realism of model results

    Science.gov (United States)

    Accurate discharge simulation is one of the most common objectives of hydrological modeling studies. However, a good simulation of discharge is not necessarily the result of a realistic simulation of hydrological processes within the catchment. To enhance the realism of model results, we propose an ...

  17. Object Oriented Toolbox for Modelling and Simulation of Dynamical Systems

    DEFF Research Database (Denmark)

    Poulsen, Mikael Zebbelin; Wagner, Falko Jens; Thomsen, Per Grove

    1998-01-01

    This paper presents the results of an ongoing project, dealing with design and implementation of a simulation toolbox based on object oriented modelling techniques. The paper describes an experimental implementation of parts of such a toolbox in C++, and discusses the experiences drawn from that process. Essential to the work is the focus on simulation of complex dynamical systems, from modelling the single components/subsystems to building complete systems.

  18. New test of bow-shock models of Herbig-Haro objects

    International Nuclear Information System (INIS)

    Raga, A.C.; Bohm, K.H.; Solf, J. (Max-Planck-Institut fuer Astronomie, Heidelberg, West Germany)

    1986-01-01

    Long-slit, high-resolution spectroscopy of the Herbig-Haro object HH 32 has shown that the emission-line profiles in all four condensations A, B, C, and D show high- and low-velocity components. The spatial maxima of these two components are always arranged in a double-layer pattern, with the maximum of the high-velocity component 0.6-1.0 arcsecs closer to the central star (AS 353A) than the low-velocity maximum. A study of the emission-line profiles predicted from a model of a radiating bow shock shows that such a double-layer structure appears naturally for this type of flow. In this case both the high-velocity and the low-velocity components come from the post-shock gas, in agreement with the theoretical prediction that it should be very difficult to detect the pre-shock gas observationally. The present results agree qualitatively well with observations of HH 32, strengthening the case for a bow-shock interpretation of this Herbig-Haro object. It is shown that the double-layer effect will be more easily observable for bow shocks which move at a relatively large angle with respect to the plane of the sky (i.e., for Herbig-Haro objects which have large radial velocities). 31 references

  19. Developing a Model Component

    Science.gov (United States)

    Fields, Christina M.

    2013-01-01

    The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles, including all Space Station Requirements necessary for the MPLM Modules. The Simulation uses GSE Models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink. I completed training for UNIX and Simulink. The dryer is a Catch All replaceable core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-dryer also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining the effects it has on the pressure and velocity of the system. I used Bernoulli's Equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in Simulation meetings and was involved in the subsystem design process and team collaborations. I gained valuable work experience and insight into a career path as an engineer.
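
    The pressure/velocity relation described can be illustrated with the incompressible Bernoulli equation combined with continuity; the fluid properties and port areas below are assumptions, and this is not the actual SCCS Simulink component.

      RHO = 1000.0        # coolant density in kg/m^3 (assumed)
      G = 9.81            # gravitational acceleration in m/s^2

      def outlet_state(p_in, v_in, area_in, area_out, dz=0.0):
          """Return (p_out, v_out) across the dryer from continuity and Bernoulli."""
          v_out = v_in * area_in / area_out                          # continuity: A1*v1 = A2*v2
          p_out = p_in + 0.5 * RHO * (v_in**2 - v_out**2) + RHO * G * dz
          return p_out, v_out

      p_out, v_out = outlet_state(p_in=300e3, v_in=1.5, area_in=4e-4, area_out=2e-4)
      print(f"outlet pressure {p_out:.0f} Pa, outlet velocity {v_out:.2f} m/s")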

  20. Advertising Model of Residential Real Estate Object in Lithuania

    Directory of Open Access Journals (Sweden)

    Jelena Mazaj

    2012-07-01

    Full Text Available Since the year 2000, during the period of economic growth, the real estate market has been rapidly expanding. During this period, advertising of real estate objects was implemented using one set of similar channels (press advertising, Internet advertising, leaflets with contact information of real estate agents, and others); however, the start of the economic recession intensified the competition in the market and forced companies to search for new advertising means or to diversify the advertising package. The article presents real estate property as a product and one of the marketing components (including advertising), together with conclusions and suggestions based on the conducted surveys, and a model for advertising residential real estate objects. Article in Lithuanian.

  1. Integrated production planning and control: A multi-objective optimization model

    Directory of Open Access Journals (Sweden)

    Cheng Wang

    2013-09-01

    Full Text Available Purpose: Production planning and control has a crucial impact on the production and business activities of an enterprise. Enterprise Resource Planning (ERP) is the most popular resource planning and management system; however, there are some shortcomings and deficiencies in its production planning and control because its core component is still Material Requirements Planning (MRP). To address the defects of the ERP system, many local improvement and optimization schemes have been proposed, and these improve the feasibility and practicality of the plan to some extent, but studies that consider optimization of the whole planning system across multiple performance management objectives and achieve better application performance are scarce. The purpose of this paper is to propose a multi-objective production planning optimization model, based on the point of view of the integration of production planning and control, in order to achieve optimization and control of enterprise manufacturing management. Design/methodology/approach: Based on an analysis of the ERP planning system’s defects and disadvantages, and of related research and literature, a multi-objective production planning optimization model is proposed; in addition to net demand and capacity, multiple performance management objectives, such as on-time delivery, production balance, inventory, and overtime production, are incorporated into the scope of the model, so that the manufacturing process can be managed and controlled optimally across multiple objectives. The validity and practicability of the model are verified by the instance in the last part of the paper. Findings: The main finding is that production planning management of a manufacturing enterprise should consider not only capacity and materials, but also a variety of performance management objectives in the production process, and that building a multi-objective optimization model can effectively optimize the management and control of the enterprise
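
    The trade-off between regular production, overtime and inventory can be sketched as a small linear program; the demands, capacities and weights below are assumptions, and the paper's model additionally covers objectives such as on-time delivery and production balance.

      from scipy.optimize import linprog

      demand = [90.0, 120.0]                         # demand in periods 1 and 2 (assumed)
      cap, overtime_cap = 100.0, 30.0                # regular and overtime capacity per period
      c_prod, c_over, c_inv = 1.0, 1.8, 0.4          # relative cost weights (assumed)

      # Decision variables: x = [p1, p2, o1, o2, i1, i2]
      c = [c_prod, c_prod, c_over, c_over, c_inv, c_inv]
      A_eq = [[1, 0, 1, 0, -1,  0],                  # p1 + o1 - i1 = d1
              [0, 1, 0, 1,  1, -1]]                  # i1 + p2 + o2 - i2 = d2
      b_eq = demand
      bounds = [(0, cap)] * 2 + [(0, overtime_cap)] * 2 + [(0, None)] * 2

      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
      print(res.x, res.fun)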

  2. Extended object-oriented Petri net model for mission reliability simulation of repairable PMS with common cause failures

    International Nuclear Information System (INIS)

    Wu, Xin-yang; Wu, Xiao-Yue

    2015-01-01

    Phased Mission Systems (PMS) have several phases with different success criteria. Generally, traditional analytical methods need to make some assumptions when they are applied for reliability evaluation and analysis of complex PMS, for example, the components are non-repairable or components are not subjected to common cause failures (CCF). However, the evaluation and analysis results may be inapplicable when the assumptions do not agree with practical situation. In this article, we propose an extended object-oriented Petri net (EOOPN) model for mission reliability simulation of repairable PMS with CCFs. Based on object-oriented Petri net (OOPN), EOOPN defines four reusable sub-models to depict PMS at system, phase, or component levels respectively, logic transitions to depict complex components reliability logics in a more readable form, and broadcast place to transmit shared information among components synchronously. After extension, EOOPN could deal with repairable PMS with both external and internal CCFs conveniently. The mission reliability modelling, simulation and analysis using EOOPN are illustrated by a PMS example. The results demonstrate that the proposed EOOPN model is effective. - Highlights: • EOOPN model was effective in reliability simulation for repairable PMS with CCFs. • EOOPN had modular and hierarchical structure. • New elements of EOOPN made the modelling process more convenient and friendlier. • EOOPN had better model reusability and readability than other PNs

  3. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  4. Multi-band morpho-Spectral Component Analysis Deblending Tool (MuSCADeT): Deblending colourful objects

    Science.gov (United States)

    Joseph, R.; Courbin, F.; Starck, J.-L.

    2016-05-01

    We introduce a new algorithm for colour separation and deblending of multi-band astronomical images called MuSCADeT which is based on Morpho-spectral Component Analysis of multi-band images. The MuSCADeT algorithm takes advantage of the sparsity of astronomical objects in morphological dictionaries such as wavelets and their differences in spectral energy distribution (SED) across multi-band observations. This allows us to devise a model independent and automated approach to separate objects with different colours. We show with simulations that we are able to separate highly blended objects and that our algorithm is robust against SED variations of objects across the field of view. To confront our algorithm with real data, we use HST images of the strong lensing galaxy cluster MACS J1149+2223 and we show that MuSCADeT performs better than traditional profile-fitting techniques in deblending the foreground lensing galaxies from background lensed galaxies. Although the main driver for our work is the deblending of strong gravitational lenses, our method is fit to be used for any purpose related to deblending of objects in astronomical images. An example of such an application is the separation of the red and blue stellar populations of a spiral galaxy in the galaxy cluster Abell 2744. We provide a python package along with all simulations and routines used in this paper to contribute to reproducible research efforts. Codes can be found at http://lastro.epfl.ch/page-126973.html

  5. Three-Component Forward Modeling for Transient Electromagnetic Method

    Directory of Open Access Journals (Sweden)

    Bin Xiong

    2010-01-01

    Full Text Available In general, only the time derivative of the vertical magnetic field is considered in the data interpretation of the transient electromagnetic (TEM) method. However, for surveys in complex geological structures, this conventional technique has gradually become unable to satisfy the demands of field exploration. To improve the integrated interpretation precision of TEM, it is necessary to study three-component forward modeling and inversion. In this paper, a three-component forward algorithm for 2.5D TEM based on independent electric and magnetic fields has been developed. The main advantage of the new scheme is that it reduces the size of the global system matrix to the utmost extent; it is only one fourth of that in the conventional algorithm. To illustrate the feasibility and usefulness of the present algorithm, several typical geoelectric models of the TEM responses produced by loop sources at the air-earth interface are presented. The results of the numerical experiments show that the computation speed of the present scheme is increased markedly and that three-component interpretation can get the most out of the collected data, from which the spatial characteristics of the anomalous object can be analyzed and interpreted more comprehensively.

  6. Creation of 'Ukrytie' objects computer model

    International Nuclear Information System (INIS)

    Mazur, A.B.; Kotlyarov, V.T.; Ermolenko, A.I.; Podbereznyj, S.S.; Postil, S.D.; Shaptala, D.V.

    1999-01-01

    A partial computer model of the 'Ukrytie' object was created with the use of geoinformation technologies. The computer model makes it possible to carry out information support of the works related to the 'Ukrytie' object's stabilization and its conversion into an ecologically safe system, for analyzing, forecasting and controlling the processes occurring in the 'Ukrytie' object. Elements and structures of the 'Ukrytie' object were designed and input into the model

  7. An object-oriented computational model to study cardiopulmonary hemodynamic interactions in humans.

    Science.gov (United States)

    Ngo, Chuong; Dahlmanns, Stephan; Vollmer, Thomas; Misgeld, Berno; Leonhardt, Steffen

    2018-06-01

    This work introduces an object-oriented computational model to study cardiopulmonary interactions in humans. Modeling was performed in the object-oriented programming language Matlab Simscape, where model components are connected with each other through physical connections. Constitutive and phenomenological equations of model elements are implemented based on their non-linear pressure-volume or pressure-flow relationship. The model includes more than 30 physiological compartments, which belong either to the cardiovascular or respiratory system. The model considers non-linear behaviors of veins, pulmonary capillaries, collapsible airways, alveoli, and the chest wall. Model parameters were derived based on literature values. Model validation was performed by comparing simulation results with clinical and animal data reported in the literature. The model is able to provide quantitative values of alveolar, pleural, interstitial, aortic and ventricular pressures, as well as heart and lung volumes during spontaneous breathing and mechanical ventilation. Results of baseline simulation demonstrate the consistency of the assigned parameters. Simulation results during mechanical ventilation with PEEP trials can be directly compared with animal and clinical data given in the literature. Object-oriented programming languages can be used to model interconnected systems including model non-linearities. The model provides a useful tool to investigate cardiopulmonary activity during spontaneous breathing and mechanical ventilation. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Stochastic Modeling Of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard

    2014-01-01

    reliable components are needed for wind turbines. In this paper, the focus is on the reliability of critical drivetrain components such as bearings and shafts. High failure rates of these components imply a need for more reliable components. To estimate the reliability of these components, stochastic models are needed for initial defects and damage accumulation. In this paper, stochastic models are formulated considering some of the failure modes observed in these components. The models are based on theoretical considerations, manufacturing uncertainties, and size effects at different scales. It is illustrated how...

  9. Components in models of learning: Different operationalisations and relations between components

    Directory of Open Access Journals (Sweden)

    Mirkov Snežana

    2013-01-01

    Full Text Available This paper presents different operationalisations of components in different models of learning. Special emphasis is on the empirical verifications of relations between components. Starting from the research of congruence between learning motives and strategies, underlying the general model of school learning that comprises different approaches to learning, we have analyzed the empirical verifications of factor structure of instruments containing the scales of motives and learning strategies corresponding to these motives. Considering the problems in the conceptualization of the achievement approach to learning, we have discussed the ways of operationalising the goal orientations and exploring their role in using learning strategies, especially within the model of the regulation of constructive learning processes. This model has served as the basis for researching learning styles that are the combination of a large number of components. Complex relations between the components point to the need for further investigation of the constructs involved in various models. We have discussed the findings and implications of the studies of relations between the components involved in different models, especially between learning motives/goals and learning strategies. We have analyzed the role of regulation in the learning process, whose elaboration, as indicated by empirical findings, can contribute to a more precise operationalisation of certain learning components. [Projects of the Ministry of Science of the Republic of Serbia, no. 47008: Improving the Quality and Accessibility of Education in the Processes of Modernization of Serbia, and no. 179034: From Encouraging Initiative, Cooperation and Creativity in Education to New Roles and Identities in Society]

  10. The Composite OLAP-Object Data Model

    Energy Technology Data Exchange (ETDEWEB)

    Pourabbas, Elaheh; Shoshani, Arie

    2005-12-07

    In this paper, we define an OLAP-Object model that combines the main characteristics of OLAP and Object data models in order to achieve their functionalities in a common framework. We classify three different object classes: primitive, regular and composite. Then, we define a query language which uses the path concept in order to facilitate data navigation and data manipulation. The main feature of the proposed language is an anchor. It allows us to fix dynamically an object class (primitive, regular or composite) along the paths over the OLAP-Object data model for expressing queries. The queries can be formulated on objects, composite objects and combinations of both. The power of the proposed query language is investigated through multiple query examples. The semantics of the different clauses and the syntax of the proposed language are investigated.

  11. Modeling the degradation of nuclear components

    International Nuclear Information System (INIS)

    Stock, D.; Samanta, P.; Vesely, W.

    1993-01-01

    This paper describes component level reliability models that use information on degradation to predict component reliability, and which have been used to evaluate different maintenance and testing policies. The models are based on continuous time Markov processes, and are a generalization of reliability models currently used in Probabilistic Risk Assessment. An explanation of the models, the model parameters, and an example of how these models can be used to evaluate maintenance policies are discussed
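
    A minimal sketch of a continuous-time Markov degradation model of the kind described (generic, with assumed transition rates rather than the paper's parameters): three states, good / degraded / failed, with state probabilities obtained from the matrix exponential of the generator.

      import numpy as np
      from scipy.linalg import expm

      lam_gd = 0.01    # good -> degraded rate, per hour (assumed)
      lam_df = 0.05    # degraded -> failed rate (assumed)
      mu_dg  = 0.02    # degraded -> good repair rate (assumed)

      Q = np.array([
          [-lam_gd,            lam_gd,        0.0],
          [  mu_dg, -(mu_dg + lam_df),     lam_df],
          [    0.0,               0.0,        0.0],   # failed is absorbing in this sketch
      ])

      p0 = np.array([1.0, 0.0, 0.0])                  # start in the good state
      for t in (100, 500, 1000):
          pt = p0 @ expm(Q * t)
          print(f"t={t:5d} h  P(good)={pt[0]:.3f}  P(degraded)={pt[1]:.3f}  P(failed)={pt[2]:.3f}")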

  12. Exploring object-oriented technologies

    CERN Multimedia

    2000-01-01

    Object oriented technologies are the cornerstone of modern software development. A piece of software is today conceived, constructed and tested as a set of objects interacting with each other, rather than as a large sequential program. OO is present throughout the whole software life cycle allowing for maintainable code re-use, clean design and manageable complexity. OO is also the seed upon which other technologies are being built and deployed, such as distributed computing, component models, open interoperability, etc. This series of three seminars will provide a pragmatic overview of the main ideas behind OO software development and will explain the inner workings of the most outstanding technologies being built on OO, such as UML, CORBA, Component Models, Agent Oriented Computing, Business Objects, etc.

  13. A Model for Concurrent Objects

    DEFF Research Database (Denmark)

    Sørensen, Morten U.

    1996-01-01

    We present a model for concurrent objects where objects interact by taking part in common events that are closely matched to form call-response pairs, resulting in rendezvous-like communications. Objects are built from primitive objects by parallel composition, encapsulation...

  14. Identification of a Multicriteria Decision-Making Model Using the Characteristic Objects Method

    Directory of Open Access Journals (Sweden)

    Andrzej Piegat

    2014-01-01

    Full Text Available This paper presents a new, nonlinear, multicriteria decision-making method: the characteristic objects method (COMET). This approach, which can be characterized as a fuzzy reference model, determines a measurement standard for decision-making problems. This model is distinguished by a constant set of specially chosen characteristic objects that are independent of the alternatives. After identifying a multicriteria model, this method can be used to compare any number of decisional objects (alternatives) and select the best one. In the COMET, in contrast to other methods, the rank-reversal phenomenon is not observed. Rank-reversal is a paradoxical feature in decision-making methods, which is caused by determining the absolute evaluations of considered alternatives on the basis of the alternatives themselves. In the Analytic Hierarchy Process (AHP) method and similar methods, when a new alternative is added to the original alternative set, the evaluation base and the resulting evaluations of all objects change. A great advantage of the COMET is its ability to identify not only linear but also nonlinear multicriteria models of decision makers. This identification is based not on a ranking of component criteria of the multicriterion but on a ranking of a larger set of characteristic objects (characteristic alternatives) that are independent of the small set of alternatives analyzed in a given problem. As a result, the COMET is free of the faults of other methods.

  15. Identification of a putative man-made object from an underwater crash site using CAD model superimposition.

    Science.gov (United States)

    Vincelli, Jay; Calakli, Fatih; Stone, Michael; Forrester, Graham; Mellon, Timothy; Jarrell, John

    2018-04-01

    In order to identify an object in video, a comparison with an exemplar object is typically needed. In this paper, we discuss the methodology used to identify an object detected in underwater video that was recorded during an investigation into Amelia Earhart's purported crash site. A computer aided design (CAD) model of the suspected aircraft component was created based on measurements made from orthogonally rectified images of a reference aircraft, and validated against historical photographs of the subject aircraft prior to the crash. The CAD model was then superimposed on the underwater video, and specific features on the object were geometrically compared between the CAD model and the video. This geometrical comparison was used to assess the goodness of fit between the purported object and the object identified in the underwater video. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Standard object recognition memory and "what" and "where" components: Improvement by post-training epinephrine in highly habituated rats.

    Science.gov (United States)

    Jurado-Berbel, Patricia; Costa-Miserachs, David; Torras-Garcia, Meritxell; Coll-Andreu, Margalida; Portell-Cortés, Isabel

    2010-02-11

    The present work examined whether post-training systemic epinephrine (EPI) is able to modulate short-term (3 h) and long-term (24 h and 48 h) memory of standard object recognition, as well as long-term (24 h) memory of separate "what" (object identity) and "where" (object location) components of object recognition. Although object recognition training is associated with low arousal levels, all the animals received habituation to the training box in order to further reduce emotional arousal. Post-training EPI improved long-term (24 h and 48 h), but not short-term (3 h), memory in the standard object recognition task, as well as 24 h memory for both object identity and object location. These data indicate that post-training epinephrine: (1) facilitates long-term memory for standard object recognition; (2) exerts separate facilitatory effects on "what" (object identity) and "where" (object location) components of object recognition; and (3) is capable of improving memory for a low arousing task even in highly habituated rats.

  17. Improvements of an objective model of compressed breasts undergoing mammography: Generation and characterization of breast shapes.

    Science.gov (United States)

    Rodríguez-Ruiz, Alejandro; Feng, Steve Si Jia; van Zelst, Jan; Vreemann, Suzan; Mann, Jessica Rice; D'Orsi, Carl Joseph; Sechopoulos, Ioannis

    2017-06-01

    To develop a set of accurate 2D models of compressed breasts undergoing mammography or breast tomosynthesis, based on objective analysis, to accurately characterize mammograms with few linearly independent parameters, and to generate novel clinically realistic paired cranio-caudal (CC) and medio-lateral oblique (MLO) views of the breast. We seek to improve on an existing model of compressed breasts by overcoming detector size bias, removing the nipple and non-mammary tissue, pairing the CC and MLO views from a single breast, and incorporating the pectoralis major muscle contour into the model. The outer breast shapes in 931 paired CC and MLO mammograms were automatically detected with an in-house developed segmentation algorithm. From these shapes three generic models (CC-only, MLO-only, and joint CC/MLO) with linearly independent components were constructed via principal component analysis (PCA). The ability of the models to represent mammograms not used for PCA was tested via leave-one-out cross-validation, by measuring the average distance error (ADE). The individual models based on six components were found to depict breast shapes with accuracy (mean ADE-CC = 0.81 mm, ADE-MLO = 1.64 mm, ADE-Pectoralis = 1.61 mm), outperforming the joint CC/MLO model (P ≤ 0.001). The joint model based on 12 principal components contains 99.5% of the total variance of the data, and can be used to generate new clinically realistic paired CC and MLO breast shapes. This is achieved by generating random sets of 12 principal components, following the Gaussian distributions of the histograms of each component, which were obtained from the component values determined from the images in the mammography database used. Our joint CC/MLO model can successfully generate paired CC and MLO view shapes of the same simulated breast, while the individual models can be used to represent with high accuracy clinical acquired mammograms with a small set of parameters. This is the first
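
    The shape-generation step can be sketched as sampling each principal-component score from the Gaussian fitted to its histogram and mapping back through the PCA basis; the synthetic data and sizes below are assumptions, and this is not the authors' 12-component joint CC/MLO model.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(2)
      shapes = rng.normal(size=(300, 160))                  # stand-in for paired CC/MLO contours

      pca = PCA(n_components=12).fit(shapes)
      scores = pca.transform(shapes)
      mu, sigma = scores.mean(axis=0), scores.std(axis=0)   # Gaussian fit per component

      new_scores = rng.normal(mu, sigma, size=(5, 12))      # five new random parameter sets
      new_shapes = pca.inverse_transform(new_scores)        # back to contour space
      print(new_shapes.shape)                               # (5, 160)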

  18. Piles of objects

    KAUST Repository

    Hsu, Shu-Wei

    2010-01-01

    We present a method for directly modeling piles of objects in multi-body simulations. Piles of objects represent some of the more interesting, but also most time-consuming portion of simulation. We propose a method for reducing computation in many of these situations by explicitly modeling the piles that the objects may form into. By modeling pile behavior rather than the behavior of all individual objects, we can achieve realistic results in less time, and without directly modeling the frictional component that leads to desired pile shapes. Our method is simple to implement and can be easily integrated with existing rigid body simulations. We observe notable speedups in several rigid body examples, and generate a wider variety of piled structures than possible with strict impulse-based simulation. © 2010 ACM.

  19. Formal Model-Driven Engineering: Generating Data and Behavioural Components

    Directory of Open Access Journals (Sweden)

    Chen-Wei Wang

    2012-12-01

    Full Text Available Model-driven engineering is the automatic production of software artefacts from abstract models of structure and functionality. By targeting a specific class of system, it is possible to automate aspects of the development process, using model transformations and code generators that encode domain knowledge and implementation strategies. Using this approach, questions of correctness for a complex, software system may be answered through analysis of abstract models of lower complexity, under the assumption that the transformations and generators employed are themselves correct. This paper shows how formal techniques can be used to establish the correctness of model transformations used in the generation of software components from precise object models. The source language is based upon existing, formal techniques; the target language is the widely-used SQL notation for database programming. Correctness is established by giving comparable, relational semantics to both languages, and checking that the transformations are semantics-preserving.

  20. The artifacts of component-based development

    International Nuclear Information System (INIS)

    Rizwan, M.; Qureshi, J.; Hayat, S.A.

    2007-01-01

    The idea of component-based development was floated at a conference named Mass Produced Software Components in 1968 (1). Since then, engineering and scientific libraries have been developed to reuse previously developed functions. This concept is now widely used in software development as component-based development (CBD). Component-based software engineering (CBSE) is used to develop/assemble software from existing components (2). Software developed using components is called component ware (3). This paper presents different architectures of CBD such as ActiveX, the common object request broker architecture (CORBA), remote method invocation (RMI) and the simple object access protocol (SOAP). The overall objective of this paper is to support the practice of CBD by comparing its advantages and disadvantages. This paper also evaluates the object-oriented process model to adapt it for CBD. (author)

  1. The application of the unified modeling language in object-oriented analysis of healthcare information systems.

    Science.gov (United States)

    Aggarwal, Vinod

    2002-10-01

    This paper concerns itself with the beneficial effects of the Unified Modeling Language (UML), a nonproprietary object modeling standard, in specifying, visualizing, constructing, documenting, and communicating the model of a healthcare information system from the user's perspective. The author outlines the process of object-oriented analysis (OOA) using the UML and illustrates this with healthcare examples to demonstrate the practicality of application of the UML by healthcare personnel to real-world information system problems. The UML will accelerate advanced uses of object-orientation such as reuse technology, resulting in significantly higher software productivity. The UML is also applicable in the context of a component paradigm that promises to enhance the capabilities of healthcare information systems and simplify their management and maintenance.

  2. Positioning graphical objects on computer screens: a three-phase model.

    Science.gov (United States)

    Pastel, Robert

    2011-02-01

    This experiment identifies and models phases during the positioning of graphical objects (called cursors in this article) on computer displays. The human-computer interaction community has traditionally used Fitts' law to model selection in graphical user interfaces, whereas human factors experiments have found the single-component Fitts' law inadequate to model positioning of real objects. Participants (N=145) repeatedly positioned variably sized square cursors within variably sized rectangular targets using computer mice. The times for the cursor to just touch the target, for the cursor to enter the target, and for participants to indicate positioning completion were observed. The positioning tolerances were varied from very precise and difficult to imprecise and easy. The time for the cursor to touch the target was proportional to the initial cursor-target distance. The time for the cursor to completely enter the target after touching was proportional to the logarithms of cursor size divided by target tolerances. The time for participants to indicate positioning after entering was inversely proportional to the tolerance. A three-phase model defined by regions--distant, proximate, and inside the target--was proposed and could model the positioning tasks. The three-phase model provides a framework for ergonomists to evaluate new positioning techniques and can explain their deficiencies. The model provides a means to analyze tasks and enhance interaction during positioning.
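
    The three phases can be written as additive time components; the constants in the sketch below are placeholders, not the fitted values from the experiment.

      import math

      def positioning_time(distance, cursor_size, tolerance,
                           a=(0.1, 0.002), b=(0.2, 0.15), c=(0.3, 0.05)):
          """Hypothetical three-phase model: distant, proximate, inside the target."""
          t_touch   = a[0] + a[1] * distance                            # proportional to distance
          t_enter   = b[0] + b[1] * math.log2(cursor_size / tolerance)  # log of size / tolerance
          t_confirm = c[0] + c[1] / tolerance                           # inverse of tolerance
          return t_touch + t_enter + t_confirm

      print(positioning_time(distance=400, cursor_size=40, tolerance=4))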

  3. A Global Multi-Objective Optimization Tool for Design of Mechatronic Components using Generalized Differential Evolution

    DEFF Research Database (Denmark)

    Bech, Michael Møller; Nørgård, Christian; Roemer, Daniel Beck

    2016-01-01

    This paper illustrates how the relatively simple constrained multi-objective optimization algorithm Generalized Differential Evolution 3 (GDE3) can assist with the practical sizing of mechatronic components used in, e.g., digital displacement fluid power machinery. The studied bi- and tri-objective ... different optimization control parameter settings, and it is concluded that GDE3 is a reliable optimization tool that can assist mechatronic engineers in the design and decision making process.

  4. A flexible hydrological modelling system developed using an object oriented methodology

    Energy Technology Data Exchange (ETDEWEB)

    Rinde, Trond

    1998-12-31

    The report presents a software system called Process Integrating Network (PINE). The capabilities, working principles, programming technical design and principles of use of the system are described as are some practical applications. PINE is a simulation tool for modelling of hydrological and hydrologically related phenomena. The system is based on object oriented programming principles and was specially designed to provide freedom in the choice of model structures and algorithms for process descriptions. It supports full freedom with regards to spatial distribution and temporal resolution. Geographical information systems (GIS) may be integrated with PINE in order to provide full spatial distribution in system parametrisation, process simulation and visualisation of simulation results. Simulation models are developed by linking components for process description together in a structure. The system can handle compound working media such as water with chemical or biological constituents. Non-hydrological routines may then be included to describe the responses of such constituents. Features such as extensibility and reuse of program components are emphasised in the program design. Separation between process topology, process descriptions and process data facilitates simple and consistent implementation of components for process description. Such components may be automatically prototyped and their response functions may be implemented without knowledge of other parts of the program system and without the need to program import or export routines or a user interface. Model extension is thus a rapid process that does not require extensive programming skills. Components for process descriptions may further be placed in separate program libraries, which can be included in the program as required. The program system can thus be very compact while it still has a large number of process algorithms available. The system can run on both PC and UNIX platforms. 106 figs., 20
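
    The report describes PINE's actual class design in detail; purely as an illustration of the general idea of linking interchangeable process-description components into a simulation structure, a toy sketch (not PINE's API) might look like this:

```python
class ProcessComponent:
    """One hydrological phenomenon; knows only its own state and inputs."""
    def step(self, inflow, dt):
        raise NotImplementedError

class SnowStorage(ProcessComponent):
    """Degree-day snow routine (illustrative parameter values)."""
    def __init__(self, melt_rate=2.0):
        self.swe = 0.0              # snow water equivalent [mm]
        self.melt_rate = melt_rate

    def step(self, precip_and_temp, dt):
        precip, temp = precip_and_temp
        if temp <= 0.0:
            self.swe += precip
            return 0.0
        melt = min(self.swe, self.melt_rate * temp * dt)
        self.swe -= melt
        return precip + melt        # liquid water passed downstream

class LinearReservoir(ProcessComponent):
    """Simple storage-outflow routine (illustrative)."""
    def __init__(self, k=0.1):
        self.storage, self.k = 0.0, k

    def step(self, inflow, dt):
        self.storage += inflow
        outflow = self.k * self.storage * dt
        self.storage -= outflow
        return outflow

# Components are linked into a structure and driven time step by time step.
snow, reservoir = SnowStorage(), LinearReservoir()
forcing = [(5.0, -2.0), (0.0, 3.0), (2.0, 6.0)]   # (precip mm, temp degC)
runoff = [reservoir.step(snow.step(f, dt=1.0), dt=1.0) for f in forcing]
print([round(q, 2) for q in runoff])
```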

  5. Tweaking the Four-Component Model

    Science.gov (United States)

    Curzer, Howard J.

    2014-01-01

    By maintaining that moral functioning depends upon four components (sensitivity, judgment, motivation, and character), the Neo-Kohlbergian account of moral functioning allows for uneven moral development within individuals. However, I argue that the four-component model does not go far enough. I offer a more accurate account of moral functioning…

  6. Object tracking using active appearance models

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2001-01-01

    This paper demonstrates that (near) real-time object tracking can be accomplished by the deformable template model; the Active Appearance Model (AAM) using only low-cost consumer electronics such as a PC and a web-camera. Successful object tracking of perspective, rotational and translational...

  7. Hybrid PolyLingual Object Model: An Efficient and Seamless Integration of Java and Native Components on the Dalvik Virtual Machine

    OpenAIRE

    Yukun Huang; Rong Chen; Jingbo Wei; Xilong Pei; Jing Cao; Prem Prakash Jayaraman; Rajiv Ranjan

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI for reusing CAR-compliant components...

  8. Metacognition of visual short-term memory: Dissociation between objective and subjective components of VSTM

    Directory of Open Access Journals (Sweden)

    Silvia eBona

    2013-02-01

    Full Text Available The relationship between the objective accuracy of visual short-term memory (VSTM) representations and their subjective conscious experience is unknown. We investigated this issue by assessing how the objective and subjective components of VSTM in a delayed cue-target orientation discrimination task are affected by intervening distractors. On each trial, participants were shown a memory cue (a grating), the orientation of which they were asked to hold in memory. On approximately half of the trials, a distractor grating appeared during the maintenance interval; its orientation was either identical to that of the memory cue, or it differed by 10 or 40 degrees. The distractors were masked and presented briefly, so they were only consciously perceived on a subset of trials. At the end of the delay period, a memory test probe was presented, and participants were asked to indicate whether it was tilted to the left or right relative to the memory cue (VSTM accuracy; objective performance). In order to assess subjective metacognition, participants were asked to indicate the vividness of their memory for the original memory cue. Finally, participants were asked to rate their awareness of the distractor. Results showed that objective VSTM performance was impaired by distractors only when the distractors were very different from the cue, and that this occurred with both subjectively visible and invisible distractors. Subjective metacognition, however, was impaired by distractors of all orientations, but only when these distractors were subjectively invisible. Our results thus indicate that the objective and subjective components of VSTM are to some extent dissociable.

  9. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  10. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.

  11. Intuitive modeling of vaporish objects

    International Nuclear Information System (INIS)

    Sokolov, Dmitry; Gentil, Christian

    2015-01-01

    Attempts to model gases in computer graphics started in the late 1970s, and many approaches have been developed since that time. In this paper we present a non-physical method for creating vaporish objects such as clouds or smoky characters. The idea is to create a few sketches describing the rough shape of the final vaporish object. These sketches are used as condensation sets of Iterated Function Systems, providing intuitive control over the object. The advantages of the new method are simplicity, good control of the resulting shapes, and ease of subsequent object animation.
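
    As a rough sketch of the underlying idea, the fragment below iterates an IFS whose attractor is driven by a condensation set (a sketched outline of the desired shape), producing a growing vapor-like point cloud; the two affine maps and the elliptical sketch are arbitrary placeholder values, not taken from the paper.

```python
import numpy as np

def ifs_with_condensation(condensation, maps, iterations=4):
    """Iterate S_{k+1} = w_1(S_k) U ... U w_n(S_k) U C on a point cloud."""
    points = condensation.copy()
    for _ in range(iterations):
        images = [points @ A.T + b for (A, b) in maps]
        points = np.vstack(images + [condensation])
    return points

# Condensation set: a rough elliptical "sketch" of the cloud outline.
theta = np.linspace(0.0, 2.0 * np.pi, 60)
sketch = np.column_stack([np.cos(theta), 0.4 * np.sin(theta)])

# Two contractive affine maps (placeholder values chosen for illustration).
maps = [
    (0.55 * np.eye(2), np.array([0.35, 0.25])),
    (0.55 * np.eye(2), np.array([-0.35, 0.20])),
]
cloud = ifs_with_condensation(sketch, maps)
print(cloud.shape)   # the point cloud fills out a vapor-like shape
```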

  12. Development of an object oriented nodal code using the refined AFEN derived from the method of component decomposition

    International Nuclear Information System (INIS)

    Noh, J. M.; Yoo, J. W.; Joo, H. K.

    2004-01-01

    In this study, we invented a method of component decomposition to derive the systematic inter-nodal coupled equations of the refined AFEN method and developed an object-oriented nodal code to solve the derived coupled equations. The method of component decomposition decomposes the intra-nodal flux expansion of a nodal method into even and odd components in three dimensions to reduce the large coupled linear system of equations to several small single equations. This method requires no additional technique to accelerate the iteration process for solving the inter-nodal coupled equations, since the derived equations can automatically act as the coarse-mesh rebalance equations. By utilizing object-oriented programming concepts such as abstraction, encapsulation, inheritance and polymorphism, dynamic memory allocation, and operator overloading, we developed an object-oriented nodal code that facilitates input/output and dynamic memory management, and makes maintenance easy. (authors)

  13. Object feature extraction and recognition model

    International Nuclear Information System (INIS)

    Wan Min; Xiang Rujian; Wan Yongxing

    2001-01-01

    The characteristics of objects, especially flying objects, are analyzed, including spectral, image, and motion characteristics, and feature extraction is performed. To improve the speed of object recognition, a feature database is used to simplify the data in the source database; the feature-to-object relationship maps are stored in this feature database. An object recognition model based on the feature database is presented, and the way object recognition is achieved is also explained.

  14. A unified computational model of the development of object unity, object permanence, and occluded object trajectory perception.

    Science.gov (United States)

    Franz, A; Triesch, J

    2010-12-01

    The perception of the unity of objects, their permanence when out of sight, and the ability to perceive continuous object trajectories even during occlusion belong to the first and most important capacities that infants have to acquire. Despite much research a unified model of the development of these abilities is still missing. Here we make an attempt to provide such a unified model. We present a recurrent artificial neural network that learns to predict the motion of stimuli occluding each other and that develops representations of occluded object parts. It represents completely occluded, moving objects for several time steps and successfully predicts their reappearance after occlusion. This framework allows us to account for a broad range of experimental data. Specifically, the model explains how the perception of object unity develops, the role of the width of the occluders, and it also accounts for differences between data for moving and stationary stimuli. We demonstrate that these abilities can be acquired by learning to predict the sensory input. The model makes specific predictions and provides a unifying framework that has the potential to be extended to other visual event categories. Copyright © 2010 Elsevier Inc. All rights reserved.

  15. Objective and subjective assessment of tonal components in noise from UK wind farm sites

    International Nuclear Information System (INIS)

    McKenzie, A.R.

    1997-01-01

    The level of any tonal components in the noise from a wind farm site can be quantified using objective analysis procedures. These procedures are, however, open to a certain amount of interpretation. An automated assessment procedure has, therefore, been developed that is appropriate to the needs of the wind turbine industry. This paper describes a study to compare the results of objective assessments carried out using this method with the results of carefully controlled subjective listening tests for samples of wind turbine noise from nine U.K. wind farm sites. (author)

  16. Object selection costs in visual working memory: A diffusion model analysis of the focus of attention.

    Science.gov (United States)

    Sewell, David K; Lilburn, Simon D; Smith, Philip L

    2016-11-01

    A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can occur. The need to orient the focus of attention implies that single-object accounts typically predict response time costs associated with object selection even when working memory is not full (i.e., memory load is less than 4 items). For other theories that assume storage of multiple items in the focus of attention, predictions depend on specific assumptions about the way resources are allocated among items held in the focus, and how this affects the time course of retrieval of items from the focus. These broad theoretical accounts have been difficult to distinguish because conventional analyses fail to separate components of empirical response times related to decision-making from components related to selection and retrieval processes associated with accessing information in working memory. To better distinguish these response time components from one another, we analyze data from a probed visual working memory task using extensions of the diffusion decision model. Analysis of model parameters revealed that increases in memory load resulted in (a) reductions in the quality of the underlying stimulus representations in a manner consistent with a sample size model of visual working memory capacity and (b) systematic increases in the time needed to selectively access a probed representation in memory. The results are consistent with single-object theories of the focus of attention. The results are also consistent with a subset of theories that assume a multiobject focus of attention in which resource allocation diminishes both the quality and accessibility of the underlying representations.
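
    For readers unfamiliar with the diffusion decision model, the following toy simulation illustrates its ingredients: evidence accumulates with a given drift toward one of two boundaries, and a non-decision time absorbs encoding, memory-selection and motor components. The parameter values and their mapping to memory load are illustrative only, not estimates from the study.

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, n_trials=500, dt=0.001, sigma=1.0,
                 rng=np.random.default_rng(0)):
    """Simulate a symmetric two-boundary diffusion decision process.

    Returns (response times, choices); choice 1 = upper boundary.
    The non-decision time (ndt) stands in for encoding plus selection and
    motor components that sit outside the decision stage.
    """
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ndt)
        choices.append(1 if x > 0 else 0)
    return np.array(rts), np.array(choices)

# Lower drift (poorer memory quality) and a longer non-decision time mimic
# the reported effects of higher memory load (values are illustrative).
for load, (v, ter) in {1: (2.0, 0.30), 4: (1.0, 0.38)}.items():
    rt, ch = simulate_ddm(drift=v, boundary=1.0, ndt=ter)
    print(load, round(rt.mean(), 3), round(ch.mean(), 3))
```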

  17. A Moving Object Detection Algorithm Based on Color Information

    International Nuclear Information System (INIS)

    Fang, X H; Xiong, W; Hu, B J; Wang, L T

    2006-01-01

    This paper presents a new moving-object detection algorithm aimed at quick detection and orientation of moving objects. It uses a pixel and its neighbors as an image vector to represent that pixel, models each chrominance component as a mixture of Gaussians, and sets up a separate Gaussian mixture model for each YUV chrominance component. In order to make full use of the spatial information, color segmentation and the background model are combined. Simulation results show that the algorithm can detect intact moving objects even when the foreground has low contrast with the background.
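
    The algorithm described above builds a separate Gaussian mixture per YUV chrominance component; as a rough stand-in, the sketch below uses OpenCV's stock MOG2 background subtractor on YUV-converted frames. The input file name is hypothetical, and the paper's per-channel mixture modelling and color segmentation are not reproduced here.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")          # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                varThreshold=16,
                                                detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Convert to YUV; the paper models the U and V chrominance planes with
    # their own Gaussian mixtures, here approximated by one MOG2 model.
    yuv = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)
    mask = subtractor.apply(yuv)               # foreground (moving) pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    cv2.imshow("moving objects", mask)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```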

  18. 3-D inelastic analysis methods for hot section components. Volume 2: Advanced special functions models

    Science.gov (United States)

    Wilson, R. B.; Banerjee, P. K.

    1987-01-01

    This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.

  19. Ferromanganese Furnace Modelling Using Object-Oriented Principles

    Energy Technology Data Exchange (ETDEWEB)

    Wasboe, S.O.

    1996-12-31

    This doctoral thesis defines an object-oriented framework for aiding unit process modelling and applies it to model high-carbon ferromanganese furnaces. A framework is proposed for aiding modelling of the internal topology and the phenomena taking place inside unit processes. Complex unit processes may consist of a number of zones where different phenomena take place. A topology is therefore defined for the unit process itself, which shows the relations between the zones. Inside each zone there is a set of chemical species and phenomena, such as reactions, phase transitions, heat transfer etc. A formalized graphical methodology is developed as a tool for modelling these zones and their interaction. The symbols defined in the graphical framework are associated with objects and classes. The rules for linking the objects are described using OMT (Object Modeling Technique) diagrams and formal language formulations. The basic classes that are defined are implemented using the C++ programming language. The ferromanganese process is a complex unit process. A general description of the process equipment is given, and a detailed discussion of the process itself and a system theoretical overview of it. The object-oriented framework is then used to develop a dynamic model based on mass and energy balances. The model is validated by measurements from an industrial furnace. 101 refs., 119 figs., 20 tabs.

  20. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands for more functionality, at even lower prices, and under opposing constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the Simulink S-functions.

  1. An object-oriented approach to energy-economic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wise, M.A.; Fox, J.A.; Sands, R.D.

    1993-12-01

    In this paper, the authors discuss the experiences in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to programming it in C++, a standard object-oriented programming language, will be detailed. The authors then discuss the main differences between writing the object-oriented program versus a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach based on the experience in building energy-economic models with procedure-oriented approaches and languages.

  2. Roadmap for Lean implementation in Indian automotive component manufacturing industry: comparative study of UNIDO Model and ISM Model

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2015-06-01

    Demand for automobiles in India has increased drastically over the last two and a half decades. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to fulfill the demand of these customers. The United Nations Industrial Development Organization (UNIDO) has taken a proactive approach, in association with the Automotive Component Manufacturers Association of India (ACMA) and the Government of India, to assist Indian SMEs in various clusters since 1999 and make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as the ISM Model of Lean implementation, and to validate the ISM Model by comparing it with the UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in the Indian automotive component industry. This paper is based on secondary data, which include research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the field of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages, whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through its high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework presents greater insight into the implementation process, at a more micro level, compared with the UNIDO-ACMA Model.

  3. Learning models of activities involving interacting objects

    DEFF Research Database (Denmark)

    Manfredotti, Cristina; Pedersen, Kim Steenstrup; Hamilton, Howard J.

    2013-01-01

    We propose the LEMAIO multi-layer framework, which makes use of hierarchical abstraction to learn models for activities involving multiple interacting objects from time sequences of data concerning the individual objects. Experiments in the sea navigation domain yielded learned models that were t...

  4. Object interaction competence model v. 2.0

    DEFF Research Database (Denmark)

    Bennedsen, Jens; Schulte, C.

    2013-01-01

    Teaching and learning object oriented programming has to take into account the specific object oriented characteristics of program execution, namely the interaction of objects during runtime. Prior to the research reported in this article, we have developed a competence model for object interaction...

  5. Object-oriented process dose modeling for glovebox operations

    International Nuclear Information System (INIS)

    Boerigter, S.T.; Fasel, J.H.; Kornreich, D.E.

    1999-01-01

    The Plutonium Facility at Los Alamos National Laboratory supports several defense and nondefense-related missions for the country by performing fabrication, surveillance, and research and development for materials and components that contain plutonium. Most operations occur in rooms with one or more arrays of gloveboxes connected to each other via trolley gloveboxes. Minimizing the effective dose equivalent (EDE) is a growing concern as a result of steadily declining allowable dose limits being imposed and a growing general awareness of safety in the workplace. In general, the authors discriminate three components of a worker's total EDE: the primary EDE, the secondary EDE, and background EDE. A particular background source of interest is the nuclear materials vault. The distinction between sources inside and outside of a particular room is arbitrary with the underlying assumption that building walls and floors provide significant shielding to justify including sources in other rooms in the background category. Los Alamos has developed the Process Modeling System (ProMoS) primarily for performing process analyses of nuclear operations. ProMoS is an object-oriented, discrete-event simulation package that has been used to analyze operations at Los Alamos and proposed facilities such as the new fabrication facilities for the Complex-21 effort. In the past, crude estimates of the process dose (the EDE received when a particular process occurred), room dose (the EDE received when a particular process occurred in a given room), and facility dose (the EDE received when a particular process occurred in the facility) were used to obtain an integrated EDE for a given process. Modifications to the ProMoS package were made to utilize secondary dose information to use dose modeling to enhance the process modeling efforts

  6. A multi-objective reliable programming model for disruption in supply chain

    Directory of Open Access Journals (Sweden)

    Emran Mohammadi

    2013-05-01

    Full Text Available One of the primary concerns in supply chain management is handling risk components properly. There are various reasons for risk in a supply chain, such as natural disasters, unexpected incidents, etc. When a series of facilities is built and deployed, one or more of them could fail at any time due to bad weather conditions, labor strikes, economic crises, sabotage or terrorist attacks, or changes in ownership of the system. The objective of risk management is to reduce the effects of such disruptions to an acceptable level. To overcome the risk, we propose a reliable capacitated supply chain network design (RSCND) model that considers random disruption risks at both distribution centers and suppliers. The proposed study considers three objective functions, and the implementation is verified on sample instances.

  7. Association of objectively measured physical activity with body components in European adolescents.

    Science.gov (United States)

    Jiménez-Pavón, David; Fernández-Vázquez, Amaya; Alexy, Ute; Pedrero, Raquel; Cuenca-García, Magdalena; Polito, Angela; Vanhelst, Jérémy; Manios, Yannis; Kafatos, Anthony; Molnar, Dénes; Sjöström, Michael; Moreno, Luis A

    2013-07-18

    Physical activity (PA) is suggested to contribute to fat loss not only through increasing energy expenditure "per se" but also through increasing muscle mass; therefore, it would be interesting to better understand the specific associations of PA with the body's different components, such as fat mass and muscle mass. The aim of the present study was to examine the association between objectively measured PA and indices of fat mass and muscle components independently of each other giving, at the same time, gender-specific information in a wide cohort of European adolescents. A cross-sectional study in a school setting was conducted in 2200 (1016 males) adolescents (14.7 ± 1.2 years). Weight, height, skinfold thickness, bioimpedance and PA (accelerometry) were measured. Indices of fat mass (body mass index, % fat mass, sum of skinfolds) and muscular component (assessed as fat-free mass) were calculated. Multiple regression analyses were performed adjusting for several confounders including fat-free mass and fat mass when possible. Vigorous PA was positively associated with height and negatively associated with indices of fat mass in both genders, except for average PA in relation to body mass index in females. Regarding muscular components, vigorous PA showed positive associations with fat-free mass and muscle mass in both genders. Average PA was positively associated with fat-free mass (both p<0.05) in males and females. The present study suggests that PA, especially vigorous PA, is negatively associated with indices of fat mass and positively associated with markers of muscle mass, after adjusting for several confounders (including indices of fat mass and muscle mass when possible). Future studies should focus not only on the classical relationship between PA and fat mass, but also on PA and muscular components, analyzing the independent role of both with the different PA intensities.

  8. Concurrent Models for Object Execution

    OpenAIRE

    Diertens, Bob

    2012-01-01

    In previous work we developed a framework of computational models for the concurrent execution of functions on different levels of abstraction. It shows that the traditional sequential execution of functions is just one possible implementation of an abstract computational model that allows for the concurrent execution of functions. We use this framework as a base for the development of abstract computational models that allow for the concurrent execution of objects.

  9. Model-integrating software components engineering flexible software systems

    CERN Document Server

    Derakhshanmanesh, Mahdi

    2015-01-01

    In his study, Mahdi Derakhshanmanesh builds on the state of the art in modeling by proposing to integrate models into running software on the component-level without translating them to code. Such so-called model-integrating software exploits all advantages of models: models implicitly support a good separation of concerns, they are self-documenting and thus improve understandability and maintainability and in contrast to model-driven approaches there is no synchronization problem anymore between the models and the code generated from them. Using model-integrating components, software will be

  10. Model reduction by weighted Component Cost Analysis

    Science.gov (United States)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful to compute the modal costs of very high order systems. A numerical example for MINIMAST system is presented.
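
    A minimal sketch of the basic white-noise, linear-system computation: solve a Lyapunov equation for the steady-state state covariance, read off each state's contribution to the output cost, and keep the states with the largest contributions. The system matrices are toy values, and the weighted (colored-noise, actuator-dynamics) extensions discussed in the paper are not shown.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy stable system  dx/dt = A x + B w,  y = C x,  w unit white noise.
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.0, -5.0,  0.0],
              [ 0.0,  0.0, -0.2]])
B = np.array([[1.0], [1.0], [0.3]])
C = np.array([[1.0, 0.5, 2.0]])

# Steady-state state covariance X solves  A X + X A' + B B' = 0.
X = solve_continuous_lyapunov(A, -B @ B.T)

# Component cost of state i: its contribution to the output cost E[y'y].
component_costs = np.diag(X @ C.T @ C)
total_cost = np.trace(C @ X @ C.T)
print(component_costs, total_cost)

# Model reduction: retain the states with the largest component costs.
keep = np.argsort(component_costs)[::-1][:2]
print("retain states:", sorted(keep))
```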

  11. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, whereas the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods which may be classified into two broad categories: brute force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component as opposed to the brute force-type methods which require full evaluation of all sensitivities for all responses calculated by each component in the overall model, which proves computationally prohibitive for realistic problems. Second, the new method treats each component as a black-box as opposed to amalgamated-type methods which requires explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
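
    The idea can be illustrated with a two-component chain in which each component is a black box exposing only a forward evaluation and a vector-Jacobian (adjoint) product: one backward sweep per final response yields the sensitivity of that response to the first component's inputs, without forming full forward sensitivities of every component. The functions below are made up for illustration and are not the manuscript's test problem.

```python
import numpy as np

# Component 1: maps three inputs to two intermediate outputs.
def f1(x):  return np.array([x[0] * x[1], x[1] + x[2]])
def f1_vjp(x, v):
    J = np.array([[x[1], x[0], 0.0],
                  [0.0,  1.0,  1.0]])
    return v @ J

# Component 2: maps the two intermediates to one final response.
def f2(u):  return np.array([np.sin(u[0]) + u[1] ** 2])
def f2_vjp(u, v):
    J = np.array([[np.cos(u[0]), 2.0 * u[1]]])
    return v @ J

x = np.array([1.0, 2.0, 0.5])
u = f1(x)
y = f2(u)                       # final response of the chained model

# One adjoint sweep per final response gives d y / d x of the whole chain.
adjoint = np.array([1.0])       # seed for the single response
adjoint = f2_vjp(u, adjoint)    # back through component 2
grad_x = f1_vjp(x, adjoint)     # back through component 1
print(grad_x)                   # sensitivities of y to the first inputs
```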

  12. Objective selection of EEG late potentials through residual dependence estimation of independent components

    International Nuclear Information System (INIS)

    Milanesi, M; James, C J; Martini, N; Menicucci, D; Gemignani, A; Ghelarducci, B; Landini, L

    2009-01-01

    This paper presents a novel method to objectively select electroencephalographic (EEG) cortical sources estimated by independent component analysis (ICA) in event-related potential (ERP) studies. A proximity measure based on mutual information is employed to estimate residual dependences of the components that are then hierarchically clustered based on these residual dependences. Next, the properties of each group of components are evaluated at each level of the hierarchical tree by two indices that aim to assess both cluster tightness and physiological reliability through a template matching process. These two indices are combined in three different approaches to bring to light the hierarchical structure of the cluster organizations. Our method is tested on a set of experiments with the purpose of enhancing late positive ERPs elicited by emotional picture stimuli. Results suggest that the best way to look for physiologically plausible late positive potential (LPP) sources is to explore in depth the tightness of those clusters that, taken together, best resemble the template. According to our results, after brain sources clustering, LPPs are always identified more accurately than from ensemble-averaged raw data. Since the late components of an ERP involve the same associative areas, regardless of the modality of stimulation or specific tasks administered, the proposed method can be simply adapted to other ERP studies, and extended from psychophysiological studies to pathological or sport training evaluation support

  13. A model of objective weighting for EIA.

    Science.gov (United States)

    Ying, L G; Liu, Y C

    1995-06-01

    In spite of progress achieved in the research of environmental impact assessment (EIA), the problem of weight distribution for a set of parameters has not, as yet, been properly solved. This paper presents an approach to objective weighting using a procedure of P_ij principal component-factor analysis (P_ij PCFA), which suits specifically those parameters measured directly by physical scales. The P_ij PCFA weighting procedure reforms the conventional weighting practice in two aspects: first, the expert subjective judgment is replaced by the standardized measure P_ij as the original input of weight processing and, secondly, the principal component-factor analysis is introduced to approach the environmental parameters for their respective contributions to the totality of the regional ecosystem. Not only is the P_ij PCFA weighting logical in theoretical reasoning, it also suits practically all levels of professional routines in natural environmental assessment and impact analysis. Having been assured of objectivity and accuracy in the EIA case study of the Chuansha County in Shanghai, China, the P_ij PCFA weighting procedure has the potential to be applied in other geographical fields that need to assign weights to parameters measured by physical scales.

  14. Pump Component Model in SPACE Code

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

    This technical report describes the pump component model in SPACE code. A literature survey was made on pump models in existing system codes. The models embedded in SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report.

  15. Modeling a terminology-based electronic nursing record system: an object-oriented approach.

    Science.gov (United States)

    Park, Hyeoun-Ae; Cho, InSook; Byeun, NamSoo

    2007-10-01

    The aim of this study was to present our perspectives on healthcare information analysis at a conceptual level and the lessons learned from our experience with the development of a terminology-based enterprise electronic nursing record system - which was one of components in an EMR system at a tertiary teaching hospital in Korea - using an object-oriented system analysis and design concept. To ensure a systematic approach and effective collaboration, the department of nursing constituted a system modeling team comprising a project manager, systems analysts, user representatives, an object-oriented methodology expert, and healthcare informaticists (including the authors). A rational unified process (RUP) and the Unified Modeling Language were used as a development process and for modeling notation, respectively. From the scenario and RUP approach, user requirements were formulated into use case sets and the sequence of activities in the scenario was depicted in an activity diagram. The structure of the system was presented in a class diagram. This approach allowed us to identify clearly the structural and behavioral states and important factors of a terminology-based ENR system (e.g., business concerns and system design concerns) according to the viewpoints of both domain and technical experts.

  16. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  17. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    Science.gov (United States)

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

    The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step wise, multiple objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph are simulated, consistently with measured values.

  18. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  19. A Bayesian alternative for multi-objective ecohydrological model specification

    Science.gov (United States)

    Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori

    2018-01-01

    Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multiobjective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior

  20. On the effect of model parameters on forecast objects

    Science.gov (United States)

    Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott

    2018-04-01

    Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
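
    A compact sketch of ingredients (1) and (3) under stated assumptions: Latin hypercube sampling of three hypothetical model parameters with SciPy, a toy stand-in for "run the forecast model and extract object features", and a multivariate linear regression of the features on the parameters. The clustering step and the multiple-testing control described in the abstract are omitted.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)

# (1) Latin hypercube sample of three model parameters, scaled to ranges.
sampler = qmc.LatinHypercube(d=3, seed=1)
unit = sampler.random(n=50)
lo, hi = np.array([0.1, 1.0, 200.0]), np.array([0.9, 5.0, 800.0])
params = qmc.scale(unit, lo, hi)

# Toy stand-in for running the model and extracting object features:
# mean object size and object count as noisy functions of the parameters.
size  = 3.0 * params[:, 0] + 0.5 * params[:, 1] + rng.normal(0, 0.1, 50)
count = 10.0 - 2.0 * params[:, 0] + 0.002 * params[:, 2] + rng.normal(0, 0.2, 50)
features = np.column_stack([size, count])

# (3) Multivariate multiple regression: feature sensitivities to parameters.
X = np.column_stack([np.ones(50), params])
coef, *_ = np.linalg.lstsq(X, features, rcond=None)
print(np.round(coef[1:], 3))    # rows: parameters, columns: object features
```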

  1. A principal components model of soundscape perception.

    Science.gov (United States)

    Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta

    2010-11-01

    There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: Pleasantness, eventfulness, and familiarity, explaining 50, 18 and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.
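
    The core analysis step is a principal components analysis of an excerpts-by-attribute-scales matrix; a minimal sketch with random placeholder data (50 excerpts by 116 scales, standing in for the averaged ratings) is shown below.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder data: mean ratings of 50 soundscape excerpts on 116 scales.
rng = np.random.default_rng(0)
ratings = rng.normal(size=(50, 116))

pca = PCA(n_components=3)
scores = pca.fit_transform(StandardScaler().fit_transform(ratings))

# Explained variance of the first three components (cf. pleasantness,
# eventfulness and familiarity in the study) and per-excerpt scores that
# could then be correlated with acoustic variables such as loudness.
print(pca.explained_variance_ratio_)
print(scores.shape)   # (50, 3)
```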

  2. An object-based visual attention model for robotic applications.

    Science.gov (United States)

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending integrated competition hypothesis, this paper presents an object-based visual attention model, which selects one object of interest using low-dimensional features, resulting that visual perception starts from a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: learning phase and attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model preattentively segments the visual field into discrete proto-objects using Gestalt rules at first. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combination of location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied into distinct tasks of robots: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.

  3. A kinetic model for impact/sliding wear of pressurized water reactor internal components. Application to rod cluster control assemblies

    International Nuclear Information System (INIS)

    Zbinden, M.; Durbec, V.

    1996-12-01

    A new concept of an industrial wear model adapted to components of nuclear plants is proposed. Its originality is that it is supported, on the one hand, by experimental results obtained from wear machines over relatively short operating times and, on the other hand, by information from operating feedback on the real wear kinetics of reactor components. The proposed model is illustrated by an example corresponding to a specific real situation. The determination of the coefficients needed to cover the whole set of configurations, and the validation of the model in these configurations, have been the focus of the most recent work. (author)

  4. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos

    2012-07-01

    We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis. © 2012 ACM 0730-0301/2012/08-ART55.

  5. Object-oriented approach for gas turbine engine simulation

    Science.gov (United States)

    Curlett, Brian P.; Felder, James L.

    1995-01-01

    An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype code is a fully functional, general purpose engine simulation program, however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules will interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report is the component based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.

  6. Pricing end-of-life components

    Science.gov (United States)

    Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.

    2005-11-01

    The main objective of a product recovery facility (PRF) is to disassemble end-of-life (EOL) products and sell the reclaimed components for reuse and the recovered materials in second-hand markets. Variability in the inflow of EOL products and fluctuation in demand for reusable components contribute to the volatility in inventory levels. To stay profitable, PRFs ought to manage their inventory by regulating prices appropriately to minimize holding costs. This work presents two deterministic pricing models for a PRF bounded by environmental regulations. In the first model, the demand is price dependent, and in the second, the demand is both price and time dependent. The models are valid for a single component with no inventory replenishment during the selling horizon. Numerical examples are presented to illustrate the models.
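
    In the spirit of the first model, the sketch below prices a single EOL component under a linear price-dependent demand rate over a fixed selling horizon, trading revenue against holding cost; every number is an illustrative placeholder rather than a value from the paper.

```python
import numpy as np

def profit(price, a=200.0, b=4.0, horizon=10.0, stock=150.0,
           holding_cost=0.05):
    """Revenue minus holding cost for one EOL component type.

    Demand rate is linear in price, d(p) = a - b*p (units per period),
    capped by the available stock; remaining inventory accrues a holding
    cost. All parameter values are illustrative placeholders.
    """
    demand_rate = max(a - b * price, 0.0)
    sold = min(stock, demand_rate * horizon)
    unsold = stock - sold
    # approximate holding cost on the average inventory over the horizon
    avg_inventory = (stock + unsold) / 2.0
    return price * sold - holding_cost * avg_inventory * horizon

prices = np.linspace(1.0, 50.0, 200)
best = prices[np.argmax([profit(p) for p in prices])]
print(round(best, 2))   # price that maximizes profit on this grid
```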

  7. A COMPARATIVE STUDY ON OBJECTIVES AND COMPONENTS OF READING SKILL IN NATIONAL CURRICULUM OF IRAN AND AMERICA (NEW JERSEY AT HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    Elham Ghaderi Doust

    2016-12-01

    Full Text Available This study aims to provide a preliminary basis for codifying the objectives and components of teaching reading within the national language curriculum for upper secondary school, in correlation with the elementary and lower secondary curricula. The data include the Persian language curriculum in Iranian upper secondary schools (version 2007) and the American core curriculum for Language Arts (New Jersey, 2004), collected through library study and note taking from Iranian and foreign documents. The data have been analyzed qualitatively (through the grounded theory method) at the secondary level. In the present research, the objectives and components of teaching reading within the curricula of the two countries are analyzed based on the Autonomous and Ideological approaches to literacy. The analysis suggests that the objectives and components of the American curriculum for teaching reading are formulated under the influence of the Ideological approach, whereas the Iranian reading curriculum (consciously or unconsciously) exhibits the features of the Autonomous approach, and features related to the Ideological approach are negligible in Iran. After discussing the characteristics of the American (New Jersey) curriculum, influenced by the Ideological approach to literacy, the merits and demerits of the objectives and components of the Iranian curriculum for teaching reading are discussed and some proposals for refinement are offered.

  8. Protein Nano-Object Integrator (ProNOI) for generating atomic style objects for molecular modeling

    Directory of Open Access Journals (Sweden)

    Smith Nicholas

    2012-12-01

    Full Text Available Abstract Background With the progress of nanotechnology, one frequently has to model biological macromolecules simultaneously with nano-objects. However, the atomic structures of the nano objects are typically not available or they are solid state entities. Because of that, researchers have to investigate such nano systems by generating models of the nano objects in a manner that allows the existing software to carry out the simulations. In addition, it should allow generating composite objects with complex shape by combining basic geometrical figures and embedding biological macromolecules within the system. Results Here we report the Protein Nano-Object Integrator (ProNOI), which allows for generating atomic-style geometrical objects with user-desired shape and dimensions. An unlimited number of objects can be created and combined with biological macromolecules in Protein Data Bank (PDB) format files. Once the objects are generated, the users can use sliders to manipulate their shape, dimension and absolute position. In addition, the software offers the option to charge the objects with either specified surface or volumetric charge density and to model them with user-desired dielectric constants. According to the user preference, the biological macromolecule atoms can be assigned charges and radii according to four different force fields: Amber, Charmm, OPLS and PARSE. The biological macromolecules and the atomic-style objects are exported as a position, charge and radius (PQR) file, or, if a default dielectric constant distribution is not selected, as a position, charge, radius and epsilon (PQRE) file. As an illustration of the capabilities of ProNOI, we created a composite object in the shape of a robot, aptly named the Clemson Robot, whose parts are charged with various volumetric charge densities and which holds the barnase-barstar protein complex in its hand. Conclusions The Protein Nano-Object Integrator (ProNOI) is a convenient tool for

  9. Real time natural object modeling framework

    International Nuclear Information System (INIS)

    Rana, H.A.; Shamsuddin, S.M.; Sunar, M.H.

    2008-01-01

    Computer graphics (CG) is a key technology for producing visual content. Computer-generated imagery techniques are being developed and applied, particularly in virtual reality applications, film production, training and flight simulators, to provide complete compositions of realistic computer-graphic images. Natural objects like clouds are an integral feature of the sky; without them, synthetic outdoor scenes seem unrealistic. Modeling and animating such objects is a difficult task, and most systems are hard to use, as they require the adjustment of numerous complex parameters and are non-interactive. This paper presents an intuitive, interactive system to artistically model, animate, and render visually convincing clouds using modern graphics hardware. A high-level interface models clouds through the visual use of cubes. Clouds are rendered with the hardware-accelerated OpenGL API. The resulting interactive design and rendering system produces perceptually convincing cloud models that can be used in any interactive system. (author)

  10. Component-based modeling of systems for automated fault tree generation

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2009-01-01

    One of the challenges in the field of automated fault tree construction is to find an efficient modeling approach that can support modeling of different types of systems without ignoring any necessary details. In this paper, we present a new system modeling approach for computer-aided fault tree generation. In this method, every system model is composed of components and of different types of flows propagating through them. Each component has a function table that describes its input-output relations; for components with several operational states, there is also a state transition table. Each component can communicate with other components in the system only through its inputs and outputs. A trace-back algorithm is proposed that can be applied to the system model to generate the required fault trees. The system modeling approach and the fault tree construction algorithm are applied to a fire sprinkler system and the results are presented.
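
    A minimal sketch of the modeling idea described above, with invented class names and a toy fire-sprinkler fragment (the authors' actual table encoding and algorithm details are not reproduced): each component carries a function table linking an output deviation to upstream input deviations and local basic events, and the trace-back walks these tables to emit a nested OR-tree.

        # Each component's "function table" maps a deviation of one of its outputs to the
        # input deviations that can cause it; "basic_events" holds local failure modes.
        # trace_back() walks these tables upstream and returns a nested OR fault tree.
        class Component:
            def __init__(self, name, function_table, basic_events=None):
                self.name = name
                self.function_table = function_table      # {output_deviation: [(upstream, deviation)]}
                self.basic_events = basic_events or {}    # {output_deviation: [failure modes]}

        def trace_back(system, component_name, deviation):
            """Return a fault (sub)tree for 'deviation' at the named component."""
            comp = system[component_name]
            causes = [{"basic_event": f"{comp.name}: {e}"}
                      for e in comp.basic_events.get(deviation, [])]
            for upstream_name, upstream_dev in comp.function_table.get(deviation, []):
                causes.append(trace_back(system, upstream_name, upstream_dev))
            return {"gate": "OR", "event": f"{comp.name}: {deviation}", "causes": causes}

        # Toy model: no flow at a sprinkler nozzle traced back through pipe and pump.
        system = {
            "nozzle": Component("nozzle", {"no flow": [("pipe", "no flow")]},
                                {"no flow": ["nozzle blocked"]}),
            "pipe":   Component("pipe",   {"no flow": [("pump", "no output")]},
                                {"no flow": ["pipe rupture"]}),
            "pump":   Component("pump",   {},
                                {"no output": ["pump fails to start", "loss of power"]}),
        }
        print(trace_back(system, "nozzle", "no flow"))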

  11. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, provided an inclusive component database is developed. (author)

  12. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach to parameter estimation of component reliability models at NPP Krsko is presented. Mathematical approaches for different types of uncertainty analysis are introduced and applied in accordance with a set of predefined requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical evaluation of the posterior, which can then be approximated by a suitable probability distribution (here a lognormal distribution), proved to be the most appropriate uncertainty analysis. (author)
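
    For illustration only (the data and prior below are made up and are not the Krsko values), the Bayesian scheme mentioned above can be sketched as a numerical posterior for a time-related failure rate, followed by a lognormal approximation obtained by moment matching in log-space:

        # Numerical Bayesian update for a time-related component: lognormal prior on the
        # failure rate, Poisson likelihood for n failures observed over a given time,
        # and a lognormal approximation of the posterior by matching log-moments.
        import numpy as np

        def integrate(y, x):
            """Simple trapezoidal integration, kept explicit for portability."""
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        n_failures, hours = 2, 5.0e4            # observed failures and cumulative time (assumed)
        prior_median, prior_ef = 1.0e-5, 10.0   # lognormal prior: median and error factor

        mu0 = np.log(prior_median)
        sigma0 = np.log(prior_ef) / 1.645       # error factor = 95th/50th percentile ratio

        lam = np.logspace(-8, -2, 2000)         # grid over the failure rate (per hour)
        log_prior = -0.5 * ((np.log(lam) - mu0) / sigma0) ** 2 - np.log(lam)
        log_like = n_failures * np.log(lam * hours) - lam * hours
        post = np.exp(log_prior + log_like)
        post /= integrate(post, lam)

        # Lognormal approximation of the numerical posterior via moments of log(lambda)
        mu_post = integrate(np.log(lam) * post, lam)
        var_post = integrate((np.log(lam) - mu_post) ** 2 * post, lam)
        print("posterior median ~", np.exp(mu_post),
              "error factor ~", np.exp(1.645 * np.sqrt(var_post)))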

  13. An application of object-oriented programming to process simulation

    International Nuclear Information System (INIS)

    Robinson, J.T.; Otaduy, P.J.

    1988-01-01

    This paper discusses the application of object-oriented programming to the dynamic simulation of continuous processes. Processes may be modeled using this technique as a collection of objects which communicate with each other via message passing. Arriving messages invoke methods that describe the state and/or dynamic behavior of the receiving object. The objects fall into four broad categories: actual plant components such as pumps, pipes, and tanks; abstract objects such as heat sources and conductors; plant systems such as flow loops; and simulation control and interface objects. This technique differs from traditional approaches to process simulation, in which the process is represented by either a system of differential equations or a block diagram of mathematical operators. The use of objects minimizes the representational gap between the model and the actual process. From the user's point of view, construction of a simulation model becomes equivalent to drawing a plant schematic. As an example application, a package developed for the simulation of nuclear power plants is described. The package allows users to build simulation models by selecting iconic representations of plant components from a menu and connecting them with a mouse. Objects for generating a mathematical model of the system and for controlling the simulation are generated automatically, freeing the user to concentrate on describing the process. This example illustrates the use of object-oriented programming to create a highly interactive and automated simulation environment. 2 figs
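
    A very small sketch of the message-passing style described above (not the ORNL package itself; the classes and numbers are illustrative): plant objects such as tanks and a pump exchange flow by invoking each other's methods.

        # Plant objects exchange flow by sending messages (method calls) to each other;
        # the receiving object's method updates its own state.
        class Tank:
            def __init__(self, level=1.0):
                self.level = level
            def receive(self, flow, dt):            # message: accept inflow
                self.level += flow * dt
            def drain(self, flow, dt):              # message: deliver outflow
                flow = min(flow, self.level / dt)   # cannot drain more than is stored
                self.level -= flow * dt
                return flow

        class Pump:
            def __init__(self, rated_flow, source, sink):
                self.rated_flow, self.source, self.sink = rated_flow, source, sink
            def step(self, dt):                     # one simulation step = a round of messages
                delivered = self.source.drain(self.rated_flow, dt)
                self.sink.receive(delivered, dt)

        feed, discharge = Tank(level=5.0), Tank(level=0.0)
        pump = Pump(rated_flow=0.2, source=feed, sink=discharge)
        for _ in range(10):
            pump.step(dt=1.0)
        print(feed.level, discharge.level)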

  14. A kinetic model for impact/sliding wear of pressurized water reactor internal components. Application to rod cluster control assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Zbinden, M; Durbec, V

    1996-12-01

    A new concept of an industrial wear model adapted to nuclear plant components is proposed. Its originality is that it is supported, on the one hand, by experimental results obtained from wear test machines run for relatively short operating times and, on the other hand, by operating-feedback information on the real wear kinetics of reactor components. The proposed model is illustrated by an example corresponding to a specific real situation. Determining the coefficients needed to cover the whole set of configurations, and validating the model in these configurations, has been the object of the most recent work. (author). 34 refs.

  15. Overview of the model component in ECOCLIM

    DEFF Research Database (Denmark)

    Geels, Camilla; Boegh, Eva; Bendtsen, J

    and atmospheric models. We will use the model system to 1) quantify the potential effects of climate change on ecosystem exchange of GHG and 2) estimate the impacts of changes in management practices including land use change and nitrogen (N) loads. Here the various model components will be introduced...

  16. MULTIPLE OBJECTS

    Directory of Open Access Journals (Sweden)

    A. A. Bosov

    2015-04-01

    Full Text Available Purpose. The development of complicated techniques of production and management processes, information systems, computer science, and applied objects of systems theory requires improved mathematical methods and new approaches for research on application systems. The variety and diversity of subject systems make it necessary to develop a model that generalizes classical sets and their development, sets of sets. Multiple objects, unlike sets, are constructed from multiple structures and are represented by structure and content. The aim of the work is the analysis of the multiple structures generating multiple objects and the further development of operations on these objects in application systems. Methodology. To achieve the objectives of the research, the structure of a multiple object is represented as a constructive triple consisting of a carrier, a signature and axiomatics. A multiple object is determined by its structure and content, and is represented by a hybrid superposition composed of sets, multisets, ordered sets (lists) and heterogeneous sets (sequences, corteges). Findings. In this paper we study the properties and characteristics of the components of hybrid multiple objects of complex systems, propose assessments of their complexity, and show the rules for internal and external operations on such objects. We introduce a relation of arbitrary order over multiple objects and define functions and mappings on objects of multiple structures. Originality. The paper develops the multiple structures that generate multiple objects. Practical value. The transition from abstract to subject-specific multiple structures requires a transformation of the system and of the multiple objects. The transformation involves three successive stages: specification (binding to the domain), interpretation (multiple sites) and particularization (goals). The proposed systems approach is based on hybrid sets
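
    One possible concrete reading of the hybrid superposition described above, with an invented encoding: a multiple object that carries a set, a multiset, an ordered list and a heterogeneous cortege side by side, with a simple size-based complexity measure and a component-wise union.

        # Hybrid "multiple object" as a superposition of a set, a multiset, an ordered
        # list and a heterogeneous tuple (cortege); the complexity measure and the union
        # rule below are illustrative choices, not the paper's formal definitions.
        from collections import Counter

        class MultipleObject:
            def __init__(self, plain=(), multi=(), ordered=(), cortege=()):
                self.plain = set(plain)            # classical set
                self.multi = Counter(multi)        # multiset
                self.ordered = list(ordered)       # ordered set / list
                self.cortege = tuple(cortege)      # heterogeneous sequence
            def complexity(self):
                return (len(self.plain) + sum(self.multi.values())
                        + len(self.ordered) + len(self.cortege))
            def union(self, other):
                return MultipleObject(self.plain | other.plain,
                                      self.multi + other.multi,
                                      self.ordered + other.ordered,
                                      self.cortege + other.cortege)

        a = MultipleObject({1, 2}, "aab", [3, 1], (1, "x"))
        b = MultipleObject({2, 4}, "bc", [5], ("y",))
        print(a.union(b).complexity())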

  17. Probabilistic Modeling of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei

    Wind energy is one of several energy sources in the world and a rapidly growing industry in the energy sector. When placed in offshore or onshore locations, wind turbines are exposed to wave excitations, highly dynamic wind loads and/or the wakes from other wind turbines. Therefore, most components in a wind turbine experience highly dynamic and time-varying loads. These components may fail due to wear or fatigue, and this can lead to unplanned shutdown repairs that are very costly. Design by deterministic methods using safety factors is generally unable to account for the many uncertainties. Thus, a reliability assessment should be based on probabilistic methods where stochastic modeling of failures is performed. This thesis focuses on probabilistic models and the stochastic modeling of the fatigue life of the wind turbine drivetrain. Hence, two approaches are considered for stochastic modeling...

  18. Simulated lumbar minimally invasive surgery educational model with didactic and technical components.

    Science.gov (United States)

    Chitale, Rohan; Ghobrial, George M; Lobel, Darlene; Harrop, James

    2013-10-01

    The learning and development of technical skills are paramount for neurosurgical trainees. External influences and a need for maximizing efficiency and proficiency have encouraged advancements in simulator-based learning models. The objective was to confirm the importance of establishing an educational curriculum for teaching minimally invasive techniques of pedicle screw placement using a computer-enhanced physical model of percutaneous pedicle screw placement with simultaneous didactic and technical components. A 2-hour educational curriculum was created to educate neurosurgical residents on the anatomy, pathophysiology, and technical aspects associated with image-guided pedicle screw placement. Pre- and post-didactic practical and written scores were analyzed and compared. Scores were calculated for each participant on the basis of the optimal pedicle screw starting point and trajectory for both fluoroscopy and computed tomographic navigation. Eight trainees participated in this module. Mean scores on the written didactic test improved from 78% to 100%. The technical component scores for fluoroscopic guidance improved from 58.8 to 52.9, and the technical scores for computed tomography-navigated guidance also improved from 28.3 to 26.6. Didactic and technical quantitative scores with a simulator-based educational curriculum improved objectively measured resident performance. A minimally invasive spine simulation model and curriculum may serve a valuable function in the education of neurosurgical residents and in outcomes for patients.

  19. Statistical intercomparison of global climate models: A common principal component approach with application to GCM data

    International Nuclear Information System (INIS)

    Sengupta, S.K.; Boyle, J.S.

    1993-05-01

    Variables describing atmospheric circulation and other climate parameters derived from various GCMs and obtained from observations can be represented on a spatio-temporal grid (lattice) structure. The primary objective of this paper is to explore existing as well as some new statistical methods to analyze such data structures for the purpose of model diagnostics and intercomparison from a statistical perspective. Among the several statistical methods considered here, a new method based on common principal components appears most promising for the purpose of intercomparison of spatio-temporal data structures arising in the task of model/model and model/data intercomparison. A complete strategy for such an intercomparison is outlined. The strategy includes two steps. First, the commonality of spatial structures in two (or more) fields is captured in the common principal vectors. Second, the corresponding principal components obtained as time series are then compared on the basis of similarities in their temporal evolution
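
    A simplified stand-in for the two-step strategy, on synthetic data: shared spatial patterns are estimated from the pooled covariance of two fields (a crude surrogate for a full common-principal-component fit), and the projected principal-component time series of the two fields are then compared.

        # Step 1: estimate shared spatial patterns from the pooled covariance of two
        # (time x grid) fields.  Step 2: project each field onto those patterns and
        # compare the resulting PC time series.  Data below are synthetic.
        import numpy as np

        nt, nx = 120, 50                          # time steps and grid points
        rng = np.random.default_rng(0)
        pattern = np.sin(np.linspace(0, np.pi, nx))
        field_a = np.outer(np.sin(np.arange(nt) / 6.0), pattern) + 0.3 * rng.standard_normal((nt, nx))
        field_b = np.outer(np.sin(np.arange(nt) / 6.0 + 0.5), pattern) + 0.3 * rng.standard_normal((nt, nx))

        cov = 0.5 * (np.cov(field_a, rowvar=False) + np.cov(field_b, rowvar=False))
        eigvals, eigvecs = np.linalg.eigh(cov)
        common_vectors = eigvecs[:, ::-1][:, :3]  # leading shared spatial patterns

        pcs_a = (field_a - field_a.mean(0)) @ common_vectors
        pcs_b = (field_b - field_b.mean(0)) @ common_vectors
        for k in range(3):
            r = np.corrcoef(pcs_a[:, k], pcs_b[:, k])[0, 1]
            print(f"PC {k + 1}: temporal correlation between the two fields = {r:+.2f}")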

  20. A hierarchical probabilistic model for rapid object categorization in natural scenes.

    Directory of Open Access Journals (Sweden)

    Xiaofu He

    Full Text Available Humans can categorize objects in complex natural scenes within 100-150 ms. This amazing ability of rapid categorization has motivated many computational models. Most of these models require extensive training to obtain a decision boundary in a very high dimensional feature space (e.g., ∼6,000 dimensions in a leading model) and often categorize objects in natural scenes by categorizing the context that co-occurs with objects when objects do not occupy large portions of the scenes. It is thus unclear how humans achieve rapid scene categorization. To address this issue, we developed a hierarchical probabilistic model for rapid object categorization in natural scenes. In this model, a natural object category is represented by a coarse hierarchical probability distribution (PD), which includes PDs of object geometry and of the spatial configuration of object parts. Object parts are encoded by PDs over a set of natural object structures, each of which is a concatenation of local object features. Rapid categorization is performed as statistical inference. Since the model uses a very small number (∼100) of structures for even complex object categories such as animals and cars, it requires little training and is robust in the presence of large variations within object categories and in their occurrences in natural scenes. Remarkably, we found that the model categorized animals in natural scenes and cars in street scenes with near human-level performance. We also found that the model located animals and cars in natural scenes, thus overcoming a flaw in many other models, namely categorizing objects in a natural context by categorizing contextual features. These results suggest that coarse PDs of object categories based on natural object structures, together with statistical operations on these PDs, may underlie the human ability to rapidly categorize scenes.

  1. Conditioning 3D object-based models to dense well data

    Science.gov (United States)

    Wang, Yimin C.; Pyrcz, Michael J.; Catuneanu, Octavian; Boisvert, Jeff B.

    2018-06-01

    Object-based stochastic simulation models are used to generate categorical variable models with a realistic representation of complicated reservoir heterogeneity. A limitation of object-based modeling is the difficulty of conditioning to dense data. One method to achieve data conditioning is to apply optimization techniques. Optimization algorithms can utilize an objective function measuring the conditioning level of each object while also considering the geological realism of the object. Here, an objective function is optimized with implicit filtering which considers constraints on object parameters. Thousands of objects conditioned to data are generated and stored in a database. A set of objects are selected with linear integer programming to generate the final realization and honor all well data, proportions and other desirable geological features. Although any parameterizable object can be considered, objects from fluvial reservoirs are used to illustrate the ability to simultaneously condition multiple types of geologic features. Channels, levees, crevasse splays and oxbow lakes are parameterized based on location, path, orientation and profile shapes. Functions mimicking natural river sinuosity are used for the centerline model. Channel stacking pattern constraints are also included to enhance the geological realism of object interactions. Spatial layout correlations between different types of objects are modeled. Three case studies demonstrate the flexibility of the proposed optimization-simulation method. These examples include multiple channels with high sinuosity, as well as fragmented channels affected by limited preservation. In all cases the proposed method reproduces input parameters for the object geometries and matches the dense well constraints. The proposed methodology expands the applicability of object-based simulation to complex and heterogeneous geological environments with dense sampling.

  2. Stochastic reservoir operation under drought with fuzzy objectives

    International Nuclear Information System (INIS)

    Parent, E.; Duckstein, L.

    1993-01-01

    Bi-objective reservoir operation under drought conditions is investigated using stochastic dynamic programming. As both objectives (irrigation water supply, water quality) can only be defined imprecisely, a fuzzy set approach is used to encode the decision maker's (DM's) preferences. The nature-driven components are modeled by means of classical stage-state system analysis. The state is three-dimensional (inflow memory, drought irrigation index, reservoir level); the decision vector elements are release and irrigation allocation. Stochasticity stems from the random nature of inflows and irrigation demands. The transition function includes a lag-one inflow Markov model and mass balance equations. The human-driven component is designed as a confluence of fuzzy objectives and constraints, after Bellman and Zadeh. Fuzzy numbers representing the DM's objectives are assessed by two different techniques, direct assessment and indirect pairwise comparison. The real case study of the Neste river system in southwestern France is used to illustrate the approach; the results are compared to a classical sequential decision-theoretic model derived earlier, from the viewpoints of ease of modeling, computational effort, plausibility and robustness of results.

  3. Robustness of Component Models in Energy System Simulators

    DEFF Research Database (Denmark)

    Elmegaard, Brian

    2003-01-01

    During the development of the component-based energy system simulator DNA (Dynamic Network Analysis), several obstacles to easy use of the program have been observed. Some of these have to do with the nature of the program being based on a modelling language, not a graphical user interface (GUI). Others have to do with the interaction between models of the nature of the substances in an energy system (e.g., fuels, air, flue gas), models of the components in a system (e.g., heat exchangers, turbines, pumps), and the solver for the system of equations. This paper proposes that the interaction...

  4. Model-based recognition of 3-D objects by geometric hashing technique

    International Nuclear Information System (INIS)

    Severcan, M.; Uzunalioglu, H.

    1992-09-01

    A model-based object recognition system is developed for the recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions, with lines obtained from edges using a rotation transform. The geometric hashing method is utilized for the modelling and recognition stages. Each object is modelled using 2-D views taken from viewpoints on the viewing sphere, and a hidden-line elimination algorithm is used to derive these views from the wire-frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs
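
    The sketch below illustrates the geometric hashing step in its simplest form, using 2-D point features and a single model rather than the paper's line features and multiple views; the quantization step and basis choice are arbitrary.

        # Compact geometric hashing sketch: off-line, the model's features are recorded in
        # a hash table under every ordered basis pair; on-line, a scene basis is chosen,
        # invariant coordinates are computed, and votes identify the matching model basis.
        import numpy as np
        from collections import defaultdict
        from itertools import permutations

        def basis_coords(points, i, j):
            """Coordinates of all points in the similarity-invariant frame of (p_i, p_j)."""
            p0, p1 = points[i], points[j]
            e1 = p1 - p0
            e2 = np.array([-e1[1], e1[0]])              # perpendicular vector
            rel = points - p0
            scale = e1 @ e1
            return np.column_stack([rel @ e1, rel @ e2]) / scale

        def build_table(model, step=0.25):
            table = defaultdict(list)
            for i, j in permutations(range(len(model)), 2):
                for u, v in basis_coords(model, i, j):
                    table[(round(u / step), round(v / step))].append((i, j))
            return table

        def recognize(table, scene, step=0.25):
            votes = defaultdict(int)
            i, j = 0, 1                                  # one scene basis suffices for the sketch
            for u, v in basis_coords(scene, i, j):
                for basis in table.get((round(u / step), round(v / step)), []):
                    votes[basis] += 1
            return max(votes.items(), key=lambda kv: kv[1]) if votes else None

        model = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.5, 1.5]])
        scene = model * 1.7 + np.array([3.0, -1.0])      # scaled and translated copy
        print(recognize(build_table(model), scene))      # best (model basis, vote count)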

  5. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    Full Text Available The article shows the relevance of object-oriented programming approaches to the modeling of pneumatic units (PU). Based on an analysis of the calculation schemes of pneumatic system aggregates, two basic objects were identified: a flow cavity and a material point. The basic interactions of these objects are defined. Cavity-cavity interaction: exchange of matter and energy through mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions, and the models and interactions of the elements are implemented with object-oriented programming. Mathematical models of the elements of a PU design scheme are implemented in classes derived from the base classes. These classes implement the models of the flow cavity, piston, diaphragm, short channel, diaphragm opened according to a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e. the calculation of one coefficient of the method. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional list; an iterator loop initiates the integration tact of all the objects in the list, and every fourth iteration makes the transition to the next integration step. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional modeling approaches, the proposed method offers easy enhancement, code reuse and high reliability
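
    A generic rendering of the integration loop described above (the class names and the toy discharging-cavity example are invented): each unit object exposes its state and a derivative, and a shared fourth-order Runge-Kutta driver advances all objects in the list.

        # Each unit object supplies a derivative() for its own state; a shared RK4 driver
        # advances every object in the list by one time step.  The relaxation model and
        # its parameters are made up purely for demonstration.
        class Cavity:
            def __init__(self, pressure, volume, k_out=0.05):
                self.pressure, self.volume, self.k_out = pressure, volume, k_out
            def get_state(self):
                return self.pressure
            def set_state(self, p):
                self.pressure = p
            def derivative(self, p):
                return -self.k_out * (p - 1.0e5) / self.volume   # relax towards ambient pressure

        def rk4_step(objects, dt):
            for obj in objects:
                y = obj.get_state()
                k1 = obj.derivative(y)
                k2 = obj.derivative(y + 0.5 * dt * k1)
                k3 = obj.derivative(y + 0.5 * dt * k2)
                k4 = obj.derivative(y + dt * k3)
                obj.set_state(y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)

        units = [Cavity(pressure=6.0e5, volume=1.0e-3), Cavity(pressure=3.0e5, volume=2.0e-3)]
        for _ in range(100):
            rk4_step(units, dt=1.0e-3)
        print([round(u.pressure) for u in units])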

  6. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    International Nuclear Information System (INIS)

    Saleem, A; Ahmed, N; Salah, M; Silberschmidt, V V

    2013-01-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip during machining at a predetermined amplitude and frequency. However, modelling and simulation of these transducers is a tedious and difficult task because of the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is assembled by aggregating the basic component models. System parameters are identified using the finite element technique and are then used to simulate the system in Matlab/SIMULINK. Various operating conditions are simulated to demonstrate the system performance

  7. An object-oriented simulation package for power plants

    International Nuclear Information System (INIS)

    Robinson, J.T.; Otaduy, P.J.

    1987-01-01

    This paper describes the application of object-oriented programming to the simulation of continuous/discrete processes such as nuclear power plants. Systems are modeled using this technique as a collection of objects that communicate by passing messages, which invoke methods that describe the state and dynamic behavior of objects. The objects themselves generally correspond to actual plant components, thus minimizing the representational mismatch between actual and modeled systems and facilitating their validation. Several concepts of object-oriented programming, in particular classes, inheritance, and message passing, have proved to be very useful for simulation. The use of these features is discussed and illustrated with examples

  8. Segmentation of Concealed Objects in Passive Millimeter-Wave Images Based on the Gaussian Mixture Model

    Science.gov (United States)

    Yu, Wangyang; Chen, Xiangguang; Wu, Lei

    2015-04-01

    Passive millimeter wave (PMMW) imaging has become one of the most effective means of detecting objects concealed under clothing. Due to the limitations of the available hardware and the inherent physical properties of PMMW imaging systems, images often exhibit poor contrast and low signal-to-noise ratios, so it is difficult to achieve ideal results with a general segmentation algorithm. In this paper, an advanced Gaussian Mixture Model (GMM) algorithm for the segmentation of concealed objects in PMMW images is presented. Our work builds on the fact that the GMM is a parametric statistical model often used to characterize the statistical behavior of images. Our approach is three-fold: first, we remove the noise from the image using both a notch reject filter and a total variation filter. Next, we use an adaptive parameter initialization GMM algorithm (APIGMM) to model the image histogram; the APIGMM provides an initial number of Gaussian components and starts with more appropriate parameters. A Bayesian decision is then employed to separate the pixels of concealed objects from other areas. Finally, the confidence interval (CI) method, alongside local gradient information, is used to extract the concealed objects. The proposed hybrid segmentation approach detects the concealed objects more accurately than two other state-of-the-art segmentation methods.
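
    A bare-bones version of the GMM stage, using scikit-learn on a synthetic image: a two-component mixture is fitted to pixel intensities and pixels are labeled by the Bayesian (maximum posterior) rule. The paper's adaptive initialization (APIGMM) and confidence-interval refinement are not reproduced.

        # Fit a two-component Gaussian mixture to (already denoised) pixel intensities and
        # keep the pixels whose posterior probability of belonging to the brighter
        # component exceeds 0.5.  The image below is synthetic.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        image = rng.normal(90, 12, size=(128, 128))                 # PMMW-like background
        image[40:70, 50:80] += 60                                   # concealed-object region

        gmm = GaussianMixture(n_components=2, random_state=0).fit(image.reshape(-1, 1))
        object_component = int(np.argmax(gmm.means_.ravel()))       # brighter component
        posterior = gmm.predict_proba(image.reshape(-1, 1))[:, object_component]
        mask = (posterior > 0.5).reshape(image.shape)
        print("segmented pixels:", int(mask.sum()), "of", mask.size)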

  9. Modeling accelerator structures and RF components

    International Nuclear Information System (INIS)

    Ko, K., Ng, C.K.; Herrmannsfeldt, W.B.

    1993-03-01

    Computer modeling has become an integral part of the design and analysis of accelerator structures and RF components. Sophisticated 3D codes, powerful workstations and timely theory support have all contributed to this development. We describe our modeling experience with these resources and discuss their impact on ongoing work at SLAC. Specific examples from R&D on a future linear collider and a proposed e+e- storage ring are included

  10. Modeling recall memory for emotional objects in Alzheimer's disease.

    Science.gov (United States)

    Sundstrøm, Martin

    2011-07-01

    To examine whether emotional memory (EM) of objects with self-reference in Alzheimer's disease (AD) can be modeled with binomial logistic regression in a free recall and an object recognition test to predict EM enhancement. Twenty patients with AD and twenty healthy controls were studied. Six objects (three presented as gifts) were shown to each participant. Ten minutes later, a free recall and a recognition test were applied. The recognition test had target-objects mixed with six similar distracter objects. Participants were asked to name any object in the recall test and identify each object in the recognition test as known or unknown. The total of gift objects recalled in AD patients (41.6%) was larger than neutral objects (13.3%) and a significant EM recall effect for gifts was found (Wilcoxon: p recall and recognition but showed no EM enhancement due to a ceiling effect. A logistic regression showed that likelihood of emotional recall memory can be modeled as a function of MMSE score (p Recall memory was enhanced in AD patients for emotional objects indicating that EM in mild to moderate AD although impaired can be provoked with strong emotional load. The logistic regression model suggests that EM declines with the progression of AD rather than disrupts and may be a useful tool for evaluating magnitude of emotional load.

  11. Design and Application of an Ontology for Component-Based Modeling of Water Systems

    Science.gov (United States)

    Elag, M.; Goodall, J. L.

    2012-12-01

    Many Earth system modeling frameworks have adopted an approach of componentizing models so that a large model can be assembled by linking a set of smaller model components. These model components can then be more easily reused, extended, and maintained by a large group of model developers and end users. While there has been a notable increase in component-based model frameworks in the Earth sciences in recent years, there has been less work on creating framework-agnostic metadata and ontologies for model components. Well defined model component metadata is needed, however, to facilitate sharing, reuse, and interoperability both within and across Earth system modeling frameworks. To address this need, we have designed an ontology for the water resources community named the Water Resources Component (WRC) ontology in order to advance the application of component-based modeling frameworks across water related disciplines. Here we present the design of the WRC ontology and demonstrate its application for integration of model components used in watershed management. First we show how the watershed modeling system Soil and Water Assessment Tool (SWAT) can be decomposed into a set of hydrological and ecological components that adopt the Open Modeling Interface (OpenMI) standard. Then we show how the components can be used to estimate nitrogen losses from land to surface water for the Baltimore Ecosystem study area. Results of this work are (i) a demonstration of how the WRC ontology advances the conceptual integration between components of water related disciplines by handling the semantic and syntactic heterogeneity present when describing components from different disciplines and (ii) an investigation of a methodology by which large models can be decomposed into a set of model components that can be well described by populating metadata according to the WRC ontology.

  12. The internal/external issue: what is an outer object? Another person as object and as separate other in object relations models.

    Science.gov (United States)

    Zachrisson, Anders

    2013-01-01

    The question of what we mean by the term outer object has its roots in the epistemological foundation of psychoanalysis. From the very beginning, Freud's view was Kantian, and psychoanalysis has kept that stance, as it seems. The author reviews the internal/external issue in Freud's thinking and in the central object relations theories (Klein, Winnicott, and Bion). On this background he proposes a simple model to differentiate the concept of object along one central dimension: internal object, external object, and actual person. The main arguments are: (1) there is no direct, unmediated perception of the actual person--the experience of the other is always affected by the perceiver's subjectivity; (2) in intense transference reactions and projections, the perception of the person is dominated by the qualities of an inner object--and the other person "becomes" an external object for the perceiver; (3) when this distortion is less dominating, the other person to a higher degree remains a separate other--a person in his or her own right. Clinical material illustrates these phenomena, and a graphical picture of the model is presented. Finally with the model as background, the author comments on a selection of phenomena and concepts such as unobjectionable transference, "the third position," mourning and loneliness. The way that the internal colours and distorts the external is of course a central preoccupation of psychoanalysis generally. (Spillius et al., 2011, p. 326)

  13. Object-oriented simulation for the Superconducting Super Collider

    International Nuclear Information System (INIS)

    Zhou, Jiasheng; Chung, Moon-Jung

    1992-10-01

    This paper describes the design and implementation of an object-oriented simulation environment called OZ for the Superconducting Super Collider (SSC). The design applies object-oriented technology to data visualization, behavior modelling, dynamic simulation and version control. A meta-class structure is proposed to model different types of objects in large systems by their functionality. OZ provides a direct-manipulation user interface which allows the user to visualize the data as objects in the database and to interactively model the components of the system. Modelling can be exercised at different levels of the class hierarchy and then dynamically bound into a system for simulation. Inheritance is used to derive new configurations of the system or subsystem from existing ones and to specify an object's behavior. Delegation is used to construct a system by instantiating existing objects and "stealing" their methods through delegators

  14. Experiment planning using high-level component models at W7-X

    International Nuclear Information System (INIS)

    Lewerentz, Marc; Spring, Anett; Bluhm, Torsten; Heimann, Peter; Hennig, Christine; Kühner, Georg; Kroiss, Hugo; Krom, Johannes G.; Laqua, Heike; Maier, Josef; Riemann, Heike; Schacht, Jörg; Werner, Andreas; Zilker, Manfred

    2012-01-01

    Highlights: ► Introduction of models for an abstract description of fusion experiments. ► Component models support creating feasible experiment programs at planning time. ► Component models contain knowledge about physical and technical constraints. ► Generated views on models allow crucial information to be presented. - Abstract: The superconducting stellarator Wendelstein 7-X (W7-X) is a fusion device capable of steady-state operation. Furthermore, W7-X is a very complex technical system. To cope with these requirements, a modular and strongly hierarchical component-based control and data acquisition system has been designed. The behavior of W7-X is characterized by thousands of technical parameters of the participating components. The intended sequential change of those parameters during an experiment is defined in an experiment program. Planning such an experiment program is a crucial and complex task. To reduce the complexity, an abstract, more physics-oriented high-level layer was introduced earlier. The so-called high-level (physics) parameters are used to encapsulate technical details. This contribution focuses on the extension of this layer to a high-level component model, which completely describes the behavior of a component for a certain period of time. It allows not only simple value ranges to be defined but also complex dependencies between physics parameters: dependencies within components, dependencies between components or temporal dependencies. Component models can now be analyzed to generate various views of an experiment. A first implementation of such an analysis process is already finished: a graphical preview of a planned discharge can be generated from a chronological sequence of component models. This allows physicists to survey complex planned experiment programs at a glance.

  15. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    Science.gov (United States)

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling the architectures of such complex dynamic systems has always been essential for the development and application of techniques and tools that support the design and deployment of new components, as well as for the analysis, verification, simulation and testing needed to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), extends the refinement calculus for component and object systems (rCOS) modelling method and is implemented using the Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring by design the scalability, interoperability and correctness of component cooperation.

  16. Turbulence and Self-Organization Modeling Astrophysical Objects

    CERN Document Server

    Marov, Mikhail Ya

    2013-01-01

    This book focuses on the development of continuum models of natural turbulent media. It provides a theoretical approach to the solutions of different problems related to the formation, structure and evolution of astrophysical and geophysical objects. A stochastic modeling approach is used in the mathematical treatment of these problems, which reflects self-organization processes in open dissipative systems. The authors also consider examples of ordering for various objects in space throughout their evolutionary processes. This volume is aimed at graduate students and researchers in the fields of mechanics, astrophysics, geophysics, planetary and space science.

  17. Constructing Multidatabase Collections Using Extended ODMG Object Model

    Directory of Open Access Journals (Sweden)

    Adrian Skehill Mark Roantree

    1999-11-01

    Full Text Available Collections are an important feature in database systems. They provide us with the ability to group objects of interest together and then to manipulate them in the required fashion. The OASIS project is focused on the construction of a multidatabase prototype which uses the ODMG model and a canonical model. As part of this work we have extended the base model to provide a more powerful collection mechanism and to permit the construction of a federated collection, a collection of heterogeneous objects taken from distributed data sources

  18. BWR Refill-Reflood Program, Task 4.7 - model development: TRAC-BWR component models

    International Nuclear Information System (INIS)

    Cheung, Y.K.; Parameswaran, V.; Shaug, J.C.

    1983-09-01

    TRAC (Transient Reactor Analysis Code) is a computer code for best-estimate analysis of the thermal-hydraulic conditions in a reactor system. This report describes the development and assessment of the BWR component models, developed under the Refill/Reflood Program, that are necessary to structure a BWR version of TRAC. These component models are the jet pump, steam separator, steam dryer, two-phase level tracking model, and upper-plenum mixing model; they have been implemented into TRAC-B02. In addition, a single-channel option has been developed for individual fuel-channel analysis following a system-response calculation

  19. Detailed finite element method modeling of evaporating multi-component droplets

    Energy Technology Data Exchange (ETDEWEB)

    Diddens, Christian, E-mail: C.Diddens@tue.nl

    2017-07-01

    The evaporation of sessile multi-component droplets is modeled with an axisymmetric finite element method. The model comprises the coupled processes of mixture evaporation, multi-component flow with composition-dependent fluid properties, and thermal effects. Based on representative examples of water–glycerol and water–ethanol droplets, regular and chaotic examples of solutal Marangoni flows are discussed. Furthermore, the relevance of the substrate thickness for the evaporative cooling of volatile binary mixture droplets is pointed out. It is shown how the evaporation of the more volatile component can drastically decrease the interface temperature, so that ambient vapor of the less volatile component condenses on the droplet. Finally, results of this model are compared with corresponding results of a lubrication theory model, showing that the application of lubrication theory can cause considerable errors even for moderate contact angles of 40°.

  20. Latent semantics as cognitive components

    DEFF Research Database (Denmark)

    Petersen, Michael Kai; Mørup, Morten; Hansen, Lars Kai

    2010-01-01

    Cognitive component analysis, defined as unsupervised learning of features resembling human comprehension, suggests that the sensory structures we perceive might often be modeled by reducing dimensionality and treating objects in space and time as linear mixtures incorporating sparsity... Since emotional responses can be encoded in words, we propose a simplified cognitive approach to model how we perceive media. Representing song lyrics in a vector space of reduced dimensionality using LSA, we combine bottom-up defined term distances with affective adjectives that top-down constrain the latent semantics, which we suggest might function as cognitive components for perceiving the underlying structure in lyrics...
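
    A minimal LSA sketch in the spirit of the abstract, with a toy corpus and arbitrary word choices: a TF-IDF term-document matrix over lyric fragments is reduced with truncated SVD, and term-term similarity in the latent space is then read off.

        # Build a term-document matrix over toy lyric fragments, reduce it with truncated
        # SVD (LSA), and measure how close ordinary terms sit to an affective adjective
        # in the latent space.  The corpus and the chosen words are illustrative only.
        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD

        lyrics = [
            "tears fall in the lonely rain",
            "dancing all night happy and free",
            "sad songs and empty rooms",
            "sunshine smile and summer joy",
        ]
        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(lyrics)                # documents x terms
        svd = TruncatedSVD(n_components=2, random_state=0)
        svd.fit(X)
        term_vectors = svd.components_.T                    # terms x latent components

        vocab = vectorizer.vocabulary_
        def similarity(w1, w2):
            a, b = term_vectors[vocab[w1]], term_vectors[vocab[w2]]
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        print("sad ~ tears:", round(similarity("sad", "tears"), 2))
        print("sad ~ sunshine:", round(similarity("sad", "sunshine"), 2))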

  1. Uncertain and multi-objective programming models for crop planting structure optimization

    Directory of Open Access Journals (Sweden)

    Mo LI,Ping GUO,Liudong ZHANG,Chenglong ZHANG

    2016-03-01

    Full Text Available Crop planting structure optimization is a significant way to increase agricultural economic benefits and improve agricultural water management. The complexities of fluctuating stream conditions, varying economic profits, and uncertainties and errors in estimated modeling parameters, as well as the interplay among economic, social, natural-resource and environmental aspects, make it necessary to develop crop planting structure optimization models that consider uncertainty and multiple objectives. In this study, three single-objective programming models under uncertainty for crop planting structure optimization were developed: an interval linear programming model, an inexact fuzzy chance-constrained programming (IFCCP) model and an inexact fuzzy linear programming (IFLP) model. Each of the three models takes grayness into account. Moreover, the IFCCP model considers fuzzy uncertainty of parameters/variables and stochastic characteristics of constraints, while the IFLP model takes into account the fuzzy uncertainty of both constraints and objective functions. To support sustainable crop planting structure planning, a fuzzy-optimization-theory-based fuzzy linear multi-objective programming model was developed, capable of reflecting both uncertainties and multiple objectives. In addition, a multi-objective fractional programming model for crop structure optimization was developed to express the multiple objectives quantitatively in one optimization model, with the numerator representing maximum economic benefits and the denominator representing minimum crop planting area allocation. These models better reflect actual situations, considering the uncertainties and multiple objectives of crop planting structure optimization systems. The five models developed were then applied to a real case study in Minqin County, north-west China. The advantages, the applicable conditions and the solution methods

  2. A Dynamic Object Behavior Model and Implementation Based on Computational Reflection

    Institute of Scientific and Technical Information of China (English)

    HE Cheng-wan; HE Fei; HE Ke-qing

    2005-01-01

    A dynamic object behavior model based on computational reflection is proposed. The model consists of a function level and a meta level: the meta objects in the meta level manage the base objects and behaviors in the function level, including the dynamic binding and unbinding of base objects and behaviors. We implement this model with the RoleJava language, our own linguistic extension of the Java language. Meta objects are generated automatically at compile time, which makes the reflection mechanism transparent to programmers. Finally, an example applying this model to a banking system is presented.
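
    The sketch below is a Python analogue of the two-level idea (the paper itself works in RoleJava, a Java extension): a meta object manages a base object and binds or unbinds named behaviours at run time, keeping the reflection machinery out of the base class.

        # A meta-level object manages one base-level object and attaches or detaches
        # behaviours dynamically; the base class itself knows nothing about reflection.
        import types

        class Account:                      # base-level object
            def __init__(self, balance=0.0):
                self.balance = balance

        class MetaObject:                   # meta-level object managing one base object
            def __init__(self, base):
                self.base = base
            def bind(self, name, func):
                setattr(self.base, name, types.MethodType(func, self.base))
            def unbind(self, name):
                if name in self.base.__dict__:
                    delattr(self.base, name)

        def deposit(self, amount):          # behaviour defined outside the base class
            self.balance += amount
            return self.balance

        acct = Account(100.0)
        meta = MetaObject(acct)
        meta.bind("deposit", deposit)       # dynamic binding of a behaviour
        print(acct.deposit(50.0))           # -> 150.0
        meta.unbind("deposit")              # behaviour removed again
        print(hasattr(acct, "deposit"))     # -> False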

  3. Oscillating shells: A model for a variable cosmic object

    OpenAIRE

    Nunez, Dario

    1997-01-01

    A model for a possible variable cosmic object is presented. The model consists of a massive shell surrounding a compact object. The gravitational and self-gravitational forces tend to collapse the shell, but the internal tangential stresses oppose the collapse. The combined action of the two types of forces is studied and several cases are presented. In particular, we investigate the spherically symmetric case in which the shell oscillates radially around a central compact object.

  4. A bi-objective model for optimizing replacement time of age and block policies with consideration of spare parts’ availability

    Science.gov (United States)

    Alsyouf, Imad

    2018-05-01

    Reliability and availability of critical systems play an important role in achieving the stated objectives of engineering assets. The preventive replacement time affects the reliability of the components and thus the number of system failures encountered and the associated downtime expenses. On the other hand, the spare parts inventory level is a very critical factor that affects the availability of the system. Usually, the decision maker has many conflicting objectives that should be considered simultaneously for the selection of the optimal maintenance policy. The purpose of this research was to develop a bi-objective model to determine the preventive replacement time for three maintenance policies (age, block good as new, block bad as old) with consideration of spare parts' availability. A weighted comprehensive criterion method with two objectives, cost and availability, was used. The model was tested with a typical numerical example. The results of the model demonstrated its effectiveness in enabling the decision maker to select the optimal maintenance policy under different scenarios, taking into account preferences with respect to conflicting objectives such as cost and availability.

  5. Probabilistic object and viewpoint models for active object recognition

    CSIR Research Space (South Africa)

    Govender, N

    2013-09-01

    Full Text Available For our experiments, we use the active recognition dataset introduced by [12]. The training data consists of everyday objects such as cereal boxes, ornaments, spice bottles, etc. Images were captured every 20 degrees. [Extraction residue removed: a fragment of the occlusion-model notation and Table I, a confusion matrix for the binary model over the object classes (cereal, battery, curry box, elephant, handbag, MrMin, salad bottle, spice bottle, spray can); the table values are not recoverable.]

  6. School lunch program in India: background, objectives and components.

    Science.gov (United States)

    Chutani, Alka Mohan

    2012-01-01

    The School Lunch Program (SLP) in India is the largest food and nutrition assistance program, feeding millions of children every day. This paper reviews the background of the SLP in India, earlier known as the National Program for Nutrition Support to Primary Education (NP-NSPE) and later as the Mid Day Meal Scheme, including historical trends, objectives and components/characteristics of the scheme. It also addresses steps being taken to meet the challenges faced by the program's administrators in monitoring and evaluation. The program was initially started in 1960 in a few states to overcome the complex problems of malnutrition and illiteracy. The Mid Day Meal Scheme is the popular name for the school meal program. In 2001, following Supreme Court orders, it became mandatory to provide a mid day meal to all primary school children, and later to upper primary school children, studying in government and government-aided schools. The scheme benefitted 140 million children in government-assisted schools across India in 2008, strengthening child nutrition and literacy. In a country with a large proportion of illiterate people and a high percentage of children unable to read or write, governmental and non-governmental organizations have reported that the Mid Day Meal Scheme has consistently increased school enrollment in India. One of the main goals of the school lunch program is to promote the health and well-being of the nation's children.

  7. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target of this work is to validate the component functions of model output between physical observations and the computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of the model output, and these conditional expectations reflect partial information of the model output. Therefore, the model validation of conditional expectations quantifies the discrepancy between the partial information of the computational model output and that of the observations. A calibration of the conditional expectations is then carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations helps decrease the difference in the model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology for both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship between conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • The validation and calibration process is applied at single and multiple sites. • The validation and calibration process shows superiority over existing methods
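
    A toy sketch of the idea, with an invented model and synthetic "observations": the first-order component function E[Y|X1] is estimated for the model and for the observations by binning Monte Carlo samples, and their discrepancy is scored with an area metric (area between the two empirical CDFs).

        # Estimate E[Y | X1] for a toy computational model and for synthetic observations
        # by binning Monte Carlo samples, then compute an area metric between the
        # empirical CDFs of the two sets of conditional expectations.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 20000
        x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)

        y_model = 2.0 * x1 + x2 ** 2                                   # computational model (toy)
        y_obs = 2.3 * x1 + x2 ** 2 + 0.05 * rng.standard_normal(n)     # synthetic "observations"

        edges = np.linspace(0, 1, 21)
        idx = np.digitize(x1, edges) - 1                               # bin index 0..19
        cond_model = np.array([y_model[idx == k].mean() for k in range(20)])
        cond_obs = np.array([y_obs[idx == k].mean() for k in range(20)])

        # Area metric: integrate the absolute difference between the two empirical CDFs
        grid = np.linspace(min(cond_model.min(), cond_obs.min()),
                           max(cond_model.max(), cond_obs.max()), 400)
        cdf_model = np.searchsorted(np.sort(cond_model), grid, side="right") / 20.0
        cdf_obs = np.searchsorted(np.sort(cond_obs), grid, side="right") / 20.0
        area = np.sum(np.abs(cdf_model - cdf_obs)[:-1] * np.diff(grid))
        print("area metric for the E[Y|X1] component function:", round(float(area), 3))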

  8. Towards C++ object libraries for accelerator physics

    International Nuclear Information System (INIS)

    Michelotti, L.

    1992-01-01

    This paper concerns the creation of libraries of reusable objects in the C++ language for accelerator design and analysis. The C++ language possesses features which lend themselves to writing portable scientific software. The two libraries of C++ classes (objects) which have been under development are (1) MXYZPTLK, which implements automatic differentiation, and (2) BEAMLINE, which provides objects for modeling beam line and accelerator components. A description of the principal classes in these two libraries is presented

  9. Time-dependent inhomogeneous jet models for BL Lac objects

    Science.gov (United States)

    Marlowe, A. T.; Urry, C. M.; George, I. M.

    1992-05-01

    Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.

  10. A PDP model of the simultaneous perception of multiple objects

    Science.gov (United States)

    Henderson, Cynthia M.; McClelland, James L.

    2011-06-01

    Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.

  11. Model-based object classification using unification grammars and abstract representations

    Science.gov (United States)

    Liburdy, Kathleen A.; Schalkoff, Robert J.

    1993-04-01

    The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.

  12. Visual object recognition and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian

    This thesis is based on seven published papers. The majority of the papers address two topics in visual object recognition: (i) category-effects at pre-semantic stages, and (ii) the integration of visual elements into elaborate shape descriptions corresponding to whole objects or large object parts (shape configuration). In the early writings these two topics were examined more or less independently. In later works, findings concerning category-effects and shape configuration merge into an integrated model, termed RACE, advanced to explain category-effects arising at pre-semantic stages in visual ... in visual long-term memory. In the thesis it is described how this simple model can account for a wide range of findings on category-specificity in both patients with brain damage and normal subjects. Finally, two hypotheses regarding the neural substrates of the model's components - and how activation...

  13. Multi-objective possibilistic model for portfolio selection with transaction cost

    Science.gov (United States)

    Jana, P.; Roy, T. K.; Mazumder, S. K.

    2009-06-01

    In this paper, we introduce the possibilistic mean value and variance of continuous possibility distributions, rather than probability distributions. We propose a multi-objective portfolio-based model and add an entropy objective function to generate a well-diversified asset portfolio within the optimal asset allocation. To quantify potential return and risk, portfolio liquidity is taken into account, and a multi-objective non-linear programming model for portfolio rebalancing with transaction costs is proposed. The models are illustrated with numerical examples.
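
    The possibilistic mean and variance that such models build on can be computed from the gamma-level sets of a fuzzy return, using the common definitions E(A) = int_0^1 gamma*(a1(gamma)+a2(gamma)) dgamma and Var(A) = 1/2 * int_0^1 gamma*(a2(gamma)-a1(gamma))^2 dgamma. The sketch below assumes these definitions and an illustrative trapezoidal fuzzy return, not the paper's data.

```python
import numpy as np

def level_set(a1, a2, a3, a4, gamma):
    """gamma-cut [lo, hi] of a trapezoidal fuzzy number with support [a1, a4] and core [a2, a3]."""
    return a1 + gamma * (a2 - a1), a4 - gamma * (a4 - a3)

def possibilistic_mean_var(a1, a2, a3, a4, n=10_000):
    gammas = np.linspace(0.0, 1.0, n)
    lo, hi = level_set(a1, a2, a3, a4, gammas)
    mean = np.trapz(gammas * (lo + hi), gammas)             # E(A)
    var = 0.5 * np.trapz(gammas * (hi - lo) ** 2, gammas)   # Var(A)
    return mean, var

# Illustrative fuzzy return of one asset: support [-0.02, 0.10], core [0.03, 0.05].
m, v = possibilistic_mean_var(-0.02, 0.03, 0.05, 0.10)
print(f"possibilistic mean {m:.4f}, variance {v:.6f}")
```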

  14. Object-Oriented Database for Managing Building Modeling Components and Metadata: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Long, N.; Fleming, K.; Brackney, L.

    2011-12-01

    Building simulation enables users to explore and evaluate multiple building designs. When tools for optimization, parametrics, and uncertainty analysis are combined with analysis engines, the sheer number of discrete simulation datasets makes it difficult to keep track of the inputs. The integrity of the input data is critical to designers, engineers, and researchers for code compliance, validation, and building commissioning long after the simulations are finished. This paper discusses an application that stores inputs needed for building energy modeling in a searchable, indexable, flexible, and scalable database to help address the problem of managing simulation input data.

  15. A multi-objective model for locating distribution centers in a supply chain network considering risk and inventory decisions

    Directory of Open Access Journals (Sweden)

    Sara Gharegozloo Hamedani

    2013-04-01

    Full Text Available This paper presents a multi-objective location problem in a three-level supply chain network under an uncertain environment, considering inventory decisions. The proposed model considers uncertainty for different parameters including procurement and transportation costs, supply, demand and the capacity of various facilities. It is a robust optimization model, which concurrently specifies the locations of distribution centers to be opened, the inventory control parameters (r, Q), and the allocation of supply chain components. The resulting mixed-integer nonlinear program minimizes the expected total cost of such a supply chain network, comprising location, procurement, transportation, holding, ordering, and shortage costs. The model also minimizes the variability of the total cost of the relief chain and minimizes the financial risk, i.e. the probability of not meeting a certain budget. We use the ε-constraint method, a multi-objective technique in which trade-off information is given implicitly, to solve the problem, and we examine the performance of the proposed approach on a couple of numerical instances.
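
    The ε-constraint idea itself can be shown on a toy bi-objective linear program (not the paper's supply-chain model): one objective is minimized while the other is bounded by ε, and sweeping ε traces an approximation of the Pareto front. The sketch below uses SciPy's linprog with purely illustrative coefficients.

```python
import numpy as np
from scipy.optimize import linprog

# Toy bi-objective problem:
#   minimize cost  f1(x) = 4*x1 + 3*x2
#   minimize risk  f2(x) = 1*x1 + 2*x2
#   subject to     x1 + x2 >= 10 (demand),  0 <= x1, x2 <= 10
def solve_for_eps(eps):
    c = [4.0, 3.0]                # primary objective f1
    A_ub = [[-1.0, -1.0],         # -(x1 + x2) <= -10  (demand)
            [1.0, 2.0]]           #  f2(x)      <= eps (epsilon-constraint)
    b_ub = [-10.0, eps]
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10), (0, 10)], method="highs")

for eps in np.linspace(10, 20, 6):            # sweep the bound on the second objective
    res = solve_for_eps(eps)
    if res.success:
        x1, x2 = res.x
        print(f"eps={eps:5.1f}  cost={res.fun:6.2f}  risk={x1 + 2 * x2:6.2f}  x={res.x.round(2)}")
```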

  16. Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.

    Science.gov (United States)

    Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo

    Unsupervised object discovery and localization aims to discover dominant object classes and localize all object instances in a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge about the given image collection is exploited to facilitate object discovery. Moreover, the topic models used in those methods suffer from the topic coherence issue: some inferred topics do not have a clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in the form of so-called must-links is extracted from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy of visual words, the must-link is re-defined so that one must-link constrains only one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes, so the must-links in our approach are semantic-specific, which allows discriminative prior knowledge from Web images to be exploited more efficiently. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms unsupervised methods for object discovery and localization. In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.

  17. Conceptual Modeling of Events as Information Objects and Change Agents

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    as a totality of an information object and a change agent. When an event is modeled as an information object it is comparable to an entity that exists only at a specific point in time. It has attributes and can be used for querying and specification of constraints. When an event is modeled as a change agent...... it is comparable to an executable transaction schema. Finally, we briefly compare our approach to object-oriented approaches based on encapsulated objects....

  18. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    2016-01-05

    Jan 5, 2016 ... We propose a two-component dark matter (DM) model, each component of which is a real singlet scalar, to explain results from both direct and indirect detection experiments. We put the constraints on the model parameters from theoretical bounds, PLANCK relic density results and direct DM experiments.

  19. Modeling business objects with XML schema

    CERN Document Server

    Daum, Berthold

    2003-01-01

    XML Schema is the new language standard from the W3C and the new foundation for defining data in Web-based systems. There is a wealth of information available about Schemas but very little understanding of how to use this highly formal specification for creating documents. Grasping the power of Schemas means going back to the basics of documents themselves, and the semantic rules, or grammars, that define them. Written for schema designers, system architects, programmers, and document authors, Modeling Business Objects with XML Schema guides you through understanding Schemas from the basic concepts, type systems, type derivation, inheritance, namespace handling, through advanced concepts in schema design.
    * Reviews basic XML syntax and the Schema recommendation in detail.
    * Builds a knowledge base model step by step (about jazz music) that is used throughout the book.
    * Discusses Schema design in large environments, best practice design patterns, and Schema's relation to object-oriented concepts.

  20. The Effect of Multidimensional Motivation Interventions on Cognitive and Behavioral Components of Motivation: Testing Martin's Model

    OpenAIRE

    Fatemeh PooraghaRoodbarde; Siavash Talepasand; Issac Rahimian Boogar

    2017-01-01

    Objective: The present study aimed at examining the effect of multidimensional motivation interventions based on Martin's model on cognitive and behavioral components of motivation.Methods: The research design was prospective with pretest, posttest, and follow-up, and 2 experimental groups. In this study, 90 students (45 participants in the experimental group and 45 in the control group) constituted the sample of the study, and they were selected by available sampling method. Motivation inter...

  1. Fuzzy object models for newborn brain MR image segmentation

    Science.gov (United States)

    Kobashi, Syoji; Udupa, Jayaram K.

    2013-03-01

    Newborn brain MR image segmentation is a challenging problem because of the variety of sizes, shapes and MR signals, although it is fundamental for quantitative radiology of brain MR images. Because of the large differences between the adult brain and the newborn brain, it is difficult to directly apply conventional methods to the newborn brain. Inspired by the original fuzzy object model introduced by Udupa et al. at SPIE Medical Imaging 2011, called the fuzzy shape object model (FSOM) here, this paper introduces the fuzzy intensity object model (FIOM) and proposes a new image segmentation method which combines the FSOM and FIOM with fuzzy connected (FC) image segmentation. The fuzzy object models are built from training datasets in which the cerebral parenchyma is delineated by experts. After registering the FSOM with the image under evaluation, the proposed method roughly recognizes the cerebral parenchyma region based on prior knowledge of location, shape, and MR signal given by the registered FSOM and FIOM. Then, FC image segmentation delineates the cerebral parenchyma using the fuzzy object models. The proposed method has been evaluated on 9 newborn brain MR images using the leave-one-out strategy. The revised age was between -1 and 2 months. Quantitative evaluation using the false positive volume fraction (FPVF) and false negative volume fraction (FNVF) has been conducted. Using the evaluation data, an FPVF of 0.75% and an FNVF of 3.75% were achieved. More data collection and testing are underway.
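
    FPVF and FNVF can be computed from binary masks as the mis-labelled volume normalized by the expert-delineated reference volume; the sketch below assumes this common definition and a synthetic volume, so it illustrates the metrics rather than reproducing the paper's evaluation.

```python
import numpy as np

def volume_fractions(segmentation, reference):
    """False positive and false negative volume fractions of a binary segmentation,
    expressed as fractions of the reference (expert-delineated) volume."""
    seg = segmentation.astype(bool)
    ref = reference.astype(bool)
    ref_volume = ref.sum()
    fpvf = np.logical_and(seg, ~ref).sum() / ref_volume   # labelled but not in the reference
    fnvf = np.logical_and(~seg, ref).sum() / ref_volume   # in the reference but missed
    return fpvf, fnvf

# Tiny synthetic example: a 10x10x10 volume with a 6x6x6 reference cube.
ref = np.zeros((10, 10, 10), dtype=bool)
ref[2:8, 2:8, 2:8] = True
seg = np.zeros_like(ref)
seg[3:9, 2:8, 2:8] = True                                 # segmentation shifted by one voxel
print("FPVF=%.3f  FNVF=%.3f" % volume_fractions(seg, ref))
```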

  2. New approaches to the modelling of multi-component fuel droplet heating and evaporation

    KAUST Repository

    Sazhin, Sergei S

    2015-02-25

    The previously suggested quasi-discrete model for heating and evaporation of complex multi-component hydrocarbon fuel droplets is described. The dependence of density, viscosity, heat capacity and thermal conductivity of liquid components on carbon numbers n and temperatures is taken into account. The effects of temperature gradient and quasi-component diffusion inside droplets are taken into account. The analysis is based on the Effective Thermal Conductivity/Effective Diffusivity (ETC/ED) model. This model is applied to the analysis of Diesel and gasoline fuel droplet heating and evaporation. The components with relatively close n are replaced by quasi-components with properties calculated as average properties of the a priori defined groups of actual components. Thus the analysis of the heating and evaporation of droplets consisting of many components is replaced with the analysis of the heating and evaporation of droplets consisting of relatively few quasi-components. It is demonstrated that for Diesel and gasoline fuel droplets the predictions of the model based on five quasi-components are almost indistinguishable from the predictions of the model based on twenty quasi-components for Diesel fuel droplets and are very close to the predictions of the model based on thirteen quasi-components for gasoline fuel droplets. It is recommended that in the cases of both Diesel and gasoline spray combustion modelling, the analysis of droplet heating and evaporation is based on as little as five quasi-components.

  3. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    Science.gov (United States)

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can easily become trapped in local optima. In addition, they are usually time-consuming when analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over the state-of-the-art methods, in both efficiency and effectiveness.

  4. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects......) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA......) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  5. A kinetic model for impact/sliding wear of pressurized water reactor internal components: Application to rod cluster control assemblies

    International Nuclear Information System (INIS)

    Zbinden, M.

    1996-01-01

    Certain internal components of Pressurized Water Reactors are damaged by wear when subjected to flow-induced vibration. To enable predictive calculation of such wear, one must have a model that reliably accounts for real damage. The modelling of wear is the final link in a chain of numerical calculations which begins with the determination of the hydraulic excitations induced by the flow, proceeds to the calculation of the dynamic response of the structure, and ends with an estimate of the volumetric wear and the depth of wear scars. A new concept of an industrial wear model adapted to nuclear plant components is proposed. Its originality is that it is supported, on the one hand, by experimental results obtained from wear machines over relatively short operating times and, on the other hand, by operating feedback on the real wear kinetics of reactor components. The proposed model is illustrated by an example corresponding to a specific real situation. The determination of coefficients covering the whole set of configurations and the validation of the model in these configurations have been the object of the most recent work.

  6. A Convergent Participation Model for Evaluation of Learning Objects

    Directory of Open Access Journals (Sweden)

    John Nesbit

    2002-10-01

    Full Text Available The properties that distinguish learning objects from other forms of educational software - global accessibility, metadata standards, finer granularity and reusability - have implications for evaluation. This article proposes a convergent participation model for learning object evaluation in which representatives from stakeholder groups (e.g., students, instructors, subject matter experts, instructional designers, and media developers) converge toward more similar descriptions and ratings through a two-stage process supported by online collaboration tools. The article reviews evaluation models that have been applied to educational software and media, considers models for gathering and meta-evaluating individual user reviews that have recently emerged on the Web, and describes the peer review model adopted for the MERLOT repository. The convergent participation model is assessed in relation to other models and with respect to its support for eight goals of learning object evaluation: (1) aid for searching and selecting, (2) guidance for use, (3) formative evaluation, (4) influence on design practices, (5) professional development and student learning, (6) community building, (7) social recognition, and (8) economic exchange.

  7. Option valuation with the simplified component GARCH model

    DEFF Research Database (Denmark)

    Dziubinski, Matt P.

    We introduce the Simplified Component GARCH (SC-GARCH) option pricing model, show and discuss sufficient conditions for non-negativity of the conditional variance, apply it to low-frequency and high-frequency financial data, and consider the option valuation, comparing the model performance...

  8. POMP - Pervasive Object Model Project

    DEFF Research Database (Denmark)

    Schougaard, Kari Rye; Schultz, Ulrik Pagh

    The focus on mobile devices is continuously increasing, and improved device connectivity enables the construction of pervasive computing systems composed of heterogeneous collections of devices. Users who employ different devices throughout their daily activities naturally expect their applications...... computing environment. This system, named POM (Pervasive Object Model), supports applications split into coarse-grained, strongly mobile units that communicate using method invocations through proxies. We are currently investigating efficient execution of mobile applications, scalability to suit...

  9. Behavioral models as theoretical frames to analyze the business objective

    Directory of Open Access Journals (Sweden)

    Hernán Alonso Bafico

    2015-12-01

    Full Text Available This paper examines Pfeffer's models of behavior and connects each of them with attributes of the definition of the firm's objective, assumed to be the maximization of the sustainable, long-term value of the residual claims. Each of the five models of behavior (rational, social, moral, retrospective and cognitive) contributes to the decision-making and goal-setting processes with its particular and complementary elements, from those assuming complete rationality and frictionless markets to the models emphasizing the role of ethical positions and the presence of perceptive and cognitive mechanisms. The analysis highlights the main contributions of critical theories and models of behavior, underlining their focus on non-traditional variables, regarded as critical inputs for goal-setting processes and for designing alternative executive incentive schemes. The explicit consideration of those variables does not indicate the need for a new definition of the corporate objective. The maximization of the long-term value of the shareholders' claims still defines the relevant objective function of the firm, remaining the main yardstick of corporate performance. Behavioral models are recognized as important tools to help managers direct their attention to long-term strategies. In the last part, we comment on the relationship between the objective function and behavioral models from the practitioners' perspective. Key words: Firm Objectives, Behavioral Models, Value Maximization, Stakeholder Theory.

  10. [Requirements imposed on model objects in microevolutionary investigations].

    Science.gov (United States)

    Mina, M V

    2015-01-01

    Extrapolation of the results of investigations of a model object is justified only within the limits of a set of objects that have essential properties in common with the model object. Which properties are essential depends on the aim of a study. Similarity of objects that emerged in the process of their independent evolution does not prove similarity of the ways and mechanisms of their evolution. If the objects differ in their essential properties, then extrapolating the results of an investigation of one object to another is risky, because it may lead to wrong decisions and, moreover, to the loss of interest in alternative hypotheses. The positions formulated above are considered with reference to species flocks of fishes, large African Barbus in particular.

  11. Object Oriented Modelling and Dynamical Simulation

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1998-01-01

    This report, with appendix, describes the work done in a master project at DTU. The goal of the project was to develop a concept for the simulation of dynamical systems based on object-oriented methods. The result was a library of C++ classes, for use both when building component-based models and when...

  12. Scale modeling flow-induced vibrations of reactor components

    International Nuclear Information System (INIS)

    Mulcahy, T.M.

    1982-06-01

    Similitude relationships currently employed in the design of flow-induced vibration scale-model tests of nuclear reactor components are reviewed. Emphasis is given to understanding the origins of the similitude parameters as a basis for discussion of the inevitable distortions which occur in design verification testing of entire reactor systems and in feature testing of individual component designs for the existence of detrimental flow-induced vibration mechanisms. Distortions of similitude parameters made in current test practice are enumerated and selected example tests are described. Also, limitations in the use of specific distortions in model designs are evaluated based on the current understanding of flow-induced vibration mechanisms and structural response

  13. Models for integrated components coupled with their EM environment

    NARCIS (Netherlands)

    Ioan, D.; Schilders, W.H.A.; Ciuprina, G.; Meijs, van der N.P.; Schoenmaker, W.

    2008-01-01

    Abstract: Purpose – The main aim of this study is the modelling of the interaction of on-chip components with their electromagnetic environment. Design/methodology/approach – The integrated circuit is decomposed in passive and active components interconnected by means of terminals and connectors

  14. An intelligent dynamic simulation environment: An object-oriented approach

    International Nuclear Information System (INIS)

    Robinson, J.T.; Kisner, R.A.

    1988-01-01

    This paper presents a prototype simulation environment for nuclear power plants which illustrates the application of object-oriented programming to process simulation. Systems are modeled using this technique as a collection of objects which communicate via message passing. The environment allows users to build simulation models by selecting iconic representations of plant components from a menu and connecting them with the aid of a mouse. Models can be modified graphically at any time, even as the simulation is running, and the results observed immediately via real-time graphics. This prototype illustrates the use of object-oriented programming to create a highly interactive and automated simulation environment. 9 refs., 4 figs
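
    A minimal sketch of the idea of plant components as objects communicating via message passing is given below; the Pump and Tank classes, their connection API and the time-stepping loop are hypothetical stand-ins for the iconic components described in the paper.

```python
class Component:
    """Base class: components hold an inbox and exchange messages each time step."""
    def __init__(self, name):
        self.name, self.inbox, self.outputs = name, [], []

    def connect(self, other):
        self.outputs.append(other)

    def send(self, payload):
        for target in self.outputs:
            target.inbox.append((self.name, payload))

    def step(self, dt):
        raise NotImplementedError

class Pump(Component):
    def __init__(self, name, flow_rate):
        super().__init__(name)
        self.flow_rate = flow_rate               # kg/s delivered downstream

    def step(self, dt):
        self.send({"mass": self.flow_rate * dt})

class Tank(Component):
    def __init__(self, name, level=0.0):
        super().__init__(name)
        self.level = level                       # accumulated mass, kg

    def step(self, dt):
        for _, payload in self.inbox:
            self.level += payload["mass"]
        self.inbox.clear()

# Connect a pump to a tank, then advance the model in time while observing it.
pump, tank = Pump("P1", flow_rate=2.0), Tank("T1")
pump.connect(tank)
for _ in range(10):                              # ten steps of one second
    pump.step(1.0)
    tank.step(1.0)
print(tank.level)                                # 20.0 kg after 10 s
```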

  15. Pool scrubbing models for iodine components

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, K [Battelle Ingenieurtechnik GmbH, Eschborn (Germany)

    1996-12-01

    Pool scrubbing is an important mechanism to retain radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes has been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, calculation of equilibrium concentrations and numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs.

  16. Pool scrubbing models for iodine components

    International Nuclear Information System (INIS)

    Fischer, K.

    1996-01-01

    Pool scrubbing is an important mechanism to retain radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes has been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, calculation of equilibrium concentrations and numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs

  17. Modeling fabrication of nuclear components: An integrative approach

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.

    1996-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  18. Thermochemical modelling of multi-component systems

    International Nuclear Information System (INIS)

    Sundman, B.; Gueneau, C.

    2015-01-01

    Computational thermodynamics, also known as the Calphad method, is a standard tool in industry for developing materials and improving processes, and there is intense scientific development of new models and databases. The calculations are based on thermodynamic models of the Gibbs energy for each phase as a function of temperature, pressure and constitution. Model parameters are stored in databases that are developed in an international scientific collaboration. In this way, consistent and reliable data for many properties like heat capacity, chemical potentials, solubilities etc. can be obtained for multi-component systems. A brief introduction to this technique is given here and references to more extensive documentation are provided. (authors)

  19. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  20. Effectiveness of meta-models for multi-objective optimization of centrifugal impeller

    Energy Technology Data Exchange (ETDEWEB)

    Bellary, Sayed Ahmed Imran; Samad, Abdus [Indian Institute of Technology Madras, Chennai (India); Husain, Afzal [Sultan Qaboos University, Al-Khoudh (Oman)

    2014-12-15

    The major issue of multiple fidelity based analysis and optimization of fluid machinery system depends upon the proper construction of low fidelity model or meta-model. A low fidelity model uses responses obtained from a high fidelity model, and the meta-model is then used to produce population of solutions required for evolutionary algorithm for multi-objective optimization. The Pareto-optimal front which shows functional relationships among the multiple objectives can produce erroneous results if the low fidelity models are not well-constructed. In the present research, response surface approximation and Kriging meta-models were evaluated for their effectiveness for the application in the turbomachinery design and optimization. A high fidelity model such as CFD technique along with the metamodels was used to obtain Pareto-optimal front via multi-objective genetic algorithm. A centrifugal impeller has been considered as case study to find relationship between two conflicting objectives, viz., hydraulic efficiency and head. Design variables from the impeller geometry have been chosen and the responses of the objective functions were evaluated through CFD analysis. The fidelity of each metamodel has been discussed in context of their predictions in entire design space in general and near optimal region in particular. Exploitation of the multiple meta-models enhances the quality of multi-objective optimization and provides the information pertaining to fidelity of optimization model. It was observed that the Kriging meta-model was better suited for this type of problem as it involved less approximation error in the Pareto-optimal front.

  1. Effectiveness of meta-models for multi-objective optimization of centrifugal impeller

    International Nuclear Information System (INIS)

    Bellary, Sayed Ahmed Imran; Samad, Abdus; Husain, Afzal

    2014-01-01

    The major issue of multiple fidelity based analysis and optimization of fluid machinery system depends upon the proper construction of low fidelity model or meta-model. A low fidelity model uses responses obtained from a high fidelity model, and the meta-model is then used to produce population of solutions required for evolutionary algorithm for multi-objective optimization. The Pareto-optimal front which shows functional relationships among the multiple objectives can produce erroneous results if the low fidelity models are not well-constructed. In the present research, response surface approximation and Kriging meta-models were evaluated for their effectiveness for the application in the turbomachinery design and optimization. A high fidelity model such as CFD technique along with the metamodels was used to obtain Pareto-optimal front via multi-objective genetic algorithm. A centrifugal impeller has been considered as case study to find relationship between two conflicting objectives, viz., hydraulic efficiency and head. Design variables from the impeller geometry have been chosen and the responses of the objective functions were evaluated through CFD analysis. The fidelity of each metamodel has been discussed in context of their predictions in entire design space in general and near optimal region in particular. Exploitation of the multiple meta-models enhances the quality of multi-objective optimization and provides the information pertaining to fidelity of optimization model. It was observed that the Kriging meta-model was better suited for this type of problem as it involved less approximation error in the Pareto-optimal front.
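
    A minimal sketch of a Kriging meta-model is given below, with scikit-learn's Gaussian process regressor standing in for the Kriging surrogate and a cheap analytic function standing in for the expensive CFD responses; the single design variable and the sample counts are illustrative, not those of the impeller study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def high_fidelity(x):
    """Stand-in for an expensive CFD response (e.g. head or hydraulic efficiency)."""
    return np.sin(3.0 * x) + 0.5 * x

# A handful of "CFD" samples over the design-variable range.
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = high_fidelity(X_train).ravel()

# Kriging meta-model: Gaussian process with an RBF (squared-exponential) kernel.
kriging = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
kriging.fit(X_train, y_train)

# Cheap surrogate evaluations, as would be fed to a multi-objective genetic algorithm.
X_query = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
y_pred, y_std = kriging.predict(X_query, return_std=True)
print("max surrogate error on the grid: %.4f"
      % np.max(np.abs(y_pred - high_fidelity(X_query).ravel())))
```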

  2. Seismic assessment and performance of nonstructural components affected by structural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Jieun; Althoff, Eric; Sezen, Halil; Denning, Richard; Aldemir, Tunc [Ohio State University, Columbus (United States)

    2017-03-15

    Seismic probabilistic risk assessment (SPRA) requires a large number of simulations to evaluate the seismic vulnerability of structural and nonstructural components in nuclear power plants. The effect of structural modeling and analysis assumptions on dynamic analysis of 3D and simplified 2D stick models of auxiliary buildings and the attached nonstructural components is investigated. Dynamic characteristics and seismic performance of building models are also evaluated, as well as the computational accuracy of the models. The presented results provide a better understanding of the dynamic behavior and seismic performance of auxiliary buildings. The results also help to quantify the impact of uncertainties associated with modeling and analysis of simplified numerical models of structural and nonstructural components subjected to seismic shaking on the predicted seismic failure probabilities of these systems.

  3. GSMNet: A Hierarchical Graph Model for Moving Objects in Networks

    Directory of Open Access Journals (Sweden)

    Hengcai Zhang

    2017-03-01

    Full Text Available Existing data models for moving objects in networks are often limited in flexibly controlling the granularity of the network representation and the cost of location updates, and do not encompass semantic information such as traffic states, traffic restrictions and social relationships. In this paper, we aim to fill the gap left by traditional network-constrained models and propose a hierarchical graph model called the Geo-Social-Moving model for moving objects in Networks (GSMNet) that adopts four graph structures, RouteGraph, SegmentGraph, ObjectGraph and MoveGraph, to represent the underlying networks, trajectories and semantic information in an integrated manner. A set of user-defined data types and corresponding operators is proposed to handle moving objects and answer a new class of queries supporting three kinds of conditions: spatial, temporal and semantic. Then, we develop a prototype system with the native graph database system Neo4j to implement the proposed GSMNet model. In the experiment, we conduct a performance evaluation using simulated trajectories generated from the BerlinMOD (Berlin Moving Objects Database) benchmark and compare with the mature MOD system Secondo. The results of 17 benchmark queries demonstrate that our proposed GSMNet model has strong potential to reduce time-consuming table join operations and shows remarkable advantages with regard to representing semantic information and controlling the cost of location updates.
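
    The four-layer idea can be sketched with an in-memory property graph; the sketch below uses networkx rather than Neo4j, and the node names, attributes and query are illustrative only.

```python
import networkx as nx

# Toy hierarchical graph in the spirit of the RouteGraph / SegmentGraph /
# ObjectGraph / MoveGraph layers; everything here is illustrative.
g = nx.MultiDiGraph()

# Network layers: a route composed of segments with a traffic state.
g.add_node("route:R1", layer="RouteGraph")
for seg in ("seg:S1", "seg:S2"):
    g.add_node(seg, layer="SegmentGraph", traffic_state="free-flow")
    g.add_edge("route:R1", seg, relation="CONTAINS")

# Object layer: moving objects and a social relationship between them.
g.add_node("obj:car42", layer="ObjectGraph")
g.add_node("obj:car43", layer="ObjectGraph")
g.add_edge("obj:car42", "obj:car43", relation="FRIEND_OF")

# Move layer: a trajectory sample binding an object to a segment at a time.
g.add_edge("obj:car42", "seg:S1", relation="MOVED_ON", t="08:00:00")

# A query mixing semantic and spatial conditions: friends of car42, and the
# free-flow segments it has moved on.
friends = [v for _, v, d in g.out_edges("obj:car42", data=True)
           if d["relation"] == "FRIEND_OF"]
segments = [v for _, v, d in g.out_edges("obj:car42", data=True)
            if d["relation"] == "MOVED_ON" and g.nodes[v]["traffic_state"] == "free-flow"]
print(friends, segments)
```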

  4. Shifting attention from objective risk factors to patients' self-assessed health resources: a clinical model for general practice.

    Science.gov (United States)

    Hollnagel, H; Malterud, K

    1995-12-01

    The study was designed to present and apply theoretical and empirical knowledge for the construction of a clinical model intended to shift the attention of the general practitioner from objective risk factors to self-assessed health resources in male and female patients. Review, discussion and analysis of selected theoretical models about personal health resources involving assessing existing theories according to their emphasis concerning self-assessed vs. doctor-assessed health resources, specific health resources vs. life and coping in general, abstract vs. clinically applicable theory, gender perspective explicitly included or not. Relevant theoretical models on health and coping (salutogenesis, coping and social support, control/demand, locus of control, health belief model, quality of life), and the perspective of the underprivileged Other (critical theory, feminist standpoint theory, the patient-centred clinical method) were presented and assessed. Components from Antonovsky's salutogenetic perspective and McWhinney's patient-centred clinical method, supported by gender perspectives, were integrated to a clinical model which is presented. General practitioners are recommended to shift their attention from objective risk factors to self-assessed health resources by means of the clinical model. The relevance and feasibility of the model should be explored in empirical research.

  5. Farm Planning by Fuzzy Multi Objective Programming Model

    Directory of Open Access Journals (Sweden)

    m Raei Jadidi

    2010-05-01

    Full Text Available In the current study, a Fuzzy Goal Programming (FGP) model considering a set of social and economic goals was applied to optimal land allocation in the Koshksaray district, Marand city, East Azarbaijan province, Iran. Farmer goals including total cultivable area, factors of production, production levels of various crops and total expected profit were treated fuzzily in establishing the model. The goals were considered in 16 scenarios in the form of single-objective, compound and priority structures. Results showed that cost minimization in the single-objective and compound scenarios is the best compared with current conditions. In the priority structure, scenario 10, with priorities of profit maximization, cost minimization, and satisfying production goals considering cost minimization and production goals, and scenario 13, with priorities of profit maximization, satisfying factor-of-production goals, cost minimization and fulfilling production goals, had the minimum Euclidean distance and satisfied the fuzzy objectives. Moreover, dry barley and irrigated and dry wheat had the maximum, and irrigated barley the minimum, cultivated area, respectively. According to the findings, by reallocating resources farmers can better achieve their goals and objectives.

  6. Polarization burst in the BL Lac object AO 0235 + 164

    Energy Technology Data Exchange (ETDEWEB)

    Impey, C D; Brand, P W.J.L. [Edinburgh Univ. (UK). Dept. of Astronomy; Tapia, S [Steward Observatory, Tucson, AZ (USA)

    1982-01-01

    Simultaneous infrared and optical polarimetry and photometry have been obtained for AO 0235 + 164 covering a five-night period. The object underwent a polarization burst during which the 2.2 μm polarization rose from 17.5 to 28.7 per cent and fell again to 14.9 per cent. At its peak the degree of optical polarization was 43.9 per cent, the highest linear polarization observed in a BL Lac object. The data show the degree of polarization to increase towards shorter wavelengths, and the effect is inconsistent with either dilution by a galactic component or simple one-component synchrotron models. The large changes in polarization are not accompanied by large changes in flux, a result which is difficult to explain using conventional models of these objects. Other implications of the luminosity, polarization and variability are discussed.

  7. Integrated Model of Balanced Score Card and Technology Component Measurement: A Strategic Perspective in Indonesia Biofuel Engineering Development

    Directory of Open Access Journals (Sweden)

    Sukardi Sukardi

    2010-08-01

    Full Text Available The development of biofuel as an eco-friendly energy alternative has a value-chain problem in aligning policies between the related parties. To identify this alignment, we perform a strategic mapping by building an integrated balanced scorecard, so that the strategic targets in the subsequent perspective layers can be developed more realistically. Structural Equation Modeling (SEM) was used to examine the validity of the horizontal connections and to show the strong relations between the strategic objectives, and the constructed components of the internal process perspective are measured by Technology Coefficient Contribution indexes.

  8. X-ray studies of BL Lacertae objects

    International Nuclear Information System (INIS)

    Madejski, G.M.

    1986-01-01

    This thesis presents spectral x-ray data for BL Lac objects observed by the IPC and MPC aboard the Einstein Observatory and interprets that data in the context of their overall radiation spectra using synchrotron and synchrotron self-Compton models. The objects considered are: OJ 287, PKS 0735 + 178, I Zw 186, PKS 0548-322, Mkn 180, BL Lacertae, PKS 2155-304, H 0414-009 and H 0323 + 022. X-ray spectra of BL Lac objects are well described by a power law model with a low energy cutoff due to absorption within our own Galaxy. The best fit values of the energy spectral index α in the IPC (0.2-4.0 keV) band range from 0.73 to 2.35, with a mean of 1.2 and an rms spread of 0.51. No single, universal index can fit the spectra of all objects. For all objects except PKS 0735 + 178, the x-ray spectrum is an extrapolation of the infrared/optical/UV spectrum; in PKS 0735 + 178, the x-ray spectrum lies significantly below such an extrapolation. The overall electromagnetic distribution in those objects is interpreted as arising from the synchrotron process in at least two spatial regions, with sizes of ∼10^18 cm for the radio component and ∼10^16 cm for the optical component. In objects where the x-ray spectrum lies on the extrapolation of the infrared-optical-ultraviolet spectrum, the x-ray emission is also interpreted to be due to the synchrotron process.

  9. Superluminal motion of extragalactic objects

    Energy Technology Data Exchange (ETDEWEB)

    Matveenko, L.I. (AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij)

    1983-07-01

    Extragalactic objects with active nuclei are reviewed. Experimental data are obtained with the method of very-long-baseline radio interferometry. The main peculiarities of the complex structure of Seyfert galaxies, quasars and BL Lacertae objects are considered: the distribution of radio brightness, spectra, variations of the radiation flux density and the distance between the components of sources. The observed superluminal velocities of component separation are explained by different causes: fast motion of the components, a considerably different Hubble constant or a non-cosmological nature of the red shift of the objects, the effect of echo reflection of radiation, gravitational lensing, systematic changes in the optical thickness of the object, synchrotron radiation of electrons in a dipole magnetic field, as well as various kinematic illusions connected with the finite time of signal propagation.

  10. Modeling the Impact of Space Suit Components and Anthropometry on the Center of Mass of a Seated Crewmember

    Science.gov (United States)

    Blackledge, Christopher; Margerum, Sarah; Ferrer, Mike; Morency, Richard; Rajulu, Sudhakar

    2010-01-01

    The Crew Impact Attenuation System (CIAS) is the energy-absorbing strut concept that dampens Orion Crew Exploration Vehicle (CEV) landing loads to levels sustainable by the crew. Significant COM variations across suited crew configurations would amplify the inertial effects of the pallet and potentially create unacceptable crew loading during launch and landing. The objective of this study was to obtain data needed for dynamic simulation models by quantifying the effects of posture, suit components, and the expected range of anthropometry on the COM of a seated individual. Several elements are required for the COM calculation of a suited human in a seated position: anthropometry, body segment mass, suit component mass, suit component location relative to the body, and joint angles defining the seated posture. Three-dimensional (3D) human body models, suit mass data, and vector calculus were utilized to compute the COM positions for 12 boundary manikins in two different seated postures. The analysis focused on two objectives: (1) quantify how much the wholebody COM varied from the smallest to largest subject and (2) quantify the effects of the suit components on the overall COM in each seat configuration. The location of the anterior-posterior COM varied across all boundary manikins by about 7 cm, and the vertical COM varied by approximately 9 to 10 cm. The mediolateral COM varied by 1.2 cm from the midline sagittal plane for both seat configurations. The suit components caused an anterior shift of the total COM by approximately 2 cm and a shift to the right along the mediolateral axis of 0.4 cm for both seat configurations. When the seat configuration was in the standard posture the suited vertical COM shifted inferiorly by as much as 1 cm, whereas in the CEV posture the vertical COM had no appreciable change. These general differences were due to the high proportion of suit mass located in the boots and lower legs and their corresponding distance from the body COM
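
    The underlying calculation is a mass-weighted average of segment and suit-component positions; the sketch below uses placeholder masses and coordinates (not boundary-manikin anthropometry or actual suit masses) purely to illustrate how adding suit components shifts the whole-body COM.

```python
import numpy as np

def center_of_mass(masses, positions):
    """Mass-weighted average of 3D positions (masses in kg, positions in metres)."""
    masses = np.asarray(masses, dtype=float)
    positions = np.asarray(positions, dtype=float)
    return (masses[:, None] * positions).sum(axis=0) / masses.sum()

# Placeholder body segments: (mass, [x, y, z] in a seat-fixed frame).
body_masses = [40.0, 20.0, 10.0]                       # torso+head, legs, arms
body_pos = [[0.00, 0.00, 0.45], [0.25, 0.00, 0.15], [0.05, 0.00, 0.40]]

# Placeholder suit components; boot/lower-leg mass sits far from the body COM.
suit_masses = [8.0, 6.0]                               # boots + lower-leg covers, torso assembly
suit_pos = [[0.45, 0.02, 0.05], [0.02, 0.00, 0.40]]

unsuited = center_of_mass(body_masses, body_pos)
suited = center_of_mass(body_masses + suit_masses, body_pos + suit_pos)
print("unsuited COM:", unsuited.round(3))
print("suited   COM:", suited.round(3), " shift:", (suited - unsuited).round(3))
```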

  11. A Component-Based Modeling and Validation Method for PLC Systems

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2014-05-01

    Full Text Available Programmable logic controllers (PLCs are complex embedded systems that are widely used in industry. This paper presents a component-based modeling and validation method for PLC systems using the behavior-interaction-priority (BIP framework. We designed a general system architecture and a component library for a type of device control system. The control software and hardware of the environment were all modeled as BIP components. System requirements were formalized as monitors. Simulation was carried out to validate the system model. A realistic example from industry of the gates control system was employed to illustrate our strategies. We found a couple of design errors during the simulation, which helped us to improve the dependability of the original systems. The results of experiment demonstrated the effectiveness of our approach.

  12. NPA4K development system using object-oriented methodology

    International Nuclear Information System (INIS)

    Jeong, Kwang Seong; Hahn, Do Hee

    2000-11-01

    NPA4K consists of module programs with several components for various functions. Software components have to be developed systematically according to compartment criteria and a design method. In this paper, an understanding of a typical Object-Oriented Methodology, UML (Unified Modeling Language), the procedure for NPA4K program development and the architecture for long-term development of NPA4K are introduced.

  13. NPA4K development system using object-oriented methodology

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwang Seong; Hahn, Do Hee

    2000-11-01

    NPA4K consists of module programs with several components for various functions. Software components have to be developed systematically according to compartment criteria and a design method. In this paper, an understanding of a typical Object-Oriented Methodology, UML (Unified Modeling Language), the procedure for NPA4K program development and the architecture for long-term development of NPA4K are introduced.

  14. Sound Synthesis of Objects Swinging through Air Using Physical Models

    Directory of Open Access Journals (Sweden)

    Rod Selfridge

    2017-11-01

    Full Text Available A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.
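
    A typical fluid-dynamics relation behind such models is the Aeolian tone frequency f = St * U / d, with a Strouhal number of roughly 0.2 for a cylinder; the sketch below assumes this relation and uses illustrative swing speeds and rod diameter, not the paper's synthesis equations.

```python
import numpy as np

def aeolian_frequency(speed, diameter, strouhal=0.2):
    """Fundamental Aeolian tone frequency in Hz for a cylinder of the given
    diameter (m) moving through air at the given speed (m/s): f = St * U / d."""
    return strouhal * speed / diameter

# Illustrative swing: tip speed rising and falling over half a second, 10 mm rod.
t = np.linspace(0.0, 0.5, 6)
tip_speed = 20.0 * np.sin(np.pi * t / 0.5)        # m/s, peaks mid-swing
for ti, u in zip(t, tip_speed):
    print(f"t={ti:.2f} s  U={u:5.1f} m/s  f={aeolian_frequency(u, 0.010):7.1f} Hz")
```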

  15. Agent And Component Object Framework For Concept Design Modeling Of Mobile Cyber Physical Systems

    Science.gov (United States)

    2018-03-01

    base design, service-oriented architecture (SOA) and enterprise architecture, brought a new emphasis on business processes and business organization... there are some useful concepts that can be leveraged into an MIGVS architecture. The concept of modeling operational or business behavior logic as... Design: 1. Explicit meta-model for architecture concepts and relationships; 2. Support business or operational modeling and associated events; 3.

  16. PHYSICAL OBJECT-ORIENTED MODELING IN DEVELOPMENT OF INDIVIDUALIZED TEACHING AND ORGANIZATION OF MINI-RESEARCH IN MECHANICS COURSES

    Directory of Open Access Journals (Sweden)

    Alexander S. Chirtsov

    2017-03-01

    Full Text Available Subject of Research. The paper presents a relatively simple method to develop interactive computer models of physical systems, without computer programming skills, via automatic generation of the numerical computer code for complex physical systems. The developed computer models are available over the Internet for educational purposes and can be edited by users in an unlimited number of ways. The applicability of computer simulations to massive open individualized teaching and to the organization of undergraduate research is also discussed. Method. The presented approach employs an original physical object-oriented modeling method, which extends object-oriented programming ideas to the task of developing simulations of complex physical systems. In this framework, a computer model of the physical system is constructed as a set of interconnected computer objects simulating the system components: particles and fields. Interactions between the system components are described by self-adapting algorithms that are specified during the model initiation stage and are set according to either the classical or the relativistic approach. The technique requires neither a priori knowledge regarding the evolution of the physical system nor a formulation of the differential equations describing it. Main Results. Testing of the numerical implementation and of the accuracy of the algorithms was performed using benchmarks with known analytical solutions. The developed method - a physical reality constructor - has provided an opportunity to assemble a series of computer models demonstrating physical phenomena studied in high school and university mechanics courses. More than 150 original interactive models were included in collections of multi-level multimedia resources to support the teaching of mechanics. The physical reality constructor was successfully tested to serve as a test bed for the independent
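
    The particle-and-field object structure can be illustrated with a minimal classical (non-relativistic) sketch; the Particle and UniformField classes and the explicit Euler update below are simplifications, not the actual constructor.

```python
import numpy as np

class UniformField:
    """A field object: returns the force it exerts on a given particle."""
    def __init__(self, g=(0.0, -9.81)):
        self.g = np.asarray(g, dtype=float)

    def force_on(self, particle):
        return particle.mass * self.g

class Particle:
    """A particle object: integrates its own motion from the forces of attached fields."""
    def __init__(self, mass, position, velocity):
        self.mass = mass
        self.position = np.asarray(position, dtype=float)
        self.velocity = np.asarray(velocity, dtype=float)
        self.fields = []

    def attach(self, field):
        self.fields.append(field)

    def step(self, dt):
        total_force = sum(f.force_on(self) for f in self.fields)
        self.velocity += total_force / self.mass * dt    # classical update rule
        self.position += self.velocity * dt

# Assemble a tiny model: one projectile in a uniform gravity field.
ball = Particle(mass=0.1, position=[0.0, 0.0], velocity=[3.0, 4.0])
ball.attach(UniformField())
for _ in range(100):
    ball.step(0.01)
print(ball.position.round(3))                            # position after 1 s of flight
```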

  17. Data and information needs for WPP testing and component modeling

    International Nuclear Information System (INIS)

    Kuhn, W.L.

    1987-01-01

    The modeling task of the Waste Package Program (WPP) is to develop conceptual models that describe the interactions of waste package components with their environment and the interactions among waste package components. The task includes development and maintenance of a database of experimental data, and statistical analyses to fit model coefficients, test the significance of the fits, and propose experimental designs. The modeling task collaborates with experimentalists to apply physicochemical principles to develop the conceptual models, with emphasis on the subsequent mathematical development. The reason for including the modeling task in the predominantly experimental WPP is to keep the modeling of component behavior closely associated with the experimentation. Whenever possible, waste package degradation processes are described in terms of chemical reactions or transport processes. The integration of equations for assumed or calculated repository conditions predicts variations with time in the repository. Within the context of the waste package program, the composition and rate of arrival of brine to the waste package are environmental variables. These define the environment to be simulated or explored during waste package component and interactions testing. The containment period is characterized by rapid changes in temperature, pressure, oxygen fugacity, and salt porosity. Brine migration is expected to be most rapid during this period. The release period is characterized by modest and slowly changing temperatures, high pressure, low oxygen fugacity, and low porosity. The need is to define the scenario within which waste package degradation calculations are to be made and to quantify the rate of arrival and composition of the brine. Appendix contains 4 vugraphs

  18. Moving object detection using dynamic motion modelling from UAV aerial images.

    Science.gov (United States)

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2014-01-01

    Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue because proper motion estimation has not been taken into account. Existing approaches to moving object detection from UAV aerial images have not dealt with motion-based pixel intensity measurement to detect moving objects robustly. Besides, current research on moving object detection from UAV aerial images mostly depends on either a frame-difference or a segmentation approach used separately. There are two main purposes for this research: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation) with frame differencing embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only the specific area of a moving object rather than searching the whole frame. At each stage of the proposed scheme, the fusion of DMM and SUED extracts moving objects faithfully. Experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.
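
    A minimal frame-difference-plus-dilation pipeline, of the general kind the DMM/SUED scheme builds on, can be sketched with OpenCV as below; the threshold values and file names are placeholders, and this is not the authors' DMM or SUED implementation.

```python
import cv2
import numpy as np

def detect_moving_regions(prev_frame, curr_frame, thresh=25, min_area=100):
    """Frame difference followed by dilation to obtain candidate moving-object boxes."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)                          # pixel-intensity change
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)  # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

# Placeholder file names; in practice the frames come from consecutive UAV video frames.
prev_frame = cv2.imread("frame_000.png")
curr_frame = cv2.imread("frame_001.png")
if prev_frame is not None and curr_frame is not None:
    for x, y, w, h in detect_moving_regions(prev_frame, curr_frame):
        cv2.rectangle(curr_frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.png", curr_frame)
```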

  19. Balance between calibration objectives in a conceptual hydrological model

    NARCIS (Netherlands)

    Booij, Martijn J.; Krol, Martinus S.

    2010-01-01

    Three different measures to determine the optimum balance between calibration objectives are compared: the combined rank method, parameter identifiability and model validation. Four objectives (water balance, hydrograph shape, high flows, low flows) are included in each measure. The contributions of

  20. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  1. An object model for genome information at all levels of resolution

    Energy Technology Data Exchange (ETDEWEB)

    Honda, S.; Parrott, N.W.; Smith, R.; Lawrence, C.

    1993-12-31

    An object model for genome data at all levels of resolution is described. The model was derived by considering the requirements for representing genome related objects in three application domains: genome maps, large-scale DNA sequencing, and exploring functional information in gene and protein sequences. The methodology used for the object-oriented analysis is also described.

  2. Modeling Organic Contaminant Desorption from Municipal Solid Waste Components

    Science.gov (United States)

    Knappe, D. R.; Wu, B.; Barlaz, M. A.

    2002-12-01

    Approximately 25% of the sites on the National Priority List (NPL) of Superfund are municipal landfills that accepted hazardous waste. Unlined landfills typically result in groundwater contamination, and priority pollutants such as alkylbenzenes are often present. To select cost-effective risk management alternatives, better information on factors controlling the fate of hydrophobic organic contaminants (HOCs) in landfills is required. The objectives of this study were (1) to investigate the effects of HOC aging time, anaerobic sorbent decomposition, and leachate composition on HOC desorption rates, and (2) to simulate HOC desorption rates from polymers and biopolymer composites with suitable diffusion models. Experiments were conducted with individual components of municipal solid waste (MSW) including polyvinyl chloride (PVC), high-density polyethylene (HDPE), newsprint, office paper, and model food and yard waste (rabbit food). Each of the biopolymer composites (office paper, newsprint, rabbit food) was tested in both fresh and anaerobically decomposed form. To determine the effects of aging on alkylbenzene desorption rates, batch desorption tests were performed after sorbents were exposed to toluene for 30 and 250 days in flame-sealed ampules. Desorption tests showed that alkylbenzene desorption rates varied greatly among MSW components (PVC slowest, fresh rabbit food and newsprint fastest). Furthermore, desorption rates decreased as aging time increased. A single-parameter polymer diffusion model successfully described PVC and HDPE desorption data, but it failed to simulate desorption rate data for biopolymer composites. For biopolymer composites, a three-parameter biphasic polymer diffusion model was employed, which successfully simulated both the initial rapid and the subsequent slow desorption of toluene. Toluene desorption rates from MSW mixtures were predicted for typical MSW compositions in the years 1960 and 1997. For the older MSW mixture, which had a

  3. Object linking in repositories

    Science.gov (United States)

    Eichmann, David (Editor); Beck, Jon; Atkins, John; Bailey, Bill

    1992-01-01

    This topic is covered in three sections. The first section explores some of the architectural ramifications of extending the Eichmann/Atkins lattice-based classification scheme to encompass the assets of the full life cycle of software development. A model is considered that provides explicit links between objects in addition to the edges connecting classification vertices in the standard lattice. The second section gives a description of the efforts to implement the repository architecture using a commercially available object-oriented database management system. Some of the features of this implementation are described, and some of the next steps to be taken to produce a working prototype of the repository are pointed out. In the final section, it is argued that design and instantiation of reusable components have competing criteria (design-for-reuse strives for generality, design-with-reuse strives for specificity) and that providing mechanisms for each can be complementary rather than antagonistic. In particular, it is demonstrated how program slicing techniques can be applied to customization of reusable components.

  4. Priming Contour-Deleted Images: Evidence for Intermediate Representations in Visual Object Recognition.

    Science.gov (United States)

    Biederman, Irving; Cooper, Eric E.

    1991-01-01

    Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…

  5. A proposed centralised distribution model for the South African automotive component industry

    Directory of Open Access Journals (Sweden)

    Micheline J. Naude

    2009-12-01

    Purpose: This article explores the possibility of developing a distribution model, similar to the model developed and implemented by the South African pharmaceutical industry, which could be implemented by automotive component manufacturers for supply to independent retailers. Problem Investigated: The South African automotive components distribution chain is extensive with a number of players of varying sizes, from the larger spares distribution groups to a number of independent retailers. Distributing to the smaller independent retailers is costly for the automotive component manufacturers. Methodology: This study is based on a preliminary study of an explorative nature. Interviews were conducted with a senior staff member from a leading automotive component manufacturer in KwaZulu-Natal and nine participants at a senior management level at five of their main customers (aftermarket retailers). Findings: The findings from the empirical study suggest that the aftermarket component industry is mature with the role players well established. The distribution chain to the independent retailer is expensive in terms of transaction and distribution costs for the automotive component manufacturer. A proposed centralised distribution model for supply to independent retailers has been developed which should reduce distribution costs for the automotive component manufacturer in terms of (1) the lowest possible freight rate; (2) timely and controlled delivery; and (3) reduced congestion at the customer's receiving dock. Originality: This research is original in that it explores the possibility of implementing a centralised distribution model for independent retailers in the automotive component industry. Furthermore, there is a dearth of published research on the South African automotive component industry particularly addressing distribution issues. Conclusion: The distribution model as suggested is a practical one and should deliver added value to automotive

  6. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors

    KAUST Repository

    Simpson, Daniel

    2017-04-06

    In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.

  7. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors

    KAUST Repository

    Simpson, Daniel; Rue, Haavard; Riebler, Andrea; Martins, Thiago G.; Sørbye, Sigrunn H.

    2017-01-01

    In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.
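
    As a concrete illustration of the penalised-complexity idea (a sketch of the well-known PC prior for the precision of a Gaussian random effect, not code from the records above): the base model is a zero-variance effect, and the user-defined scaling statement P(sigma > U) = alpha fixes the rate of an exponential prior on the distance from that base model. The values of U and alpha below are illustrative.

        import numpy as np

        def pc_prior_precision(tau, U=1.0, alpha=0.01):
            """PC prior density for a precision parameter tau (tau = 1 / sigma^2)."""
            lam = -np.log(alpha) / U          # chosen so that P(sigma > U) = alpha
            sigma = tau ** -0.5
            # sigma ~ Exp(lam), transformed to the precision scale via |d sigma / d tau|
            return lam * np.exp(-lam * sigma) * 0.5 * tau ** -1.5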

  8. Feedforward Object-Vision Models Only Tolerate Small Image Variations Compared to Human

    Directory of Open Access Journals (Sweden)

    Masoud eGhodrati

    2014-07-01

    Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modelling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well when images with more complex variations of the same object are applied to them. Studies have shown that making sparse representations of objects, by extracting more informative visual features through a feedforward sweep, can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only for low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e. briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modelling. We show that this approach is not of significant help in solving the computational crux of object recognition (that is, invariant object recognition) when the identity-preserving image variations become more complex.

  9. Efficient view based 3-D object retrieval using Hidden Markov Model

    Science.gov (United States)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects are highly discriminative and admit a multi-view representation. State-of-the-art methods depend heavily on a particular camera array setting for capturing views of the 3-D object and use complex Zernike descriptors and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. An efficient and effective algorithm for 3-D object retrieval is therefore required. To move toward a general framework that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning that views can be captured from any direction without any camera array restriction. The views (including the query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. The HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
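
    A rough sketch (not the authors' EVBOR implementation) of how a set of per-view feature vectors could be modelled and scored with a Gaussian HMM using the hmmlearn package; the feature extraction step, the number of hidden states and the structure of the database dictionary are assumptions.

        import numpy as np
        from hmmlearn import hmm

        def build_query_model(view_features, n_states=4):
            """Fit a Gaussian HMM to a (n_views, n_features) array of view descriptors."""
            model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            model.fit(view_features)
            return model

        def rank_database(query_model, database):
            """Score each object's view set under the query model; higher scores rank first."""
            scores = {name: query_model.score(views) for name, views in database.items()}
            return sorted(scores, key=scores.get, reverse=True)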

  10. Infrared polarimetry and photometry of BL Lac objects. 3

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, P A; Brand, P W.J.L. [Edinburgh Univ. (UK). Dept. of Astronomy; Impey, C D [Hawaii Univ., Honolulu (USA). Inst. for Astronomy; Williams, P M [UKIRT, Hilo, HI (USA)

    1984-10-15

    The data presented here are part of a continuing monitoring programme of BL Lac objects with J, H and K photometry and polarimetry. A total of 30 BL Lac objects have now been observed photometrically. Infrared polarimetry has also been obtained for 24 of these objects. The sample is sufficiently large to examine statistically, and several important correlations have emerged. Internight variations and the wavelength dependence of polarization indicate that BL Lac objects, as a class, may be understood in terms of a relatively simple two-component model.

  11. Critically Important Object Security System Element Model

    Directory of Open Access Journals (Sweden)

    I. V. Khomyackov

    2012-03-01

    A stochastic model of a security system element for a critically important object has been developed. The model includes a mathematical description of the security system element's properties and of external influences. The state evolution of the element is described by a semi-Markov process with a finite number of states, a semi-Markov matrix and an initial distribution of the process states. External influences are specified by the intensity of a Poisson flow.

  12. Evaluating fugacity models for trace components in landfill gas

    Energy Technology Data Exchange (ETDEWEB)

    Shafi, Sophie [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Sweetman, Andrew [Department of Environmental Science, Lancaster University, Lancaster LA1 4YQ (United Kingdom); Hough, Rupert L. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Smith, Richard [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Rosevear, Alan [Science Group - Waste and Remediation, Environment Agency, Reading RG1 8DQ (United Kingdom); Pollard, Simon J.T. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom)]. E-mail: s.pollard@cranfield.ac.uk

    2006-12-15

    A fugacity approach was evaluated to reconcile loadings of vinyl chloride (chloroethene), benzene, 1,3-butadiene and trichloroethylene in waste with concentrations observed in landfill gas monitoring studies. An evaluative environment derived from fictitious but realistic properties such as volume, composition, and temperature, constructed with data from the Brogborough landfill (UK) test cells, was used to test a fugacity approach to generating the source term for use in landfill gas risk assessment models (e.g. GasSim). SOILVE, a dynamic Level II model adapted here for landfills, showed greatest utility for benzene and 1,3-butadiene, modelled under anaerobic conditions over a 10 year simulation. Modelled concentrations of these components (95 300 μg m⁻³; 43 μg m⁻³) fell within measured ranges observed in gas from landfills (24 300-180 000 μg m⁻³; 20-70 μg m⁻³). This study highlights the need (i) for representative and time-referenced biotransformation data; (ii) to evaluate the partitioning characteristics of organic matter within waste systems and (iii) for a better understanding of the role that gas extraction rate (flux) plays in producing trace component concentrations in landfill gas.
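
    A minimal sketch (not the SOILVE or GasSim implementation) of the basic fugacity bookkeeping such models build on: at equilibrium a common fugacity f distributes a chemical load across compartments according to their volumes and Z values (Level I reasoning; the dynamic Level II model adds reaction and advection terms). The compartment volumes and Z values below are invented for the example.

        def equilibrium_partitioning(total_mol, compartments):
            """Level I-style fugacity calculation.

            compartments: dict of name -> (volume in m3, Z in mol/(m3*Pa)).
            Returns the common fugacity (Pa) and per-compartment concentrations (mol/m3).
            """
            vz_sum = sum(v * z for v, z in compartments.values())
            f = total_mol / vz_sum                       # common fugacity at equilibrium
            return f, {name: z * f for name, (v, z) in compartments.items()}

        # Hypothetical landfill compartments (illustrative values only)
        f, conc = equilibrium_partitioning(
            total_mol=10.0,
            compartments={"gas": (1.0e4, 4.1e-4), "leachate": (5.0e2, 2.0e-2), "waste": (2.0e3, 5.0e-1)},
        )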

  13. Towards a three-component model of fan loyalty: a case study of Chinese youth.

    Directory of Open Access Journals (Sweden)

    Xiao-xiao Zhang

    The term "fan loyalty" refers to the loyalty felt and expressed by a fan towards the object of his/her fanaticism in both everyday and academic discourses. However, much of the literature on fan loyalty has paid little attention to the topic from the perspective of youth pop culture. The present study explored the meaning of fan loyalty in the context of China. Data were collected by the method of in-depth interviews with 16 young Chinese people aged between 19 and 25 years who currently or once were pop fans. The results indicated that fan loyalty entails three components: involvement, satisfaction, and affiliation. These three components regulate the process of fan loyalty development, which can be divided into four stages: inception, upgrade, zenith, and decline. This model provides a conceptual explanation of why and how young Chinese fans are loyal to their favorite stars. The implications of the findings are discussed.

  14. Towards a three-component model of fan loyalty: a case study of Chinese youth.

    Science.gov (United States)

    Zhang, Xiao-xiao; Liu, Li; Zhao, Xian; Zheng, Jian; Yang, Meng; Zhang, Ji-qi

    2015-01-01

    The term "fan loyalty" refers to the loyalty felt and expressed by a fan towards the object of his/her fanaticism in both everyday and academic discourses. However, much of the literature on fan loyalty has paid little attention to the topic from the perspective of youth pop culture. The present study explored the meaning of fan loyalty in the context of China. Data were collected by the method of in-depth interviews with 16 young Chinese people aged between 19 and 25 years who currently or once were pop fans. The results indicated that fan loyalty entails three components: involvement, satisfaction, and affiliation. These three components regulate the process of fan loyalty development, which can be divided into four stages: inception, upgrade, zenith, and decline. This model provides a conceptual explanation of why and how young Chinese fans are loyal to their favorite stars. The implications of the findings are discussed.

  15. A minimal model for two-component dark matter

    International Nuclear Information System (INIS)

    Esch, Sonja; Klasen, Michael; Yaguna, Carlos E.

    2014-01-01

    We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a Z_2 symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatible with all present bounds and examine their direct detection prospects at planned experiments. A generic feature of this model is that both particles give rise to observable signals in 1-ton direct detection experiments. In fact, such experiments will be able to probe even a subdominant dark matter component at the percent level.

  16. Combining multi-objective optimization and bayesian model averaging to calibrate forecast ensembles of soil hydraulic models

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Wohling, Thomas [NON LANL

    2008-01-01

    Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multi-objective optimization and Bayesian Model Averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multi-objective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM, and used to generate four different model ensembles. These ensembles are post-processed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are: (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increase with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multi-objective optimization and BMA framework proposed in this paper is very useful to generate forecast ensembles of soil hydraulic models.
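
    A small sketch (illustrative only, not the paper's code) of the Bayesian Model Averaging step: member forecasts are combined into a weighted mean, and the predictive variance sums a within-model term and a between-model spread term. The weights and variances would come from the BMA calibration.

        import numpy as np

        def bma_forecast(predictions, weights, sigma2):
            """Combine an ensemble of forecasts with BMA weights.

            predictions: (n_models, n_times) member forecasts
            weights:     (n_models,) BMA weights (normalized below)
            sigma2:      (n_models,) within-model error variances
            """
            weights = np.asarray(weights) / np.sum(weights)
            mean = weights @ predictions
            # Total predictive variance = within-model + between-model components
            var = weights @ sigma2 + weights @ (predictions - mean) ** 2
            return mean, var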

  17. Integrating environmental component models. Development of a software framework

    NARCIS (Netherlands)

    Schmitz, O.

    2014-01-01

    Integrated models consist of interacting component models that represent various natural and social systems. They are important tools to improve our understanding of environmental systems, to evaluate cause–effect relationships of human–natural interactions, and to forecast the behaviour of

  18. Information model of the 'Ukryttya' object

    International Nuclear Information System (INIS)

    Batij, E.V.; Ermolenko, A.A.; Kotlyarov, V.T.

    2008-01-01

    The building principles and content of the 'Ukryttya' object information model developed at the Institute for Safety Problems of NPP are described. The use of a client/server architecture (allowing simultaneous access by many users) together with Autodesk Map Guide and ASP.NET technologies avoids the typical shortcomings of 'stand-alone desktop' information systems aimed at a single user

  19. Two-objective on-line optimization of supervisory control strategy

    Energy Technology Data Exchange (ETDEWEB)

    Nassif, N.; Kajl, S.; Sabourin, R. [Ecole de Technologie Superieure, Montreal (Canada)

    2004-09-01

    The set points of supervisory control strategy are optimized with respect to energy use and thermal comfort for existing HVAC systems. The set point values of zone temperatures, supply duct static pressure, and supply air temperature are the problem variables, while energy use and thermal comfort are the objective functions. The HVAC system model includes all the individual component models developed and validated against the monitored data of an existing VAV system. It serves to calculate energy use during the optimization process, whereas the actual energy use is determined by using monitoring data and the appropriate validated component models. A comparison, done for one summer week, of actual and optimal energy use shows that the on-line implementation of a genetic algorithm optimization program to determine the optimal set points of supervisory control strategy could save energy by 19.5%, while satisfying the minimum zone airflow rates and the thermal comfort. The results also indicate that the application of the two-objective optimization problem can help control daily energy use or daily building thermal comfort, thus saving more energy than the application of the one-objective optimization problem. (Author)

  20. Generalized modeling of multi-component vaporization/condensation phenomena for multi-phase-flow analysis

    International Nuclear Information System (INIS)

    Morita, K.; Fukuda, K.; Tobita, Y.; Kondo, Sa.; Suzuki, T.; Maschek, W.

    2003-01-01

    A new multi-component vaporization/condensation (V/C) model was developed to provide a generalized model for safety analysis codes of liquid metal cooled reactors (LMRs). These codes simulate thermal-hydraulic phenomena of multi-phase, multi-component flows, which is essential to investigate core disruptive accidents of LMRs such as fast breeder reactors and accelerator driven systems. The developed model characterizes the V/C processes associated with phase transition by employing heat transfer and mass-diffusion limited models for analyses of relatively short-time-scale multi-phase, multi-component hydraulic problems, among which vaporization and condensation, or simultaneous heat and mass transfer, play an important role. The heat transfer limited model describes the non-equilibrium phase transition processes occurring at interfaces, while the mass-diffusion limited model is employed to represent effects of non-condensable gases and multi-component mixture on V/C processes. Verification of the model and method employed in the multi-component V/C model of a multi-phase flow code was performed successfully by analyzing a series of multi-bubble condensation experiments. The applicability of the model to the accident analysis of LMRs is also discussed by comparison between steam and metallic vapor systems. (orig.)

  1. Methodology for identification of parameters of models of control objects of an automatic trailing system

    Directory of Open Access Journals (Sweden)

    I.V. Zimchuk

    2017-04-01

    A determining factor for the successful synthesis of optimal control systems for different processes is the adequacy of the mathematical model of the control object. In practice, the parameters of the object can differ from those assumed a priori, so they need to be refined. In this context, the article presents the results of developing and applying a method for identifying the parameters of mathematical models of the control object of an automatic trailing system. The problem is solved under the assumption that the control object is fully controllable and observable and that its differential equation is known a priori; the coefficients of this equation are to be determined. The identification quality criterion is minimization of the integral of the squared identification error. The method is based on a state-space description of the object's dynamics. The identification equation is synthesized using a vector-matrix representation of the model and relates the coefficients of the state and control matrices to the object's inputs and outputs. The initial data for the calculation are experimental measurements of the object's phase coordinates in response to a typical input signal. Computing the model parameters reduces to solving a system of first-order equations. Application of the approach is illustrated by identifying the transfer function coefficients of a first-order control object. Results of digital simulation are presented and confirm the validity of the mathematical derivations. The approach enables identification of models of one-dimensional and multidimensional objects and does not require a large amount of computation. The order of the identified model is limited by the ability to measure the phase coordinates of the corresponding control object. The practical significance of the work is
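
    A minimal sketch of one common route to the kind of identification described above (an illustration under assumed data, not the article's algorithm): fitting the step response of a first-order object K/(Ts + 1) by least squares. The measurement arrays and noise level are placeholders.

        import numpy as np
        from scipy.optimize import curve_fit

        def step_response(t, K, T):
            """Unit-step response of a first-order object with transfer function K / (T s + 1)."""
            return K * (1.0 - np.exp(-t / T))

        # t_meas and y_meas would come from an experiment with a typical (step) input signal
        t_meas = np.linspace(0.0, 10.0, 50)
        y_meas = step_response(t_meas, K=2.0, T=1.5) + 0.02 * np.random.randn(t_meas.size)

        (K_hat, T_hat), _ = curve_fit(step_response, t_meas, y_meas, p0=(1.0, 1.0))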

  2. An object-oriented framework for the hadronic Monte-Carlo event generators

    International Nuclear Information System (INIS)

    Amelin, N.; Komogorov, M.

    1999-01-01

    We advocate the development of an object-oriented framework for hadronic Monte-Carlo (MC) event generators. The requirements of hadronic MC users and developers are discussed, as well as the commonalities among hadronic models. It is argued that a framework favours exploiting these commonalities, since common facilities are stable and need be developed only once. Such a framework can make the user session more convenient and productive, e.g., easy access to and editing of any model parameter, substitution of model components by alternative components without changing the code, and customized output offering either the full history of a generated event or specific information about the reaction final state. It can also increase the productivity of a hadronic model developer, particularly through the formalization of the hadronic model component structure and component collaborations. A component-based framework opens a way to organize a library of hadronic model components, which can be considered a pool of hadronic model building blocks. Basic features, code structure and working examples of the first framework version, built as the starting point, are briefly explained

  3. Engineering the object-relation database model in O-Raid

    Science.gov (United States)

    Dewan, Prasun; Vikram, Ashish; Bhargava, Bharat

    1989-01-01

    Raid is a distributed database system based on the relational model. O-Raid is an extension of the Raid system and will support complex data objects. The design of O-Raid is evolutionary and retains all features of relational database systems and those of a general-purpose object-oriented programming language. O-Raid has several novel properties. Objects, classes, and inheritance are supported together with a predicate-based relational query language. O-Raid objects are compatible with C++ objects and may be read and manipulated by a C++ program without any 'impedance mismatch'. Relations and columns within relations may themselves be treated as objects with associated variables and methods. Relations may contain heterogeneous objects, that is, objects of more than one class in a certain column, which can individually evolve by being reclassified. Special facilities are provided to reduce the data search in a relation containing complex objects.

  4. How to constrain multi-objective calibrations of the SWAT model using water balance components

    Science.gov (United States)

    Automated procedures are often used to provide adequate fits between hydrologic model estimates and observed data. While the models may provide good fits based upon numeric criteria, they may still not accurately represent the basic hydrologic characteristics of the represented watershed. Here we ...

  5. Multi-component fiber track modelling of diffusion-weighted magnetic resonance imaging data

    Directory of Open Access Journals (Sweden)

    Yasser M. Kadah

    2010-01-01

    In conventional diffusion tensor imaging (DTI) based on magnetic resonance data, each voxel is assumed to contain a single component whose diffusion properties can be fully represented by a single tensor. Even though this assumption can be valid in some cases, the general case involves a mixture of components, resulting in significant deviation from the single-tensor model. Hence, a strategy that decomposes the data with a mixture model has the potential to enhance the diagnostic value of DTI. This project works towards the development and experimental verification of a robust method for multi-component modelling of diffusion tensor imaging data. The new method demonstrates a significant error reduction relative to the single-component model while remaining practical for clinical applications, yielding more accurate fiber tracking results.
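
    A brief sketch of the kind of two-compartment signal model such mixture approaches build on (an illustration, not the paper's method): the diffusion-weighted signal is a volume-fraction-weighted sum of two tensor compartments, S(b, g) = S0*(f*exp(-b g'D1 g) + (1-f)*exp(-b g'D2 g)); the fraction f and the tensors can then be fitted to the measured signal.

        import numpy as np

        def two_tensor_signal(S0, f, D1, D2, bvals, bvecs):
            """Signal for a two-compartment tensor mixture.

            bvals: (n,) b-values in s/mm^2; bvecs: (n, 3) unit gradient directions;
            D1, D2: 3x3 diffusion tensors in mm^2/s; f: volume fraction of compartment 1.
            """
            adc1 = np.einsum("ni,ij,nj->n", bvecs, D1, bvecs)
            adc2 = np.einsum("ni,ij,nj->n", bvecs, D2, bvecs)
            return S0 * (f * np.exp(-bvals * adc1) + (1.0 - f) * np.exp(-bvals * adc2))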

  6. Spatially-Distributed Stream Flow and Nutrient Dynamics Simulations Using the Component-Based AgroEcoSystem-Watershed (AgES-W) Model

    Science.gov (United States)

    Ascough, J. C.; David, O.; Heathman, G. C.; Smith, D. R.; Green, T. R.; Krause, P.; Kipka, H.; Fink, M.

    2010-12-01

    The Object Modeling System 3 (OMS3), currently being developed by the USDA-ARS Agricultural Systems Research Unit and Colorado State University (Fort Collins, CO), provides a component-based environmental modeling framework which allows the implementation of single- or multi-process modules that can be developed and applied as custom-tailored model configurations. OMS3 as a “lightweight” modeling framework contains four primary foundations: modeling resources (e.g., components) annotated with modeling metadata; domain specific knowledge bases and ontologies; tools for calibration, sensitivity analysis, and model optimization; and methods for model integration and performance scalability. The core is able to manage modeling resources and development tools for model and simulation creation, execution, evaluation, and documentation. OMS3 is based on the Java platform but is highly interoperable with C, C++, and FORTRAN on all major operating systems and architectures. The ARS Conservation Effects Assessment Project (CEAP) Watershed Assessment Study (WAS) Project Plan provides detailed descriptions of ongoing research studies at 14 benchmark watersheds in the United States. In order to satisfy the requirements of CEAP WAS Objective 5 (“develop and verify regional watershed models that quantify environmental outcomes of conservation practices in major agricultural regions”), a new watershed model development approach was initiated to take advantage of OMS3 modeling framework capabilities. Specific objectives of this study were to: 1) disaggregate and refactor various agroecosystem models (e.g., J2K-S, SWAT, WEPP) and implement hydrological, N dynamics, and crop growth science components under OMS3, 2) assemble a new modular watershed scale model for fully-distributed transfer of water and N loading between land units and stream channels, and 3) evaluate the accuracy and applicability of the modular watershed model for estimating stream flow and N dynamics. The

  7. Object recognition in images via a factor graph model

    Science.gov (United States)

    He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu

    2018-04-01

    Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been used to address these problems, especially the 2-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture long-range correlations in the Bag-of-Words representation by constructing a network learning framework, in contrast to the lattice structure of a CRF. Furthermore, we explore a parameter learning algorithm based on gradient descent and the Loopy Sum-Product algorithm for the factor graph model. Experimental results on the Graz-02 dataset show that the recognition performance of our method in precision and recall is better than a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed approach.

  8. Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)

    Science.gov (United States)

    Gray, Justin S.; Briggs, Jeffery L.

    2008-01-01

    The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so ROSE frees the modeler to develop a library of standard modeling processes such as Design of Experiments, optimizers, parameter studies, and sensitivity studies which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well defined API and object structure. Both the API and object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.

  9. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  10. Control systems using mathematical models of technological objects ...

    African Journals Online (AJOL)

    Control systems using mathematical models of technological objects in the control loop. ... Journal of Fundamental and Applied Sciences ... Such mathematical models make it possible to specify the optimal operating modes of the considered ...

  11. A multi-objective optimization model for energy-efficiency building envelope retrofitting plan with rooftop PV system installation and maintenance

    International Nuclear Information System (INIS)

    Fan, Yuling; Xia, Xiaohua

    2017-01-01

    Highlights: • A multi-objective optimization model for building envelope retrofit is presented. • Facility performance degradation and maintenance is built into the model. • A rooftop PV system is introduced to produce electricity. • Economic factors including net present value and payback period are considered. - Abstract: Retrofitting existing buildings with energy-efficient facilities is an effective method to improve their energy efficiency, especially for old buildings. A multi-objective optimization model for building envelope retrofitting is presented. Envelope components including windows, external walls and roofs are considered to be retrofitted. Installation of a rooftop solar panel system is also taken into consideration in this study. Rooftop solar panels are modeled with their degradation and a maintenance scheme is studied for sustainability of energy and its long-term effect on the retrofitting plan. The purpose is to make the best use of financial investment to maximize energy savings and economic benefits. In particular, net present value, the payback period and energy savings are taken as the main performance indicators of the retrofitting plan. The multi-objective optimization problem is formulated as a non-linear integer programming problem and solved by a weighted sum method. Results of applying the designed retrofitting plan to a 50-year-old building consisting of 66 apartments demonstrated the effectiveness of the proposed model.
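
    A toy sketch of the weighted-sum scalarization mentioned above (one simple way to combine the objectives, not the paper's formulation): candidate retrofit plans are scored by a weighted sum of min-max normalized objectives. All plan names and numbers are invented.

        import numpy as np

        # Each candidate plan: (annual energy savings in kWh, net present value in currency units)
        plans = {
            "windows_only":     (12_000.0,  8_000.0),
            "walls_and_roof":   (20_000.0, 11_000.0),
            "envelope_plus_pv": (35_000.0,  9_500.0),
        }

        def weighted_sum_rank(plans, w_energy=0.5, w_npv=0.5):
            """Rank plans by a weighted sum of min-max normalized objectives (both maximized)."""
            vals = np.array(list(plans.values()))
            norm = (vals - vals.min(axis=0)) / (np.ptp(vals, axis=0) + 1e-12)
            scores = norm @ np.array([w_energy, w_npv])
            return sorted(zip(plans, scores), key=lambda kv: kv[1], reverse=True)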

  12. Modeling real conditions of 'Ukrytie' object in 3D measurement

    International Nuclear Information System (INIS)

    Podbereznyj, S.S.

    2001-01-01

    The article covers a technology for creating, on the basis of design software (AutoCad) and computer graphics and animation software (3D Studio, 3DS MAX), a 3D model of the geometrical parameters of the current condition of the building structures, technological equipment, fuel-containing materials, concrete and water of the ruined Unit 4, the 'Ukryttia' object, of the Chernobyl NPP. The model built using this technology will later serve as a basis for automating the design and computer modeling of processes at the 'Ukryttia' object

  13. Low-level profiling and MARTE-compatible modeling of software components for real-time systems

    NARCIS (Netherlands)

    Triantafyllidis, K.; Bondarev, E.; With, de P.H.N.

    2012-01-01

    In this paper, we present a method for (a) profiling of individual components at high accuracy level, (b) modeling of the components with the accurate data obtained from profiling, and (c) model conversion to the MARTE profile. The resulting performance models of individual components are used at

  14. Component Degradation Susceptibilities As The Bases For Modeling Reactor Aging Risk

    International Nuclear Information System (INIS)

    Unwin, Stephen D.; Lowry, Peter P.; Toyooka, Michael Y.

    2010-01-01

    The extension of nuclear power plant operating licenses beyond 60 years in the United States will be necessary if we are to meet national energy needs while addressing the issues of carbon and climate. Characterizing the operating risks associated with aging reactors is problematic because the principal tool for risk-informed decision-making, Probabilistic Risk Assessment (PRA), is not ideally-suited to addressing aging systems. The components most likely to drive risk in an aging reactor - the passives - receive limited treatment in PRA, and furthermore, standard PRA methods are based on the assumption of stationary failure rates: a condition unlikely to be met in an aging system. A critical barrier to modeling passives aging on the wide scale required for a PRA is that there is seldom sufficient field data to populate parametric failure models, and nor is there the availability of practical physics models to predict out-year component reliability. The methodology described here circumvents some of these data and modeling needs by using materials degradation metrics, integrated with conventional PRA models, to produce risk importance measures for specific aging mechanisms and component types. We suggest that these measures have multiple applications, from the risk-screening of components to the prioritization of materials research.

  15. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims...... analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...
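
    A small sketch (assuming scikit-learn; not the authors' algorithm) contrasting the two routes to sparse loadings compared above: ordinary PCA with small loadings thresholded to zero versus a sparse PCA fit in which sparsity is part of the optimization.

        import numpy as np
        from sklearn.decomposition import PCA, SparsePCA

        def thresholded_pca_loadings(X, n_components=3, cutoff=0.1):
            """Ordinary PCA followed by zeroing loadings below a magnitude cutoff."""
            loadings = PCA(n_components=n_components).fit(X).components_.copy()
            loadings[np.abs(loadings) < cutoff] = 0.0
            return loadings

        def sparse_pca_loadings(X, n_components=3, alpha=1.0):
            """Sparse PCA, where sparsity is enforced directly during fitting."""
            return SparsePCA(n_components=n_components, alpha=alpha).fit(X).components_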

  16. Photonic Beamformer Model Based on Analog Fiber-Optic Links’ Components

    International Nuclear Information System (INIS)

    Volkov, V A; Gordeev, D A; Ivanov, S I; Lavrov, A P; Saenko, I I

    2016-01-01

    The model of photonic beamformer for wideband microwave phased array antenna is investigated. The main features of the photonic beamformer model based on true-time-delay technique, DWDM technology and fiber chromatic dispersion are briefly analyzed. The performance characteristics of the key components of photonic beamformer for phased array antenna in the receive mode are examined. The beamformer model composed of the components available on the market of fiber-optic analog communication links is designed and tentatively investigated. Experimental demonstration of the designed model beamforming features includes actual measurement of 5-element microwave linear array antenna far-field patterns in 6-16 GHz frequency range for antenna pattern steering up to 40°. The results of experimental testing show good accordance with the calculation estimates. (paper)

  17. An architecture for object-oriented intelligent control of power systems in space

    Science.gov (United States)

    Holmquist, Sven G.; Jayaram, Prakash; Jansen, Ben H.

    1993-01-01

    A control system for autonomous distribution and control of electrical power during space missions is being developed. This system should free the astronauts from localizing faults and reconfiguring loads if problems with the power distribution and generation components occur. The control system uses an object-oriented simulation model of the power system and first principle knowledge to detect, identify, and isolate faults. Each power system component is represented as a separate object with knowledge of its normal behavior. The reasoning process takes place at three different levels of abstraction: the Physical Component Model (PCM) level, the Electrical Equivalent Model (EEM) level, and the Functional System Model (FSM) level, with the PCM the lowest level of abstraction and the FSM the highest. At the EEM level the power system components are reasoned about as their electrical equivalents, e.g, a resistive load is thought of as a resistor. However, at the PCM level detailed knowledge about the component's specific characteristics is taken into account. The FSM level models the system at the subsystem level, a level appropriate for reconfiguration and scheduling. The control system operates in two modes, a reactive and a proactive mode, simultaneously. In the reactive mode the control system receives measurement data from the power system and compares these values with values determined through simulation to detect the existence of a fault. The nature of the fault is then identified through a model-based reasoning process using mainly the EEM. Compound component models are constructed at the EEM level and used in the fault identification process. In the proactive mode the reasoning takes place at the PCM level. Individual components determine their future health status using a physical model and measured historical data. In case changes in the health status seem imminent the component warns the control system about its impending failure. The fault isolation

  18. Object-oriented analysis and design of a health care management information system.

    Science.gov (United States)

    Krol, M; Reich, D L

    1999-04-01

    We have created a prototype for a universal object-oriented model of a health care system compatible with the object-oriented approach used in version 3.0 of the HL7 standard for communication messages. A set of three models has been developed: (1) the Object Model describes the hierarchical structure of objects in a system--their identity, relationships, attributes, and operations; (2) the Dynamic Model represents the sequence of operations in time as a collection of state diagrams for object classes in the system; and (3) the Functional Model represents the transformation of data within a system by means of data flow diagrams. Within these models, we have defined major object classes of health care participants and their subclasses, associations, attributes and operators, states, and behavioral scenarios. We have also defined the major processes and subprocesses. The top-down design approach allows use, reuse, and cloning of standard components.

  19. Research on development model of nuclear component based on life cycle management

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2005-01-01

    At present, both the development process of nuclear components and the components themselves are increasingly supported by computer technology. This increasing use of computers and software has accelerated the development of nuclear technology on the one hand and brought new problems on the other. In particular, the combination of hardware, software and humans has increased the complexity of nuclear component systems to an unprecedented level. To address this problem, Life Cycle Management technology is adopted for nuclear component systems, and an in-depth discussion of the development process of a nuclear component is presented. According to the characteristics of nuclear component development, such as the complexity and strict safety requirements of the components, the long design period, changeable design specifications and requirements, high capital investment, and the need to satisfy engineering codes/standards, a development life-cycle model of nuclear components is presented. The development life-cycle model is divided into three levels, namely the component-level development life-cycle, the sub-component development life-cycle and the component-level verification/certification life-cycle. The purposes and outcomes of the development processes are stated in detail. A process framework for nuclear components based on systems engineering and a development environment for nuclear components are discussed as future research work. (authors)

  20. Final Report: Legion Core Object Model, March 1, 1996 - September 30, 1999

    Energy Technology Data Exchange (ETDEWEB)

    Grimshaw, Andrew S.

    1999-09-30

    The model specifies the composition and functionality of Legion's core objects - those objects that cooperate to create, locate, manage, and remove objects from the Legion system. In particular, the object model facilitates a flexible extensible implementation, provides a single persistent name space, grants site autonomy to participating organizations, and scales to millions of sites and trillions of objects. Further, it offers a framework that is well suited to providing mechanisms for high performance, security, fault tolerance and commerce.

  1. Superluminal motion of extragalactic objects

    International Nuclear Information System (INIS)

    Matveenko, L.I.

    1983-01-01

    Extragalactic objects with active nuclei are reviewed. Experimental data are obtained by very-long-baseline radio interferometry. The main peculiarities of the complex structure of Seyfert galaxies, quasars and BL Lacertae objects are considered: the distribution of radio brightness, spectra, variation of the radiation flux density, and the distance between the components of sources. The observed superluminal velocities of component separation are explained by different causes: fast motion of components, a considerably different Hubble constant or a non-cosmological nature of the redshift of the objects, echo-reflection of radiation, gravitational lensing, systematic variation of the optical thickness of the object, synchrotron radiation of electrons in a dipole magnetic field, as well as various kinematic illusions connected with the finite time of signal propagation

  2. Multi-objective compared to single-objective optimization with application to model validation and uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Schulze-Riegert, R.; Krosche, M.; Stekolschikov, K. [Scandpower Petroleum Technology GmbH, Hamburg (Germany); Fahimuddin, A. [Technische Univ. Braunschweig (Germany)

    2007-09-13

    History Matching in Reservoir Simulation, well location and production optimization etc. is generally a multi-objective optimization problem. The problem statement of history matching for a realistic field case includes many field and well measurements in time and type, e.g. pressure measurements, fluid rates, events such as water and gas break-throughs, etc. Uncertainty parameters modified as part of the history matching process have varying impact on the improvement of the match criteria. Competing match criteria often reduce the likelihood of finding an acceptable history match. It is an engineering challenge in manual history matching processes to identify competing objectives and to implement the changes required in the simulation model. In production optimization or scenario optimization the focus on one key optimization criterion such as NPV limits the identification of alternatives and potential opportunities, since multiple objectives are summarized in a predefined global objective formulation. Previous works primarily focus on a specific optimization method. Few works actually concentrate on the objective formulation and multi-objective optimization schemes have not yet been applied to reservoir simulations. This paper presents a multi-objective optimization approach applicable to reservoir simulation. It addresses the problem of multi-objective criteria in a history matching study and presents analysis techniques identifying competing match criteria. A Pareto-Optimizer is discussed and the implementation of that multi-objective optimization scheme is applied to a case study. Results are compared to a single-objective optimization method. (orig.)
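
    A compact sketch (illustrative data, not the paper's optimizer) of the Pareto filtering that separates a multi-objective treatment from a single weighted objective: a simulation run is kept if no other run is at least as good on every match criterion and strictly better on at least one.

        import numpy as np

        def pareto_front(objectives):
            """Return indices of non-dominated rows; all criteria are mismatches to minimize."""
            objectives = np.asarray(objectives, dtype=float)
            keep = []
            for i, row in enumerate(objectives):
                dominated = np.any(
                    np.all(objectives <= row, axis=1) & np.any(objectives < row, axis=1)
                )
                if not dominated:
                    keep.append(i)
            return keep

        # Hypothetical mismatch values for (pressure match, water-cut match) per run
        runs = [[1.0, 3.0], [2.0, 1.0], [2.5, 2.5], [0.8, 4.0]]
        front = pareto_front(runs)   # indices of runs on the trade-off surface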

  3. Models for composing software : an analysis of software composition and objects

    NARCIS (Netherlands)

    Bergmans, Lodewijk

    1999-01-01

    In this report, we investigate component-based software construction with a focus on composition. In particular we try to analyze the requirements and issues for components and software composition. As a means to understand this research area, we introduce a canonical model for representing

  4. Child's objection to non-beneficial research: capacity and distress based models.

    Science.gov (United States)

    Waligora, Marcin; Różyńska, Joanna; Piasecki, Jan

    2016-03-01

    A child's objection, refusal and dissent regarding participation in non-beneficial biomedical research must be respected, even when the parents or legal representatives have given their permission. There is, however, no consensus on the definition and criteria of a meaningful and valid child's objection. The aim of this article is to clarify this issue. In the first part we describe the problems of a child's assent in research. In the second part we distinguish and analyze two models of a child's objection to research: the capacity-based model and the distress-based model. In the last part we present arguments for a broader and unified understanding of a child's objection within regulations and practices. This will strengthen children's rights and facilitate the entire process of assessment of research protocols.

  5. The n-component cubic model and flows: subgraph break-collapse method

    International Nuclear Information System (INIS)

    Essam, J.W.; Magalhaes, A.C.N. de.

    1988-01-01

    We generalise to the n-component cubic model the subgraph break-collapse method which we previously developed for the Potts model. The relations used are based on expressions which we recently derived for the Z(λ) model in terms of mod-λ flows. Our recursive algorithm is similar, for n = 2, to the break-collapse method for the Z(4) model proposed by Mariz and coworkers. It allows the exact calculation for the partition function and correlation functions for n-component cubic clusters with n as a variable, without the need to examine all of the spin configurations. (author) [pt

  6. Field Model: An Object-Oriented Data Model for Fields

    Science.gov (United States)

    Moran, Patrick J.

    2001-01-01

    We present an extensible, object-oriented data model designed for field data entitled Field Model (FM). FM objects can represent a wide variety of fields, including fields of arbitrary dimension and node type. FM can also handle time-series data. FM achieves generality through carefully selected topological primitives and through an implementation that leverages the potential of templated C++. FM supports fields where the node values are paired with any cell type. Thus FM can represent data where the field nodes are paired with the vertices ("vertex-centered" data), fields where the nodes are paired with the D-dimensional cells in R^D (often called "cell-centered" data), as well as fields where nodes are paired with edges or other cell types. FM is designed to effectively handle very large data sets; in particular FM employs a demand-driven evaluation strategy that works especially well with large field data. Finally, the interfaces developed for FM have the potential to effectively abstract field data based on adaptive meshes. We present initial results with a triangular adaptive grid in R^2 and discuss how the same design abstractions would work equally well with other adaptive-grid variations, including meshes in R^3.

  7. MODELING OF TECHNICAL CHANNELS OF INFORMATION LEAKAGE AT DISTRIBUTED CONTROL OBJECTS

    Directory of Open Access Journals (Sweden)

    Aleksander Vladimirovich Karpov

    2018-05-01

    Full Text Available The significantly increased requirements for the functioning of distributed control objects cannot be met solely by widening and strengthening security control measures. The first step in ensuring information security at such objects is to analyse their operating conditions and to model the technical channels of information leakage. Developing models of such channels is essentially the only way to study their capabilities fully, and it is aimed at obtaining quantitative assessments of the safe operation of compound objects. These assessments are needed to decide on the degree of protection against information leakage according to the adopted criterion. The existing models were developed for standard concentrated objects and evaluate the level of protection against leakage for each channel separately, which considerably increases the required protective resources and the time needed to assess information security for an object as a whole. The article presents a logical-and-probabilistic method for the security assessment of structurally compound objects, illustrated by a model of information leakage at distributed control objects. It is recommended to use a software package for automated structural-logical modeling of compound systems, which makes it possible to evaluate the risk of information leakage through the acoustic (loudspeaker) channel. The possibility of information leakage through technical channels is evaluated, and differential characteristics of the safe operation of distributed control objects, such as the positive and negative contributions of the initiating events and conditions that cause a leak, are calculated. Purpose. The aim is a quantitative assessment of data risk, which is necessary for justifying a rational set of organizational and technical protection measures, as well as a variant of the structure of the information security system from a

  8. Coastal Modelling Environment version 1.0: a framework for integrating landform-specific component models in order to simulate decadal to centennial morphological changes on complex coasts

    Directory of Open Access Journals (Sweden)

    A. Payo

    2017-07-01

    Full Text Available The ability to model morphological changes on complex, multi-landform coasts over decadal to centennial timescales is essential for sustainable coastal management worldwide. One approach involves coupling of landform-specific simulation models (e.g. cliffs, beaches, dunes and estuaries) that have been independently developed. An alternative, novel approach explored in this paper is to capture the essential characteristics of the landform-specific models using a common spatial representation within an appropriate software framework. This avoids the problems of the model-coupling approach that result from between-model differences in the conceptualizations of geometries, volumes and locations of sediment. In the proposed framework, the Coastal Modelling Environment (CoastalME), change in coastal morphology is represented by means of dynamically linked raster and geometrical objects. A grid of raster cells provides the data structure for representing quasi-3-D spatial heterogeneity and sediment conservation. Other geometrical objects (lines, areas and volumes) that are consistent with, and derived from, the raster structure represent a library of coastal elements (e.g. shoreline, beach profiles and estuary volumes) as required by different landform-specific models. As a proof-of-concept, we illustrate the capabilities of an initial version of CoastalME by integrating a cliff–beach model and two wave propagation approaches. We verify that CoastalME can reproduce behaviours of the component landform-specific models. Additionally, the integration of these component models within the CoastalME framework reveals behaviours that emerge from the interaction of landforms, which have not previously been captured, such as the influence of the regional bathymetry on the local alongshore sediment-transport gradient and the resulting effect on coastal change along an undefended coastal segment and on sediment bypassing of coastal structures.
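
    To illustrate the linked raster/vector idea described above, in which raster cells carry the elevation and sediment state while geometrical objects such as the shoreline are derived from them, here is a minimal hypothetical sketch. It is not CoastalME code; the function name and the toy grid are invented.

    def shoreline_from_raster(elevation, sea_level=0.0):
        """Return the (row, col) land cells that touch a wet 4-neighbour."""
        rows, cols = len(elevation), len(elevation[0])
        wet = lambda r, c: elevation[r][c] <= sea_level
        line = []
        for r in range(rows):
            for c in range(cols):
                if not wet(r, c):
                    neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                    if any(0 <= i < rows and 0 <= j < cols and wet(i, j)
                           for i, j in neighbours):
                        line.append((r, c))
        return line

    # toy elevation grid (invented values); cells at or below sea level are "wet"
    elevation = [[-2.0, -1.0, 0.5, 1.2],
                 [-2.0, -0.5, 0.8, 1.5],
                 [-2.5, -1.2, 0.4, 1.0]]
    print(shoreline_from_raster(elevation))  # coastal cells from which a polyline object is built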

  9. C++, object-oriented programming, and astronomical data models

    Science.gov (United States)

    Farris, A.

    1992-01-01

    Contemporary astronomy is characterized by increasingly complex instruments and observational techniques, higher data collection rates, and large data archives, placing severe stress on software analysis systems. The object-oriented paradigm represents a significant new approach to software design and implementation that holds great promise for dealing with this increased complexity. The basic concepts of this approach will be characterized in contrast to more traditional procedure-oriented approaches. The fundamental features of object-oriented programming will be discussed from a C++ programming language perspective, using examples familiar to astronomers. This discussion will focus on objects, classes and their relevance to the data type system; the principle of information hiding; and the use of inheritance to implement generalization/specialization relationships. Drawing on the object-oriented approach, features of a new database model to support astronomical data analysis will be presented.
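
    The concepts listed in the abstract (classes and the type system, information hiding, and inheritance for generalization/specialization) can be sketched compactly. The paper works in C++, but for consistency with the other sketches in this listing the illustration below is written in Python; the astronomical image classes are hypothetical examples, not the paper's design.

    # hypothetical classes, not the paper's design
    class Image:
        def __init__(self, width, height):
            self._pixels = [0.0] * (width * height)   # "hidden" representation (information hiding)
            self._width = width

        def pixel(self, x, y):                         # public accessor only
            return self._pixels[y * self._width + x]

        def set_pixel(self, x, y, value):
            self._pixels[y * self._width + x] = value

    class CalibratedImage(Image):                      # generalization/specialization via inheritance
        def __init__(self, width, height, bias=0.0):
            super().__init__(width, height)
            self.bias = bias

        def pixel(self, x, y):                         # specialized behaviour reuses the base class
            return super().pixel(x, y) - self.bias

    img = CalibratedImage(2, 2, bias=1.5)
    img.set_pixel(0, 0, 10.0)
    print(img.pixel(0, 0))   # 8.5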

  10. RANCANGAN DATABASE SUBSISTEM PRODUKSI DENGAN PENDEKATAN SEMANTIC OBJECT MODEL

    Directory of Open Access Journals (Sweden)

    Oviliani Yenty Yuliana

    2002-01-01

    Full Text Available To compete in the global market, business performers active in industry fields need to obtain information quickly and accurately so that they can make precise decisions. Traditional cost accounting systems cannot provide sufficient information, so many industries are shifting to the Activity-Based Costing (ABC) system. The ABC system is more complex and requires more data to be stored and processed, so information technology and databases have to be applied to a greater extent than with traditional cost accounting systems. Recent developments in software technology mean that constructing the application program is no longer the problem; the primary problem is how to design the database so that it presents information quickly and accurately. For that reason the model has to be built first. This paper discusses database modelling with the semantic object model approach. This model is easier to use and generates a more normalized database design than the entity-relationship model approach.

  11. A Four–Component Model of Age–Related Memory Change

    Science.gov (United States)

    Healey, M. Karl; Kahana, Michael J.

    2015-01-01

    We develop a novel, computationally explicit, theory of age–related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search. We introduce a set of benchmark findings from the free recall and recognition tasks that includes aspects of memory performance that show both age-related stability and decline. We test aging theories by lesioning the corresponding mechanisms in a model fit to younger adult free recall data. When effects are considered in isolation, many theories provide an adequate account, but when all effects are considered simultaneously, the existing theories fail. We develop a novel theory by fitting the full model (i.e., allowing all parameters to vary) to individual participants and comparing the distributions of parameter values for older and younger adults. This theory implicates four components: 1) the ability to sustain attention across an encoding episode, 2) the ability to retrieve contextual representations for use as retrieval cues, 3) the ability to monitor retrievals and reject intrusions, and 4) the level of noise in retrieval competitions. We extend CMR2 to simulate a recognition memory task using the same mechanisms the free recall model uses to reject intrusions. Without fitting any additional parameters, the four–component theory that accounts for age differences in free recall predicts the magnitude of age differences in recognition memory accuracy. Confirming a prediction of the model, free recall intrusion rates correlate positively with recognition false alarm rates. Thus we provide a four–component theory of a complex pattern of age differences across two key laboratory tasks. PMID:26501233

  12. Controlling Business Object States in Business Process Models to Support Compliance

    OpenAIRE

    Peņicina, L

    2016-01-01

    The doctoral thesis addresses the existing gap between business process models and states of business objects. Existing modelling methods such as BPMN and ArchiMate lack an explicitly declarative approach for capturing states of business objects and laws of state transitions. This gap hinders the compliance of business process models with regulations imposed internally or externally, and can result in potential legal problems for organizations. Also this g...

  13. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    Science.gov (United States)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria - a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This comparative study

  14. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    Science.gov (United States)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment
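
    The contrast drawn above between a weighted-sum objective and an explicit tradeoff can be made concrete with a small sketch: compute two conflicting Nash-Sutcliffe scores for candidate parameter sets and keep only the non-dominated ones. This is a generic illustration, not the MESH/Wolf Creek setup, and all numbers are invented.

    def nash_sutcliffe(obs, sim):
        mean_obs = sum(obs) / len(obs)
        num = sum((o - s) ** 2 for o, s in zip(obs, sim))
        den = sum((o - mean_obs) ** 2 for o in obs)
        return 1.0 - num / den

    def pareto_front(points):
        """Keep points not dominated by another when both objectives are maximized."""
        return [p for p in points
                if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)]

    # toy observed/simulated flows (invented values)
    obs = [5.0, 7.0, 30.0, 12.0, 6.0]
    sim = [6.0, 6.5, 24.0, 13.0, 5.5]
    print(round(nash_sutcliffe(obs, sim), 3))

    # toy candidates: (NS on high flows, NS on low flows) for different parameter sets
    candidates = [(0.85, 0.40), (0.70, 0.65), (0.60, 0.60), (0.80, 0.55)]
    print(pareto_front(candidates))   # the tradeoff set used to compare alternative models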

  15. Dew Point modelling using GEP based multi objective optimization

    OpenAIRE

    Shroff, Siddharth; Dabhi, Vipul

    2013-01-01

    Different techniques are used to model the relationship between temperature, dew point and relative humidity. Gene expression programming is capable of modelling complex realities with great accuracy while, at the same time, allowing the extraction of knowledge from the evolved models, in contrast to other learning algorithms. We aim to use Gene Expression Programming for modelling of dew point. Generally, the accuracy of the model is the only objective used by the selection mechanism of GEP. This will evolve...

  16. An object-oriented, coprocessor-accelerated model for ice sheet simulations

    Science.gov (United States)

    Seddik, H.; Greve, R.

    2013-12-01

    Recently, numerous models capable of modeling the thermo-dynamics of ice sheets have been developed within the ice sheet modeling community. Their capabilities have been characterized by a wide range of features with different numerical methods (finite difference or finite element), different implementations of the ice flow mechanics (shallow-ice, higher-order, full Stokes) and different treatments for the basal and coastal areas (basal hydrology, basal sliding, ice shelves). Shallow-ice models (SICOPOLIS, IcIES, PISM, etc.) have been widely used for modeling whole ice sheets (Greenland and Antarctica) due to the relatively low computational cost of the shallow-ice approximation, but higher-order (ISSM, AIF) and full Stokes (Elmer/Ice) models have recently been used to model the Greenland ice sheet. The advance in processor speed and the decrease in cost of accessing large amounts of memory and storage have undoubtedly been the driving force in the commoditization of models with higher capabilities, and the popularity of Elmer/Ice (http://elmerice.elmerfem.com) with an active user base is a notable representation of this trend. Elmer/Ice is a full Stokes model built on top of the multi-physics package Elmer (http://www.csc.fi/english/pages/elmer), which provides the full machinery for the complex finite element procedure and is fully parallel (mesh partitioning with OpenMPI communication). Elmer is mainly written in Fortran 90 and targets essentially traditional processors, as the code base was not initially written to run on modern coprocessors (yet adding support for the recently introduced x86-based coprocessors is possible). Furthermore, a truly modular and object-oriented implementation is required for quick adaptation to fast-evolving capabilities in hardware (Fortran 2003 provides an object-oriented programming model, but it is not clean and would require a tricky refactoring of the Elmer code). In this work, the object-oriented, coprocessor-accelerated finite element

  17. The Game Object Model and expansive learning: Creation ...

    African Journals Online (AJOL)

    The Game Object Model and expansive learning: Creation, instantiation, ... The aim of the paper is to develop insights into the design, integration, evaluation and use of video games in learning and teaching. ... AJOL African Journals Online.

  18. Development and investigation of aggregate models for nuclear objects with time shifts

    International Nuclear Information System (INIS)

    Gharakhanlou, J.; Kazachkov, I.V.

    2012-01-01

    The development and investigation of aggregate models for nuclear objects with shift arguments are discussed. The nonlinear differential equations of the model are described and the Cauchy problem is stated. The specific features of the mathematical model for potentially hazardous nuclear objects are analyzed and a computer simulation is presented.

  19. A General Polygon-based Deformable Model for Object Recognition

    DEFF Research Database (Denmark)

    Jensen, Rune Fisker; Carstensen, Jens Michael

    1999-01-01

    We propose a general scheme for object localization and recognition based on a deformable model. The model combines shape and image properties by warping an arbitrary prototype intensity template according to the deformation in shape. The shape deformations are constrained by a probabilistic distr...

  20. Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, T.J.; Long, K.S.; Sayre, J.A. [Sandia National Labs., Albuquerque, NM (United States); Hull, A.L. [Sandia National Labs., Livermore, CA (United States); Carey, D.A.; Sim, J.R.; Smith, M.G. [Allied-Signal Aerospace Co., Kansas City, MO (United States). Kansas City Div.

    1994-08-01

    The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.

  1. Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    Science.gov (United States)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  2. Modeling the evaporation of sessile multi-component droplets

    NARCIS (Netherlands)

    Diddens, C.; Kuerten, Johannes G.M.; van der Geld, C.W.M.; Wijshoff, H.M.A.

    2017-01-01

    We extended a mathematical model for the drying of sessile droplets, based on the lubrication approximation, to binary mixture droplets. This extension is relevant for e.g. inkjet printing applications, where ink consisting of several components are used. The extension involves the generalization of

  3. Integration of the Gene Ontology into an object-oriented architecture

    Directory of Open Access Journals (Sweden)

    Zheng W Jim

    2005-05-01

    Full Text Available Background: To standardize gene product descriptions, a formal vocabulary defined as the Gene Ontology (GO) has been developed. GO terms have been categorized into biological processes, molecular functions, and cellular components. However, there is no single representation that integrates all the terms into one cohesive model. Furthermore, GO definitions have little information explaining the underlying architecture that forms these terms, such as the dynamic and static events occurring in a process. In contrast, object-oriented models have been developed to show dynamic and static events. A portion of the TGF-beta signaling pathway, which is involved in numerous cellular events including cancer, differentiation and development, was used to demonstrate the feasibility of integrating the Gene Ontology into an object-oriented model. Results: Using object-oriented models we have captured the static and dynamic events that occur during a representative GO process, "transforming growth factor-beta (TGF-beta) receptor complex assembly" (GO:0007181). Conclusion: We demonstrate that the utility of GO terms can be enhanced by object-oriented technology, and that the GO terms can be integrated into an object-oriented model by serving as a basis for the generation of object functions and attributes.

  4. Binding Objects to Locations: The Relationship between Object Files and Visual Working Memory

    Science.gov (United States)

    Hollingworth, Andrew; Rasmussen, Ian P.

    2010-01-01

    The relationship between object files and visual working memory (VWM) was investigated in a new paradigm combining features of traditional VWM experiments (color change detection) and object-file experiments (memory for the properties of moving objects). Object-file theory was found to account for a key component of object-position binding in VWM:…

  5. Multi-objective optimization of GENIE Earth system models.

    Science.gov (United States)

    Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J

    2009-07-13

    The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.

  6. Three-dimensional model-based object recognition and segmentation in cluttered scenes.

    Science.gov (United States)

    Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn

    2006-10-01

    Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.
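
    The hash table-based voting step can be illustrated schematically: quantized descriptors from library models index a table, and descriptors from the scene cast votes for the models they hit. The sketch below is a much-simplified stand-in for the authors' tensor representation; the descriptors and model names are invented.

    from collections import defaultdict

    def build_table(model_descriptors):          # model name -> list of quantized descriptors
        table = defaultdict(set)
        for name, descs in model_descriptors.items():
            for d in descs:
                table[d].add(name)
        return table

    def vote(table, scene_descriptors):
        votes = defaultdict(int)
        for d in scene_descriptors:
            for name in table.get(d, ()):
                votes[name] += 1
        return sorted(votes.items(), key=lambda kv: kv[1], reverse=True)

    # invented toy library and scene descriptors
    library = {"mug": [(1, 2), (3, 4), (5, 6)], "phone": [(7, 8), (3, 4)]}
    scene = [(3, 4), (5, 6), (9, 9)]
    print(vote(build_table(library), scene))   # candidate models ordered by vote count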

  7. A model of objects based on KKS for the processing of alarms at the Angra 2 nuclear power plant

    International Nuclear Information System (INIS)

    Silva, Paulo Adriano da

    2000-03-01

    The purpose of this work is to present a new model of the alarm annunciation system of the Angra 2 nuclear power plant, using concepts of object-based modeling and taking as its basis the Angra 2 Systems and Components Identification System - KKS. The present structure of the Computerized Alarm System - CAS of Angra 2 does not permit a fast visualization of the incoming alarms in case a great number of them go off, because the monitors can only show 7 indications at a time. The model proposed herein permits a fast identification of the generated alarms, making it possible for the operator to have a general view of the current nuclear power plant status. Its managing tree structure has a hierarchical dependence among its nodes, from which the currently activated alarms are shown. Its man-machine interface is easy to interact with and to understand because it is based on a structure well known to the Angra 2 operators, namely the Angra 2 Systems and Components Identification System - KKS. The project was implemented in the form of an Angra 2 Alarms Supervision System (SSAA), and, for purposes of simulation, 5 systems of the Angra 2 Nuclear Power Plant have been chosen. The data used in the project, such as measurement KKS codes, measurement limits, units, setpoints, alarm texts and system flow diagrams, are actual data of the Angra 2 Nuclear Power Plant. The Visual Basic programming language has been used, with emphasis on object-oriented programming, which allows extension and modification without modifying the program code. Even though Visual Basic was used for programming, the model has shown, for its purpose, a satisfactory real-time execution. (author)
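
    The KKS-keyed hierarchical alarm structure described above can be sketched roughly as grouping active alarms by the system-level part of their identification codes. This is an illustrative sketch only, not the SSAA implementation (which was written in Visual Basic); the tags and alarm texts are invented.

    from collections import defaultdict

    def alarm_tree(active_alarms):
        """active_alarms: dict of KKS-style tag -> alarm text (tags invented)."""
        tree = defaultdict(dict)
        for tag, text in active_alarms.items():
            system = tag[:3]            # assumed: the first characters identify the plant system
            tree[system][tag] = text
        return tree

    alarms = {
        "JEA10CT001": "Coolant temperature high",
        "JEA10CP002": "Coolant pressure low",
        "LAB20CL005": "Feedwater level low",
    }
    for system, leaves in alarm_tree(alarms).items():
        print(system, "->", list(leaves))   # a system-level summary the operator can scan quickly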

  8. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  9. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    Science.gov (United States)

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.
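
    The general flow, collecting words that occur significantly often around the concept and then grouping them by a similarity measure, can be sketched as follows. This is only a toy illustration: difflib's string-similarity ratio stands in for the paper's lexical similarity measure, and the word list and threshold are invented.

    from difflib import SequenceMatcher

    def cluster(words, threshold=0.6):
        clusters = []
        for w in words:
            for c in clusters:
                if SequenceMatcher(None, w, c[0]).ratio() >= threshold:
                    c.append(w)
                    break
            else:
                clusters.append([w])
        return clusters

    # invented word list; the threshold is arbitrary
    words = ["novelty", "novel", "originality", "original", "value", "evaluation"]
    print(cluster(words))   # crude greedy grouping into candidate component themes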

  10. Generalization of risk concept in case risk components depend on time

    International Nuclear Information System (INIS)

    Volkov, Yu.V.

    2006-01-01

    Risk-assessment relations for nuclear technology objects have been obtained for cases in which risk components such as the accident probability and the consequent damage depend on time. Such a generalization of the risk concept opens new possibilities for performing probabilistic safety analysis, which are demonstrated with simple models in the present paper. As an example, the safety of a radioactive storage facility with one-component activity has been analyzed with a very simple model.
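
    The idea of risk components that depend on time can be illustrated numerically: with a time-dependent event rate p(t) and damage d(t), the risk accumulated over a period is approximated by a simple sum. The sketch below is not taken from the paper; the rate, decay constant and horizon are invented.

    import math

    def accumulated_risk(p, d, T, steps=1000):
        # rectangle-rule approximation of the integral of p(t) * d(t) over [0, T]
        dt = T / steps
        return sum(p(i * dt) * d(i * dt) * dt for i in range(steps))

    # assumed numbers: constant event rate, damage that decays with the stored activity
    p = lambda t: 1e-4                          # events per year
    d = lambda t: 100.0 * math.exp(-0.05 * t)   # consequence, falling as the activity decays
    print(round(accumulated_risk(p, d, T=50.0), 3))   # expected damage over 50 years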

  11. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan; Krebs-Smith, Susan M.; Midthune, Douglas; Perez, Adriana; Buckman, Dennis W.; Kipnis, Victor; Freedman, Laurence S.; Dodd, Kevin W.; Carroll, Raymond J

    2011-01-01

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components.We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Monte-Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  13. Automation of program model developing for complex structure control objects

    International Nuclear Information System (INIS)

    Ivanov, A.P.; Sizova, T.B.; Mikhejkina, N.D.; Sankovskij, G.A.; Tyufyagin, A.N.

    1991-01-01

    A brief description is given of software for automated model development: an integrating modular programming system, a program module generator and a program module library providing thermal-hydraulic calculation of process dynamics in power unit equipment components and simulation of on-line control system operation. Technical recommendations for model development are based on experience in creating concrete models of NPP power units. 8 refs., 1 tab., 4 figs

  14. Unsupervised Object Modeling and Segmentation with Symmetry Detection for Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Jui-Yuan Su

    2015-04-01

    Full Text Available In this paper we present a novel unsupervised approach to detecting and segmenting objects as well as their constituent symmetric parts in an image. Traditional unsupervised image segmentation is limited by two obvious deficiencies: the object detection accuracy degrades with the misaligned boundaries between the segmented regions and the target, and pre-learned models are required to group regions into meaningful objects. To tackle these difficulties, the proposed approach aims at incorporating the pair-wise detection of symmetric patches to achieve the goal of segmenting images into symmetric parts. The skeletons of these symmetric parts then provide estimates of the bounding boxes to locate the target objects. Finally, for each detected object, the graphcut-based segmentation algorithm is applied to find its contour. The proposed approach has significant advantages: no a priori object models are used, and multiple objects are detected. To verify the effectiveness of the approach based on the cues that a face part contains an oval shape and skin colors, human objects are extracted from among the detected objects. The detected human objects and their parts are finally tracked across video frames to capture the object part movements for learning the human activity models from video clips. Experimental results show that the proposed method gives good performance on publicly available datasets.

  15. A four-component model of age-related memory change.

    Science.gov (United States)

    Healey, M Karl; Kahana, Michael J

    2016-01-01

    We develop a novel, computationally explicit, theory of age-related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search. We introduce a set of benchmark findings from the free recall and recognition tasks that include aspects of memory performance that show both age-related stability and decline. We test aging theories by lesioning the corresponding mechanisms in a model fit to younger adult free recall data. When effects are considered in isolation, many theories provide an adequate account, but when all effects are considered simultaneously, the existing theories fail. We develop a novel theory by fitting the full model (i.e., allowing all parameters to vary) to individual participants and comparing the distributions of parameter values for older and younger adults. This theory implicates 4 components: (a) the ability to sustain attention across an encoding episode, (b) the ability to retrieve contextual representations for use as retrieval cues, (c) the ability to monitor retrievals and reject intrusions, and (d) the level of noise in retrieval competitions. We extend CMR2 to simulate a recognition memory task using the same mechanisms the free recall model uses to reject intrusions. Without fitting any additional parameters, the 4-component theory that accounts for age differences in free recall predicts the magnitude of age differences in recognition memory accuracy. Confirming a prediction of the model, free recall intrusion rates correlate positively with recognition false alarm rates. Thus, we provide a 4-component theory of a complex pattern of age differences across 2 key laboratory tasks. (c) 2015 APA, all rights reserved).

  16. Working memory contributes to the encoding of object location associations: Support for a 3-part model of object location memory.

    Science.gov (United States)

    Gillis, M Meredith; Garcia, Sarah; Hampstead, Benjamin M

    2016-09-15

    A recent model by Postma and colleagues posits that the encoding of object location associations (OLAs) requires the coordination of several cognitive processes mediated by ventral (object perception) and dorsal (spatial perception) visual pathways as well as the hippocampus (feature binding) [1]. Within this model, frontoparietal network recruitment is believed to contribute to both the spatial processing and working memory task demands. The current study used functional magnetic resonance imaging (fMRI) to test each step of this model in 15 participants who encoded OLAs and performed standard n-back tasks. As expected, object processing resulted in activation of the ventral visual stream. Object in location processing resulted in activation of both the ventral and dorsal visual streams as well as a lateral frontoparietal network. This condition was also the only one to result in medial temporal lobe activation, supporting its role in associative learning. A conjunction analysis revealed areas of shared activation between the working memory and object in location phase within the lateral frontoparietal network, anterior insula, and basal ganglia, consistent with prior working memory literature. Overall, findings support Postma and colleagues' model and provide clear evidence for the role of working memory during OLA encoding. Published by Elsevier B.V.

  17. Hybrid time/frequency domain modeling of nonlinear components

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    This paper presents a novel, three-phase hybrid time/frequency methodology for modelling of nonlinear components. The algorithm has been implemented in the DIgSILENT PowerFactory software using the DIgSILENT Programming Language (DPL), as a part of the work described in [1]. Modified HVDC benchmark...

  18. Polarized BRDF for coatings based on three-component assumption

    Science.gov (United States)

    Liu, Hong; Zhu, Jingping; Wang, Kai; Xu, Rong

    2017-02-01

    A pBRDF (polarized bidirectional reflection distribution function) model for coatings is given based on a three-component reflection assumption in order to improve the polarized scattering simulation capability for space objects. In this model, the specular reflection is modelled based on microfacet theory, while the multiple reflection and volume scattering are given separately according to experimental results. The polarization of the specular reflection is derived from Fresnel's law, and both multiple reflection and volume scattering are assumed to be depolarized. Simulation and measurement results for two satellite coating samples, SR107 and S781, are given to validate that the pBRDF modeling accuracy can be significantly improved by the three-component model given in this paper.
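
    A heavily simplified scalar sketch of the three-component idea is given below: a Fresnel-weighted specular lobe plus two depolarized terms for multiple reflection and volume scattering. The functional forms and constants are invented for illustration and are not the fitted model of the paper.

    import math

    def fresnel_unpolarized(cos_i, n=1.5):
        # average of s- and p-polarized Fresnel reflectances (air into coating)
        sin_i = math.sqrt(max(0.0, 1.0 - cos_i ** 2))
        sin_t = sin_i / n
        if sin_t >= 1.0:
            return 1.0
        cos_t = math.sqrt(1.0 - sin_t ** 2)
        rs = ((cos_i - n * cos_t) / (cos_i + n * cos_t)) ** 2
        rp = ((n * cos_i - cos_t) / (n * cos_i + cos_t)) ** 2
        return 0.5 * (rs + rp)

    def brdf(cos_i, cos_r, k_spec=0.6, k_multi=0.25, k_vol=0.15, roughness=0.1):
        # invented constants; crude microfacet-like specular lobe weighted by Fresnel reflectance
        spec = k_spec * fresnel_unpolarized(cos_i) * math.exp(-(1.0 - cos_r) ** 2 / roughness)
        multi = k_multi / math.pi   # depolarized multiple reflection, Lambertian-like
        vol = k_vol / math.pi       # depolarized volume scattering, Lambertian-like
        return spec + multi + vol

    angle = math.radians(30.0)
    print(round(brdf(math.cos(angle), math.cos(angle)), 4))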

  19. Toward Self-Referential Autonomous Learning of Object and Situation Models.

    Science.gov (United States)

    Damerow, Florian; Knoblauch, Andreas; Körner, Ursula; Eggert, Julian; Körner, Edgar

    2016-01-01

    Most current approaches to scene understanding lack the capability to adapt object and situation models to behavioral needs not anticipated by the human system designer. Here, we give a detailed description of a system architecture for self-referential autonomous learning which enables the refinement of object and situation models during operation in order to optimize behavior. This includes structural learning of hierarchical models for situations and behaviors that is triggered by a mismatch between expected and actual action outcome. Besides proposing architectural concepts, we also describe a first implementation of our system within a simulated traffic scenario to demonstrate the feasibility of our approach.

  20. Modeling and Simulation of Grasping of Deformable Objects

    DEFF Research Database (Denmark)

    Fugl, Andreas Rune

    Automated robot solutions have for decades been increasing productivity around the world. They are attractive for being fast, accurate and able to work in dangerous and repetitive environments. In traditional applications the grasped object is kinematically attached to the Tool Center Point. ... The purpose of this thesis is to address the modeling and simulation of deformable objects, as applied to robotic grasping and manipulation. The main contributions of this work are: An evaluation of 3D linear elasticity used for robot grasping as implemented by a Finite Difference Method supporting regular

  1. Soft object deformation monitoring and learning for model-based robotic hand manipulation.

    Science.gov (United States)

    Cretu, Ana-Maria; Payeur, Pierre; Petriu, Emil M

    2012-06-01

    This paper discusses the design and implementation of a framework that automatically extracts and monitors the shape deformations of soft objects from a video sequence and maps them with force measurements with the goal of providing the necessary information to the controller of a robotic hand to ensure safe model-based deformable object manipulation. Measurements corresponding to the interaction force at the level of the fingertips and to the position of the fingertips of a three-finger robotic hand are associated with the contours of a deformed object tracked in a series of images using neural-network approaches. The resulting model captures the behavior of the object and is able to predict its behavior for previously unseen interactions without any assumption on the object's material. The availability of such models can contribute to the improvement of a robotic hand controller, therefore allowing more accurate and stable grasp while providing more elaborate manipulation capabilities for deformable objects. Experiments performed for different objects, made of various materials, reveal that the method accurately captures and predicts the object's shape deformation while the object is submitted to external forces applied by the robot fingers. The proposed method is also fast and insensitive to severe contour deformations, as well as to smooth changes in lighting, contrast, and background.

  2. OBJECT ORIENTED MODELLING, A MODELLING METHOD OF AN ECONOMIC ORGANIZATION ACTIVITY

    Directory of Open Access Journals (Sweden)

    TĂNĂSESCU ANA

    2014-05-01

    Full Text Available Nowadays, most economic organizations use different types of information systems in order to facilitate their activity. There are different methodologies, methods and techniques that can be used to design information systems. In this paper, I present the advantages of using object-oriented modelling in the information system design of an economic organization. Thus, I have modelled the activity of a photo studio, using Visual Paradigm for UML as a modelling tool. For this purpose, I have identified the use cases for the analyzed system and presented the use case diagram. I have also carried out static and dynamic modelling of the system through the best-known UML diagrams.

  3. To the calculation of reduced cost capital component for power objects

    International Nuclear Information System (INIS)

    Andryushchenko, A.I.; Larin, E.A.

    1990-01-01

    A method for calculating the capitalized cost component that enables comparison of alternative power plant arrangement variants is suggested. It is shown that, in order to carry out technical-economic estimates in the power industry, the determination of the capitalized cost component must take into account capital construction expenditures as well as deductions for plant dismantling and the elimination of potential accidents.

  4. MODELING OF CONVECTIVE STREAMS IN PNEUMOBASIC OBJECTS (Part 2)

    Directory of Open Access Journals (Sweden)

    B. M. Khroustalev

    2015-01-01

    Full Text Available The article presents modeling for the investigation of aerodynamic processes on area sections (including a group of complex construction works) for different regimes of drop and wind streams and temperature conditions, and in complex construction works (for different regimes of heating and ventilation). Different programs were developed for solving innovation problems in the field of heat and mass exchange in the three-dimensional pressure-velocity-temperature space of objects. The field of use of pneumobasic objects includes: structures and roofs of tennis courts, hockey pitches and swimming pools, as well as exhibition buildings, circus buildings, cafes, aqua parks, studios, mobile objects for medical purposes, hangars, garages, construction sites, service stations, etc. The advantages of such objects are the possibility and simplicity of repeated installation and demolition. Their large-scale implementation is determined by the temperature-moisture conditions under the shells. Analytical and computational studies, and real measurements of the thermodynamic parameters of heat and mass exchange and of the multifactorial air processes in pneumobasic objects and their shells over a wide range of climatic air parameters (January – December in the Republic of Belarus, and in many geographical latitudes of many countries), have shown that there is very wide scope for optimizing wind loads, heat flow and acoustic effects (sports, residential, industrial, warehouse and military-technical units (tanks, airplanes, etc.)). In the modeling of convective flows in pneumobasic objects (part 1), processes with higher dynamic parameters of the air flow were considered for the characteristic pneumobasic object, and the velocity, temperature and pressure fields were calculated for air entering through the inflow holes at speeds of up to 5 m/sec at time instants of 20, 100, 200 and 400 sec. The calculation was performed using the developed mathematical

  5. THE INVESTMENT MODEL OF THE CONSTRUCTION OF PUBLIC OBJECTS

    Directory of Open Access Journals (Sweden)

    Reperger Šandor

    2009-11-01

    Full Text Available One of the possible models for the construction and use of sports objects, especially indoor facilities (sports centres, halls, swimming pools, shooting ranges and others), is cooperation between the public and private sector through the investment model of PPP (Public-Private Partnership). PPP construction is a new form of procuring civil works, already known in the developed countries, in which the tasks of planning, construction, operation and financing are carried out by the private sector, within the scope of a precisely elaborated cooperation with the state. The state engages the private sector for the administration of the civil works. Through public advertisements and contests it finds investors who accept administering certain public works themselves or with the help of project partners, using their own resources (with 60-85% of bank loans), and who secure the conditions for providing certain services (by using the objects, halls, etc.) until the expiration of the agreed deadline. The essence of PPP construction is that an investor from the private sector, chosen through a contest, realizes the project using its own means. The object becomes the property of the investor, who secures the regular functioning of the object with exclusive rights. The income from the operation belongs to the investor; in return, the costs of operating the object, its upkeep, as well as the costs of the personnel and public utilities are the responsibility of the investor. The public use of the object is realised in that the authorised ministry and the partner selected through the contest, in an agreement on the realization and operation of the object, accurately define the time of maintenance and the duration of the services provided in the social interest. For the time specified in the agreement the investor does not charge precisely defined users for general and specific services. As Serbia, with all its

  6. Layout Optimization Model for the Production Planning of Precast Concrete Building Components

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2018-05-01

    Full Text Available Precast concrete comprises the basic components of modular buildings. The efficiency of precast concrete building component production directly impacts the construction time and cost. In the processes of precast component production, mold setting has a significant influence on the production efficiency and cost, as well as reducing the resource consumption. However, the development of mold setting plans is left to the experience of production staff, with outcomes dependent on the quality of human skill and experience available. This can result in sub-optimal production efficiencies and resource wastage. Accordingly, in order to improve the efficiency of precast component production, this paper proposes an optimization model able to maximize the average utilization rate of pallets used during the molding process. The constraints considered were the order demand, the size of the pallet, layout methods, and the positional relationship of components. A heuristic algorithm was used to identify optimization solutions provided by the model. Through empirical analysis, and as exemplified in the case study, this research is significant in offering a prefabrication production planning model which improves pallet utilization rates, shortens component production time, reduces production costs, and improves the resource utilization. The results clearly demonstrate that the proposed method can facilitate the precast production plan providing strong practical implications for production planners.
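
    The objective of raising the average pallet utilization rate can be illustrated with a toy greedy (first-fit decreasing) packing by area. The paper's model is richer (pallet size, layout methods and positional constraints, solved with a heuristic algorithm); the sketch below and its numbers are only illustrative.

    def greedy_layout(component_areas, pallet_area):
        # invented component areas and pallet area; pack largest components first
        pallets = []                                     # each pallet: list of component areas
        for a in sorted(component_areas, reverse=True):
            for p in pallets:
                if sum(p) + a <= pallet_area:
                    p.append(a)
                    break
            else:
                pallets.append([a])
        utilization = sum(map(sum, pallets)) / (len(pallets) * pallet_area)
        return pallets, utilization

    pallets, util = greedy_layout([4.2, 3.8, 2.5, 2.5, 1.6, 1.2], pallet_area=8.0)
    print(len(pallets), "pallets, average utilization %.2f" % util)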

  7. Modeling Dynamic Objects in Monte Carlo Particle Transport Calculations

    International Nuclear Information System (INIS)

    Yegin, G.

    2008-01-01

    In this study, the Multi-Geometry geometry modeling technique was improved in order to handle moving objects in a Monte Carlo particle transport calculation. In the Multi-Geometry technique, the geometry is a superposition of objects, not surfaces. Using this feature, we developed a new algorithm which allows a user to enable or disable geometry elements during particle transport. A disabled object can be ignored at a certain stage of a calculation, and switching among identical copies of the same object located at adjacent points during a particle simulation corresponds to the movement of that object in space. We call this powerful feature the Dynamic Multi-Geometry (DMG) technique, which is used for the first time in the Brachy Dose Monte Carlo code to simulate HDR brachytherapy treatment systems. Our results showed that having disabled objects in a geometry does not affect the calculated dose values. This technique is also suitable for use in other areas such as IMRT treatment planning systems
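
    The enable/disable mechanism can be sketched abstractly: the geometry is a list of superposed objects, a point query returns the material of the last enabled object containing the point, and toggling identical copies between stages mimics a moving object. This is a conceptual sketch, not the code described in the record; the names and the 1-D geometry are invented.

    class GeomObject:
        def __init__(self, name, contains, material, enabled=True):
            self.name, self.contains, self.material, self.enabled = name, contains, material, enabled

    class Geometry:
        def __init__(self, objects, background="air"):
            self.objects, self.background = objects, background

        def material_at(self, point):
            for obj in reversed(self.objects):           # later objects take precedence
                if obj.enabled and obj.contains(point):
                    return obj.material
            return self.background

    # invented 1-D geometry: two copies of a source at adjacent positions;
    # enabling one copy at a time between stages "moves" the source
    src_a = GeomObject("src@A", lambda p: abs(p - 1.0) < 0.1, "source", enabled=True)
    src_b = GeomObject("src@B", lambda p: abs(p - 2.0) < 0.1, "source", enabled=False)
    geom = Geometry([src_a, src_b])
    print(geom.material_at(1.0), geom.material_at(2.0))   # source air
    src_a.enabled, src_b.enabled = False, True             # next stage: the source has moved
    print(geom.material_at(1.0), geom.material_at(2.0))   # air source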

  8. Flexible Multi-Objective Transmission Expansion Planning with Adjustable Risk Aversion

    Directory of Open Access Journals (Sweden)

    Jing Qiu

    2017-07-01

    Full Text Available This paper presents a multi-objective transmission expansion planning (TEP framework. Rather than using the conventional deterministic reliability criterion, a risk component based on the probabilistic reliability criterion is incorporated into the TEP objectives. This risk component can capture the stochastic nature of power systems, such as load and wind power output variations, component availability, and incentive-based demand response (IBDR costs. Specifically, the formulation of risk value after risk aversion is explicitly given, and it aims to provide network planners with the flexibility to conduct risk analysis. Thus, a final expansion plan can be selected according to individual risk preferences. Moreover, the economic value of IBDR is modeled and integrated into the cost objective. In addition, a relatively new multi-objective evolutionary algorithm called the MOEA/D is introduced and employed to find Pareto optimal solutions, and tradeoffs between overall cost and risk are provided. The proposed approach is numerically verified on the Garver’s six-bus, IEEE 24-bus RTS and Polish 2383-bus systems. Case study results demonstrate that the proposed approach can effectively reduce cost and hedge risk in relation to increasing wind power integration.

  9. Multi-objective analytical model for optimal sizing of stand-alone photovoltaic water pumping systems

    International Nuclear Information System (INIS)

    Olcan, Ceyda

    2015-01-01

    Highlights: • An analytical optimal sizing model is proposed for PV water pumping systems. • The objectives are chosen as deficiency of power supply and life-cycle costs. • The crop water requirements are estimated for a citrus tree yard in Antalya. • The optimal tilt angles are calculated for fixed, seasonal and monthly changes. • The sizing results showed the validity of the proposed analytical model. - Abstract: Stand-alone photovoltaic (PV) water pumping systems effectively use solar energy for irrigation purposes in remote areas. However, the random variability and unpredictability of solar energy hinder the penetration of PV implementations and complicate system design. An optimal sizing of these systems proves to be essential. This paper recommends a techno-economic optimization model to optimally determine the capacity of the components of a PV water pumping system using a water storage tank. The proposed model is developed with respect to the reliability and cost indicators, which are the deficiency of power supply probability and life-cycle costs, respectively. The novelty is that the proposed optimization model is analytically defined for the two objectives and is able to find a compromise solution. The sizing of a stand-alone PV water pumping system comprises a detailed analysis of crop water requirements and optimal tilt angles. Besides long solar radiation and temperature time series, accurate forecasts of water supply needs have to be determined. The calculation of the optimal tilt angle for yearly, seasonal and monthly frequencies results in higher system efficiency. It is, therefore, suggested to change the tilt angle regularly in order to maximize solar energy output. The proposed optimal sizing model incorporates all these improvements and can accomplish a comprehensive optimization of PV water pumping systems. A case study is conducted considering the irrigation of a citrus tree yard located in Antalya, Turkey.
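
    A toy version of the two-objective sizing trade-off is sketched below: candidate PV array and tank sizes are enumerated, a crude deficiency-of-power-supply probability (DPSP) and life-cycle cost (LCC) are computed, and the non-dominated designs are kept. All numbers and the simplified monthly water balance are assumptions for illustration, not the analytical model of the paper.

    ```python
    # Toy enumeration of the DPSP / life-cycle-cost trade-off for PV pumping sizing.
    # The water balance, cost figures and demand/sun series are illustrative assumptions.
    import itertools

    demand = [30, 35, 40, 45, 50, 55, 50, 45, 40, 35, 30, 28]              # m^3/month required
    sun    = [3.0, 3.5, 4.5, 5.5, 6.5, 7.0, 7.2, 6.8, 5.8, 4.5, 3.3, 2.8]  # kWh/m^2/day

    def evaluate(pv_kw, tank_m3, m3_per_kwh=0.3):
        stored, deficit, total = tank_m3 / 2.0, 0.0, 0.0
        for need, s in zip(demand, sun):
            pumped = pv_kw * s * 30 * m3_per_kwh        # rough m^3 pumped per month
            stored = min(tank_m3, stored + pumped)
            supplied = min(stored, need)
            stored -= supplied
            deficit += need - supplied
            total += need
        dpsp = deficit / total                          # deficiency of power supply probability
        lcc = 2500 * pv_kw + 150 * tank_m3              # toy life-cycle cost in $
        return dpsp, lcc

    designs = [(pv, tank, *evaluate(pv, tank))
               for pv, tank in itertools.product([0.5, 1.0, 1.5, 2.0], [5, 10, 20, 40])]

    def dominates(a, b):
        return a[2] <= b[2] and a[3] <= b[3] and (a[2] < b[2] or a[3] < b[3])

    pareto = [d for d in designs if not any(dominates(o, d) for o in designs)]
    for pv, tank, dpsp, lcc in sorted(pareto, key=lambda d: d[3]):
        print(f"PV {pv:.1f} kWp, tank {tank:>2} m^3: DPSP={dpsp:.2f}, LCC={lcc:.0f} $")
    ```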

  10. An object-oriented framework for application development and integration in hydroinformatics

    Energy Technology Data Exchange (ETDEWEB)

    Alfredsen, Knut Tore

    1999-03-01

    Computer-based simulation systems are commonly used as tools for planning and management of water resources. The scope of such tools is growing out of traditional hydrologic/hydraulic modelling, and the need to integrate financial, ecological and other conditions has increased the complexity of the modelling systems. The field integrating hydrology and hydraulics with these socio-technical aspects is commonly referred to as hydroinformatics. This report describes an object-oriented approach to building a platform for the development and integration of modelling systems to form hydroinformatics applications. Object-oriented analysis, design and implementation methods have gained momentum over the past decade as the chosen tool in many application areas. The component-based development method offers advantages in the form of a more integrated modelling process that is truer to the real world, the opportunity to develop robust and reusable components, and simplified maintenance and extendibility through better modularization of the software. In a networked future the object-oriented methods also offer advantages in building distributed systems. Object-orientation has many levels of application in a hydroinformatics system, from handling parts like data storage or user interfaces to being the method used for the complete development. Some examples of using object-oriented methods in the development of hydroinformatics systems are discussed in this report. The development platform is built as an application framework with a special focus on extensibility and reuse of components. The framework consists of five sub-parts: structural components describing the real world entities, computational elements for implementation of process models and linkage to external modelling systems, data handling classes, simulation control units, and a set of utility classes. Extensibility is maintained either through the use of inheritance from abstract classes defining the
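
    The extensibility mechanism mentioned at the end of the abstract, inheritance from abstract classes, can be illustrated with a minimal sketch; the class names below are invented for illustration and do not reproduce the framework's actual interfaces.

    ```python
    # Minimal sketch of extension-by-inheritance in a component-based modelling framework.
    # Class names and the linear-reservoir process model are illustrative assumptions.
    from abc import ABC, abstractmethod

    class ComputationalElement(ABC):
        """Abstract base for process models plugged into a simulation control unit."""

        @abstractmethod
        def step(self, inflow_m3s: float, dt_s: float) -> float:
            """Advance the process one time step and return the outflow (m^3/s)."""

    class LinearReservoir(ComputationalElement):
        """A user-supplied process model added without touching the framework core."""

        def __init__(self, storage_m3: float = 0.0, k_per_s: float = 1e-5):
            self.storage = storage_m3
            self.k = k_per_s

        def step(self, inflow_m3s: float, dt_s: float) -> float:
            self.storage += inflow_m3s * dt_s
            outflow = self.k * self.storage
            self.storage -= outflow * dt_s
            return outflow

    element = LinearReservoir(storage_m3=1e6)
    print(f"outflow after one hour: {element.step(5.0, 3600):.2f} m^3/s")
    ```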

  11. Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.

    Science.gov (United States)

    Wang, Panqu; Gauthier, Isabel; Cottrell, Garrison

    2016-04-01

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain general ability v as the available computational resources (number of hidden units) in the mapping from input to label, and experience as the frequency of individual exemplars of an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that

  12. RF control at SSCL - an object oriented design approach

    International Nuclear Information System (INIS)

    Dohan, D.A.; Osberg, E.; Biggs, R.; Bossom, J.; Chillara, K.; Richter, R.; Wade, D.

    1994-01-01

    The Superconducting Super Collider (SSC) in Texas, the construction of which was stopped in 1994, would have represented a major challenge in accelerator research and development. This paper addresses the issues encountered in the parallel design and construction of the control systems for the RF equipment of the five accelerators comprising the SSC. An extensive analysis of the components of the RF control systems has been undertaken, based upon the Shlaer-Mellor object-oriented analysis and design (OOA/OOD) methodology. The RF subsystem components such as amplifiers, tubes, power supplies, PID loops, etc. were analyzed to produce OOA information, behavior and process models. Using these models, OOD was iteratively applied to develop a generic RF control system design. This paper describes the results of this analysis and the development of 'bridges' between the analysis objects and the EPICS-based software and underlying VME-based hardware architectures. The application of this approach to several of the SSCL RF control systems is discussed. ((orig.))

  13. Critical Evaluation of the New Headway Advanced and the ILI Advanced Series: A Comparison of Curricular Components and CLT Objectives Based on ACTFL

    Directory of Open Access Journals (Sweden)

    Esmail Zare-Behtash

    2017-07-01

    Full Text Available The critical evaluation of systematic planning, development and review practices of instructional materials intends to improve the quality of teaching and learning. This study investigates the objectives of communicative language teaching and the curricular components of two important textbooks which are widely studied in Iran: the New Headway Advanced Series (2015) and the Iran Language Institute (ILI) Advanced 1 (2008). The evaluation is done from two perspectives: first, the interpretation of communicative language teaching objectives, and second, the curricular components of the books. To this aim, a checklist of the 5 Cs standards and seven curricular components developed by the American Council on the Teaching of Foreign Languages (ACTFL) was employed. The evaluation reveals that the New Headway Advanced series is preferable to the ILI Advanced 1 due to its design and organization, authenticity, attractiveness, functionality, practicality and the other qualities mentioned above regarding communication, cultures, connections, comparisons, and communities. The evaluation based on the seven curricular components (language systems, communication strategies, cultural knowledge, learning strategies, content from other subject areas, critical thinking skills, technology and other features) indicates that the ILI textbook meets low standards and is not well developed across these components. The ILI textbook is highly reading- and writing-oriented and not appropriate for transactional and interactional learning purposes. This study acquaints language teachers and learners with the more desirable and cogent book.

  14. Object-oriented versus logical conventional implementation of a MMIIS

    Science.gov (United States)

    Forte, Anne-Marie; Bernadet, Maurice; Lavaire, Franck; Bizais, Yves J.

    1992-06-01

    The main components of a multimodality medical image interpretation system (MMIIS) are: (1) a user interface, (2) an image database, storing image objects along with their descriptions, (3) expert systems (ES) in various medical imaging domains and particularly in image processing (IP), and (4) an IP actor, a toolbox of standard IP procedures. To implement such a system, we are building two prototypes: one with an object-oriented (OO) expert system and one with a classical logical expert system. In these two different approaches, we have to model the medical imaging objects and represent them. Both approaches use an OO data model, even if its implementation differs in: (1) the characteristics of each ES in managing knowledge and inferences (uncertainty, non-monotonicity, backward and forward chaining, meta-knowledge), (2) the environment used to implement the different experts and to activate IP procedures, and (3) the communication means between the experts and the other components. In the OO approach, an ES based on Smalltalk is used, and in the conventional one an ad hoc Prolog ES was built. Our goal is to compare their advantages and disadvantages in implementing a MMIIS.

  15. The Aalborg Model and management by objectives and resources

    DEFF Research Database (Denmark)

    Qvist, Palle; Spliid, Claus Monrad

    2010-01-01

    Whether the Aalborg Model is successful has never been the subject of a scientific study. An educational program in an HEI (Higher Education Institution) can be seen and understood as a system managed by objectives (MBO) within a given resource frame and based on an “agreement” between the student and the study board. ... The student must achieve the objectives decided by the study board, and that achievement is then documented with an exam. The study board supports the student with resources which help them to fulfill the objectives. When the resources are divided into human, material and methodological resources...

  16. A fast mass spring model solver for high-resolution elastic objects

    Science.gov (United States)

    Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian

    2017-03-01

    Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages, using the mean value coordinates method, to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, and has great potential for applications in computer animation.
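
    A compact conjugate gradient routine of the kind that replaces the Cholesky solve in the fast mass spring iteration is sketched below; it solves a generic symmetric positive definite system and is not tied to the authors' GPU implementation.

    ```python
    # Conjugate gradient for a symmetric positive definite system A x = b,
    # the kind of inner solve used in place of a Cholesky factorization.
    import numpy as np

    def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iter=1000):
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x                 # initial residual
        p = r.copy()                  # initial search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    # Small SPD test system (stands in for the mass-spring system matrix).
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M @ M.T + 50 * np.eye(50)
    b = rng.standard_normal(50)
    x = conjugate_gradient(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))
    ```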

  17. A Deep-Structured Conditional Random Field Model for Object Silhouette Tracking.

    Directory of Open Access Journals (Sweden)

    Mohammad Javad Shafiee

    Full Text Available In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that can change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering.

  18. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling is presented that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  19. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In some survival analyses of medical studies, there are often long-term survivors who can be considered as permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present, as often happens in practice, understanding covariate effects on the noncured probability and hazard rate is of equal importance. The existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of the hazard rate. Estimation is carried out by an expectation-maximization algorithm maximizing a penalized likelihood. For inferential purposes, we apply the Louis formula to obtain point-wise confidence intervals for the noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns for this study. © 2011, The International Biometric Society.

  20. Dynamic information processing states revealed through neurocognitive models of object semantics

    Science.gov (United States)

    Clarke, Alex

    2015-01-01

    Recognising objects relies on highly dynamic, interactive brain networks to process multiple aspects of object information. To fully understand how different forms of information about objects are represented and processed in the brain requires a neurocognitive account of visual object recognition that combines a detailed cognitive model of semantic knowledge with a neurobiological model of visual object processing. Here we ask how specific cognitive factors are instantiated in our mental processes and how they dynamically evolve over time. We suggest that coarse semantic information, based on generic shared semantic knowledge, is rapidly extracted from visual inputs and is sufficient to drive rapid category decisions. Subsequent recurrent neural activity between the anterior temporal lobe and posterior fusiform supports the formation of object-specific semantic representations – a conjunctive process primarily driven by the perirhinal cortex. These object-specific representations require the integration of shared and distinguishing object properties and support the unique recognition of objects. We conclude that a valuable way of understanding the cognitive activity of the brain is through testing the relationship between specific cognitive measures and dynamic neural activity. This kind of approach allows us to move towards uncovering the information processing states of the brain and how they evolve over time. PMID:25745632

  1. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    Science.gov (United States)

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-05

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model that consists of a hot-spot ignition term, a low-pressure slow burning term and a high-pressure fast reaction term for shock initiation of multi-component Plastic Bonded Explosives (PBX). Specifically, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term as well as its reaction rate is obtained through a "mixing rule" over the explosive components; new expressions for both the low-pressure slow burning term and the high-pressure fast reaction term are also obtained by establishing the relationships between the reaction rate of the multi-component PBX explosive and those of its explosive components, based on the low-pressure slow burning term and the high-pressure fast reaction term of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to simulate numerically the shock initiation process of the PBXC03 and PBXC10 multi-component PBX explosives, and the numerical results for the pressure histories at different Lagrangian locations in the explosive are found to be in good agreement with previous experimental data. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restrictive assumption that all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems, which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In previous models, there is a restricting assumption that all components of a subsystem must be homogeneous. • The presented model allows the subsystems’ components to be non-homogeneous where required. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
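
    The structural change the model allows, mixing component types inside a subsystem, does not change how the system reliability itself is evaluated. A short sketch of that evaluation for a series arrangement of parallel subsystems with non-homogeneous components is given below; the component reliabilities are made-up numbers.

    ```python
    # System reliability of a series-parallel structure with non-homogeneous components:
    # each subsystem works if at least one of its (possibly different) components works,
    # and the system works only if every subsystem works.  Numbers are illustrative.
    from math import prod

    def subsystem_reliability(component_reliabilities):
        return 1.0 - prod(1.0 - r for r in component_reliabilities)

    def system_reliability(subsystems):
        return prod(subsystem_reliability(sub) for sub in subsystems)

    # Three subsystems; the second mixes two different component types (0.85 and 0.70).
    design = [[0.90, 0.90],          # homogeneous redundancy
              [0.85, 0.70, 0.70],    # non-homogeneous redundancy
              [0.95]]                # single component
    print(f"system reliability: {system_reliability(design):.4f}")
    ```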

  3. Application of multiple objective models to water resources planning and management

    International Nuclear Information System (INIS)

    North, R.M.

    1993-01-01

    Over the past 30 years, we have seen the birth and growth of multiple objective analysis from an idea without tools to one with useful applications. Models have been developed and applications have been researched to address the multiple purposes and objectives inherent in the development and management of water resources. A practical approach to multiple objective modelling incorporates macroeconomic-based policies and expectations in order to optimize the results from both engineering (structural) and management (non-structural) alternatives, while taking into account the economic and environmental trade-offs. (author). 27 refs, 4 figs, 3 tabs

  4. Genetic and Psychosocial Predictors of Aggression: Variable Selection and Model Building With Component-Wise Gradient Boosting.

    Science.gov (United States)

    Suchting, Robert; Gowin, Joshua L; Green, Charles E; Walss-Bass, Consuelo; Lane, Scott D

    2018-01-01

    Rationale: Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking prediction of aggressive behavior. Objectives: The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli, but have not been tested as predictors of aggressive behavior in adults. Methods: The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to: (a) select variables from an initial set of 20 to build a model of trait aggression; and then (b) reduce that model to maximize parsimony and generalizability. Results: From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model Buss-Perry Aggression Questionnaire (BPAQ) total score, with R² = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model approximated the initial eight-factor model at 99.4% of R². Conclusions: Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work in aggression; specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits generality of results and calls for
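
    Component-wise gradient boosting with simple least-squares base learners can be written in a few lines: at every iteration each predictor is fit to the current residuals on its own, and only the best-fitting component is updated. The sketch below uses synthetic data and squared-error loss; it is a generic illustration, not the modelling pipeline of the study.

    ```python
    # Component-wise L2 gradient boosting: at each step, fit every single predictor
    # to the residuals and update only the best one.  Synthetic data for illustration.
    import numpy as np

    def componentwise_l2_boost(X, y, n_steps=200, nu=0.1):
        n, p = X.shape
        coef, intercept = np.zeros(p), y.mean()
        resid = y - intercept
        for _ in range(n_steps):
            # Least-squares fit of each predictor alone to the current residuals
            betas = X.T @ resid / (X ** 2).sum(axis=0)
            losses = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
            j = np.argmin(losses)              # best-fitting component
            coef[j] += nu * betas[j]           # shrunken update of that component only
            resid = y - intercept - X @ coef
        return intercept, coef

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 20))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(100)
    intercept, coef = componentwise_l2_boost(X, y)
    selected = np.nonzero(np.abs(coef) > 1e-8)[0]
    print("selected predictors:", selected, "coefficients:", np.round(coef[selected], 2))
    ```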

  5. Wind field and trajectory models for tornado-propelled objects

    International Nuclear Information System (INIS)

    Anon

    1978-01-01

    This report contains the results of the second phase of a research program whose objective is the development of a mathematical model to predict the trajectory of tornado-borne objects postulated to be in the vicinity of nuclear power plants. An improved tornado wind field model satisfies the no-slip ground boundary condition of fluid mechanics and includes the functional dependence of eddy viscosity on altitude. Sub-scale wind tunnel data are obtained for all of the missiles currently specified for nuclear plant design. Confirmatory full-scale data are obtained for a 12-inch pipe and an automobile. The original six-degree-of-freedom trajectory model is modified to include the improved wind field and increased capability as to the body shapes and inertial characteristics that can be handled. The improved trajectory model is used to calculate maximum credible speeds, which for all of the heavy missiles are considerably less than those currently specified for design. Equivalent coefficients for use in three-degree-of-freedom models are developed, and the sensitivity of range and speed to various trajectory parameters for the 12-inch diameter pipe is examined.
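
    A three-degree-of-freedom point-mass trajectory of the general kind used in such simplified models can be sketched with a drag force proportional to the wind-relative velocity; the swirling wind field, drag coefficient and missile properties below are invented for illustration and do not reproduce the report's wind field model or coefficients.

    ```python
    # Toy 3-DOF trajectory of a tornado-borne missile: point mass with quadratic drag
    # relative to a prescribed wind field.  Wind field and parameters are illustrative.
    import numpy as np

    def wind(pos):
        """Very crude swirling wind field (m/s) around a vertical axis at the origin."""
        x, y, z = pos
        r = np.hypot(x, y) + 1e-6
        v_t = 80.0 * min(r / 50.0, 50.0 / r)          # tangential speed, peaks at r = 50 m
        return np.array([-v_t * y / r, v_t * x / r, 0.0])

    def fly(pos, vel, mass=340.0, cd_a=0.35, dt=0.01, t_max=20.0):
        """Integrate m dv/dt = m g + 0.5 rho cd_a |w - v| (w - v) with forward Euler."""
        rho, g = 1.2, np.array([0.0, 0.0, -9.81])
        traj = [pos.copy()]
        for _ in range(int(t_max / dt)):
            rel = wind(pos) - vel                      # wind-relative velocity
            acc = g + 0.5 * rho * cd_a * np.linalg.norm(rel) * rel / mass
            vel = vel + acc * dt
            pos = pos + vel * dt
            traj.append(pos.copy())
            if pos[2] <= 0.0:                          # hit the ground
                break
        return np.array(traj), np.linalg.norm(vel)

    traj, speed = fly(np.array([60.0, 0.0, 10.0]), np.zeros(3))
    print(f"range: {np.linalg.norm(traj[-1][:2] - traj[0][:2]):.1f} m, "
          f"impact speed: {speed:.1f} m/s")
    ```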

  6. Modeling the Object-Oriented Software Process: OPEN and the Unified Process

    NARCIS (Netherlands)

    van den Berg, Klaas; Aksit, Mehmet; van den Broek, P.M.

    A short introduction to software process modeling is presented, particularly object-oriented modeling. Two major industrial process models are discussed: the OPEN model and the Unified Process model. In more detail, the quality assurance in the Unified Process tool (formerly called Objectory) is reviewed.

  7. Solving a bi-objective mathematical programming model for bloodmobiles location routing problem

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2017-01-01

    Full Text Available Perishability of platelets, uncertainty of donors’ arrival and conflicting views in the platelet supply chain have made platelet supply chain planning a problematic issue. In this paper, a mobile blood collection system for platelet production is investigated. Two mathematical models are presented to cover the bloodmobile collection planning problem. The first model is a multi-objective fuzzy mathematical program in which the bloodmobile locations are determined with the aim of maximizing the potential amount of blood collection and minimizing the operational cost. The second model is a vehicle routing problem with time windows which addresses the shuttle routing problem. To tackle the first model, it is reformulated as a crisp multi-objective linear programming model and then solved through a fuzzy multi-objective programming approach. Several sensitivity analyses are conducted on important parameters to demonstrate the applicability of the proposed model. The proposed model is then solved by using a tailored Simulated Annealing (SA) algorithm. The numerical results demonstrate the promising efficiency of the proposed solution method.
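
    A generic simulated annealing skeleton of the type used to tackle such routing models is sketched below on a small travelling-salesman-style instance; the cooling schedule, neighbourhood move and toy coordinates are assumptions, not the tailored algorithm of the paper.

    ```python
    # Generic simulated annealing skeleton on a toy routing instance.
    # Cooling schedule, 2-opt neighbourhood move and data are illustrative assumptions.
    import math, random

    random.seed(3)
    points = [(random.random() * 100, random.random() * 100) for _ in range(15)]

    def tour_length(order):
        return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def anneal(n_iter=20000, t0=100.0, alpha=0.9995):
        current = list(range(len(points)))
        best, best_cost = current[:], tour_length(current)
        cost, temp = best_cost, t0
        for _ in range(n_iter):
            i, j = sorted(random.sample(range(len(points)), 2))
            cand = current[:i] + current[i:j + 1][::-1] + current[j + 1:]   # 2-opt move
            cand_cost = tour_length(cand)
            # Accept improvements always, worse moves with a temperature-dependent probability
            if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / temp):
                current, cost = cand, cand_cost
                if cost < best_cost:
                    best, best_cost = current[:], cost
            temp *= alpha
        return best, best_cost

    route, length = anneal()
    print(f"best route length: {length:.1f}")
    ```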

  8. A model-based software development methodology for high-end automotive components

    NARCIS (Netherlands)

    Ravanan, Mahmoud

    2014-01-01

    This report provides a model-based software development methodology for high-end automotive components. The V-model is used as a process model throughout the development of the software platform. It offers a framework that simplifies the relation between requirements, design, implementation,

  9. A new Object-oriented Interface to MDSplus

    Energy Technology Data Exchange (ETDEWEB)

    Manduchi, G. [Associazione EURATOM-ENEA sulla Fusione, Padua (Italy); Fredian, T. [MIT Plasma Science and Fusion Center, Littleton, NH (United States); Stillerman, J. [MIT Plasma Science and Fusion Center, Cambridge, MA (United States)

    2009-07-01

    The MDSplus data acquisition and management software package is widely used in the international fusion research community. Its core Application Programming Interface (API) has remained unchanged since the system was ported to a multi-platform environment in the late nineties. Originally written in C, the MDSplus API did not fully exploit several object-oriented features of the system that were included in the original architecture. In 2008 a project was initiated by the authors to provide the core MDSplus functionality with an object-oriented API. A generic, language-independent class structure has been defined and modeled in UML. Based on this description, the new API has been implemented so far in C++, Python, and Java. Fortran 90 and Matlab interfaces are foreseen in 2009. The new API provides: - Data Type Management, allowing the full exploitation of the rich set of data types defined in MDSplus by means of composition of data object instances; - Pulse file access, for writing and reading data objects as well as managing database component properties. The definition of a language-independent class organization allows the MDSplus Object API to be consistent across all the object-oriented languages that will be supported. Regardless of the language used, this approach provides a much more natural programming interaction with MDSplus. Moreover, the UML graphical definition proved an effective and unambiguous way of documenting the system components. This document is composed of an abstract followed by the presentation transparencies. (authors)

  10. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    International Nuclear Information System (INIS)

    Yu, P.

    2008-01-01

    More recently, an advanced synchrotron radiation-based bioanalytical technique (SRFTIRM) has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (which is millions of times brighter than sunlight), is capable of exploring biomaterials at the molecular and cellular levels. However, with the synchrotron SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques: (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multi-component modeling methods: (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to create molecular spectral classifications by including not just one intensity or frequency point of a molecular spectrum, but by utilizing the entire spectral information. Gaussian and Lorentzian modeling techniques are able to quantify spectral component peaks of molecular structure, functional groups and biopolymers. By application of these multivariate statistical methods and Gaussian and Lorentzian modeling, inherent molecular structures, functional group and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.
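
    The two multivariate steps, principal component analysis followed by agglomerative hierarchical clustering of the scores, are sketched below on synthetic "spectra"; the data generation and the scikit-learn calls stand in for the actual SR-FTIRM spectra and software used in the study.

    ```python
    # PCA of a spectra matrix followed by agglomerative hierarchical clustering of the
    # principal-component scores.  Synthetic Gaussian bands stand in for real spectra.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import AgglomerativeClustering

    rng = np.random.default_rng(42)
    wavenumbers = np.linspace(900, 1800, 300)

    def band(center, width, height):
        return height * np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

    # Two tissue types that differ in the relative height of two absorption bands.
    group_a = np.array([band(1650, 25, 1.0) + band(1550, 20, 0.4)
                        + 0.02 * rng.standard_normal(300) for _ in range(20)])
    group_b = np.array([band(1650, 25, 0.5) + band(1550, 20, 0.9)
                        + 0.02 * rng.standard_normal(300) for _ in range(20)])
    spectra = np.vstack([group_a, group_b])

    scores = PCA(n_components=2).fit_transform(spectra)     # uses the whole spectrum
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(scores)
    print("cluster sizes:", np.bincount(labels))
    ```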

  11. Emission line spectra of Herbig-Haro objects

    International Nuclear Information System (INIS)

    Brugel, E.W.; Boehm, K.H.; Mannery, E.

    1981-01-01

    Spectrophotometric data have been obtained for 12 Herbig-Haro nebulae with the multichannel spectrometer on the Mt. Palomar 5.08 m telescope and with the image intensified dissector scanner on the Kitt Peak 2.13 m telescope. Optical emission line fluxes are presented for the following Herbig-Haro objects: H-H 1 (NW), H-H 1 (SE), H-H 2A, H-H 2G, H-H 2H, H-H 3, H-H 7, H-H 11, H-H 24A, H-H 30, H-H 32, and H-H 40. Values for the electron temperature and electron density have been determined for 10 of these condensations. Significant inhomogeneities in the line-forming regions of these H-H objects are indicated by the derived N_e-T_e diagrams. Empirical two-component density models have been constructed to interpret the emission line spectra of the five brightest condensations. Slightly less satisfactory homogeneous models are presented for the remaining five objects

  12. Modeling the Object-Oriented Software Process: OPEN and the Unified Process

    OpenAIRE

    van den Berg, Klaas; Aksit, Mehmet; van den Broek, P.M.

    1999-01-01

    A short introduction to software process modeling is presented, particularly object-oriented modeling. Two major industrial process models are discussed: the OPEN model and the Unified Process model. In more detail, the quality assurance in the Unified Process tool (formerly called Objectory) is reviewed.

  13. A Multi-Objective Trade-Off Model in Sustainable Construction Projects

    Directory of Open Access Journals (Sweden)

    Guangdong Wu

    2017-10-01

    Full Text Available Based on the consideration of the relative importance of sustainability-related objectives and the inherent nature of sustainable construction projects, this study proposes that the contractor can balance the levels of effort and resources used to improve overall project sustainability. A multi-objective trade-off model using game theory was established and verified through simulation and a numerical example under a moral hazard situation. Results indicate that the contractor's effort levels on sustainability-related objectives are positively related to the outcome coefficient and negatively related to the effort cost coefficients of the relevant objectives. High levels of relative importance of sustainability-related objectives contribute to high levels of effort by the contractor. With the variation in effort levels and the coefficient of benefit allocation, the project net benefit increases before declining. The function of project benefit has a marked peak value, with an inverted “U” shape. An equilibrium always exists for the given relative importance and effort cost coefficients of the sustainability-related objectives. Under this condition, the owner may offer the contractor a less intense incentive and motivate the contractor to arrange input resources reasonably. The coefficient of benefit allocation is affected by contractor characteristic factors and project characteristic factors. The owner should balance these two types of factors and select the most appropriate incentive mechanism to improve the project benefit. Meanwhile, the contractor can balance the relative importance of the objectives and arrange the appropriate levels of effort and resources to achieve a sustainability-related objective. Very few studies have emphasized the effects of the relative importance of sustainability-related objectives on the benefits of sustainable construction projects. This study therefore builds a multi-objective trade-off

  14. Towards a semantic learning model fostering learning object reusability

    OpenAIRE

    Fernandes , Emmanuel; Madhour , Hend; Wentland Forte , Maia; Miniaoui , Sami

    2005-01-01

    In this paper we try to propose a domain model covering both authors' and learners' needs concerning learning object reuse. First of all, we present four key criteria for an efficient authoring tool: adaptive level of granularity, flexibility, integration and interoperability. Secondly, we introduce and describe our six-level Semantic Learning Model (SLM), designed to facilitate multi-level reuse of learning materials and search by defining a multi-layer model for metadata. Finally, after mapping ...

  15. Computational needs for modelling accelerator components

    International Nuclear Information System (INIS)

    Hanerfeld, H.

    1985-06-01

    The particle-in-cell code MASK is being used to model several different electron accelerator components. These studies are being used both to design new devices and to understand particle behavior within existing structures. Studies include the injector for the Stanford Linear Collider and the 50 megawatt klystron currently being built at SLAC. MASK is a 2D electromagnetic code which is being used by SLAC both on our own IBM 3081 and on the CRAY X-MP at the NMFECC. Our experience with running MASK illustrates the need for supercomputers to continue work of the kind described. 3 refs., 2 figs.

  16. 2D Modeling and Classification of Extended Objects in a Network of HRR Radars

    NARCIS (Netherlands)

    Fasoula, A.

    2011-01-01

    In this thesis, the modeling of extended objects with low-dimensional representations of their 2D geometry is addressed. The ultimate objective is the classification of the objects using libraries of such compact 2D object models that are much smaller than in the state-of-the-art classification

  17. ROLE OF UML SEQUENCE DIAGRAM CONSTRUCTS IN OBJECT LIFECYCLE CONCEPT

    Directory of Open Access Journals (Sweden)

    Miroslav Grgec

    2007-06-01

    Full Text Available When modeling systems and using UML concepts, a real system can be viewed in several ways. The RUP (Rational Unified Process) defines the "4 + 1 view": 1. Logical view (class diagram (CD), object diagram (OD), sequence diagram (SD), collaboration diagram (COD), state chart diagram (SCD), activity diagram (AD)), 2. Process view (use case diagram, CD, OD, SD, COD, SCD, AD), 3. Development view (package diagram, component diagram), 4. Physical view (deployment diagram), and 5. Use case view (use case diagram, OD, SD, COD, SCD, AD), which combines the four mentioned above. With sequence diagram constructs we describe object behavior within the scope of one use case and the interactions between objects. Each object in the system goes through a so-called lifecycle (create, supplement the object with data, use the object, decommission the object). The concept of the object lifecycle is used to understand and formalize the behavior of objects from creation to deletion. Using sequence diagram concepts, our paper describes how the interactions between objects are modeled through the lifeline of each of them, and their importance in software development.

  18. A multi-objective programming model for assessment the GHG emissions in MSW management

    Energy Technology Data Exchange (ETDEWEB)

    Mavrotas, George, E-mail: mavrotas@chemeng.ntua.gr [National Technical University of Athens, Iroon Polytechniou 9, Zografou, Athens, 15780 (Greece); Skoulaxinou, Sotiria [EPEM SA, 141 B Acharnon Str., Athens, 10446 (Greece); Gakis, Nikos [FACETS SA, Agiou Isidorou Str., Athens, 11471 (Greece); Katsouros, Vassilis [Athena Research and Innovation Center, Artemidos 6 and Epidavrou Str., Maroussi, 15125 (Greece); Georgopoulou, Elena [National Observatory of Athens, Thisio, Athens, 11810 (Greece)

    2013-09-15

    Highlights: • The multi-objective multi-period optimization model. • The solution approach for the generation of the Pareto front with mathematical programming. • The very detailed description of the model (decision variables, parameters, equations). • The use of the IPCC 2006 guidelines for landfill emissions (first order decay model) in the mathematical programming formulation. - Abstract: In this study a multi-objective mathematical programming model is developed for taking GHG emissions into account in Municipal Solid Waste (MSW) management. Mathematical programming models are often used for structure, design and operational optimization of various systems (energy, supply chain, processes, etc.). Over the last twenty years they have been used more and more often in Municipal Solid Waste (MSW) management in order to provide optimal solutions, with the cost objective being the usual driver of the optimization. In our work we consider GHG emissions as an additional criterion, aiming at a multi-objective approach. The Pareto front (cost vs. GHG emissions) of the system is generated using an appropriate multi-objective method. This information is essential to the decision maker, who can explore the trade-offs in the Pareto curve and select the most preferred among the Pareto optimal solutions. In the present work a detailed multi-objective, multi-period mathematical programming model is developed in order to describe the waste management problem. Apart from the bi-objective approach, the major innovations of the model are (1) the detailed modeling considering 34 materials and 42 technologies, (2) the detailed calculation of the energy content of the various streams based on the detailed material balances, and (3) the incorporation of the IPCC guidelines for the CH4 generated in the landfills (first order decay model). The equations of the model are described in full detail. Finally, the whole approach is illustrated with a case study referring to the

  19. A multi-objective programming model for assessment the GHG emissions in MSW management

    International Nuclear Information System (INIS)

    Mavrotas, George; Skoulaxinou, Sotiria; Gakis, Nikos; Katsouros, Vassilis; Georgopoulou, Elena

    2013-01-01

    Highlights: • The multi-objective multi-period optimization model. • The solution approach for the generation of the Pareto front with mathematical programming. • The very detailed description of the model (decision variables, parameters, equations). • The use of the IPCC 2006 guidelines for landfill emissions (first order decay model) in the mathematical programming formulation. - Abstract: In this study a multi-objective mathematical programming model is developed for taking GHG emissions into account in Municipal Solid Waste (MSW) management. Mathematical programming models are often used for structure, design and operational optimization of various systems (energy, supply chain, processes, etc.). Over the last twenty years they have been used more and more often in Municipal Solid Waste (MSW) management in order to provide optimal solutions, with the cost objective being the usual driver of the optimization. In our work we consider GHG emissions as an additional criterion, aiming at a multi-objective approach. The Pareto front (cost vs. GHG emissions) of the system is generated using an appropriate multi-objective method. This information is essential to the decision maker, who can explore the trade-offs in the Pareto curve and select the most preferred among the Pareto optimal solutions. In the present work a detailed multi-objective, multi-period mathematical programming model is developed in order to describe the waste management problem. Apart from the bi-objective approach, the major innovations of the model are (1) the detailed modeling considering 34 materials and 42 technologies, (2) the detailed calculation of the energy content of the various streams based on the detailed material balances, and (3) the incorporation of the IPCC guidelines for the CH4 generated in the landfills (first order decay model). The equations of the model are described in full detail. Finally, the whole approach is illustrated with a case study referring to the application

  20. Combining satellite data and appropriate objective functions for improved spatial pattern performance of a distributed hydrologic model

    Science.gov (United States)

    Demirel, Mehmet C.; Mai, Juliane; Mendiguren, Gorka; Koch, Julian; Samaniego, Luis; Stisen, Simon

    2018-02-01

    Satellite-based earth observations offer great opportunities to improve spatial model predictions by means of spatial-pattern-oriented model evaluations. In this study, observed spatial patterns of actual evapotranspiration (AET) are utilised for spatial model calibration tailored to target the pattern performance of the model. The proposed calibration framework combines temporally aggregated observed spatial patterns with a new spatial performance metric and a flexible spatial parameterisation scheme. The mesoscale hydrologic model (mHM) is used to simulate streamflow and AET and has been selected due to its soil parameter distribution approach based on pedo-transfer functions and its built-in multi-scale parameter regionalisation. In addition, two new spatial parameter distribution options have been incorporated in the model in order to increase the flexibility of the root fraction coefficient and potential evapotranspiration correction parameterisations, based on soil type and vegetation density. These parameterisations are utilised as they are most relevant for the simulated AET patterns from the hydrologic model. Due to the fundamental challenges encountered when evaluating spatial pattern performance using standard metrics, we developed a simple but highly discriminative spatial metric, comprising three easily interpretable components that measure the co-location, variation and distribution of the spatial data. The study shows that with flexible spatial model parameterisation used in combination with the appropriate objective functions, the simulated spatial patterns of actual evapotranspiration become substantially more similar to the satellite-based estimates. Overall, 26 parameters are identified for calibration through a sequential screening approach based on a combination of streamflow and spatial pattern metrics. The robustness of the calibrations is tested using an ensemble of nine calibrations based on different seed numbers using the shuffled complex
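
    The three components of such a spatial metric can be illustrated with a simple composite score that combines map correlation (co-location), a ratio of coefficients of variation (variation) and histogram overlap of the standardised values (distribution); the exact combination rule and the synthetic maps below are assumptions, not the metric as defined in the paper.

    ```python
    # Toy composite spatial-pattern score from three components: co-location (Pearson r),
    # variation (ratio of coefficients of variation) and distribution (histogram overlap).
    # The combination rule and the synthetic test maps are illustrative assumptions.
    import numpy as np

    def spatial_pattern_score(obs, sim, bins=20):
        obs, sim = obs.ravel(), sim.ravel()
        a = np.corrcoef(obs, sim)[0, 1]                                  # co-location
        b = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))  # variation
        h_obs, edges = np.histogram((obs - obs.mean()) / obs.std(), bins=bins, density=True)
        h_sim, _ = np.histogram((sim - sim.mean()) / sim.std(), bins=edges, density=True)
        c = np.sum(np.minimum(h_obs, h_sim) * np.diff(edges))            # distribution overlap
        # Distance from the ideal point (1, 1, 1), mapped so that 1 means a perfect match.
        return 1.0 - np.sqrt((a - 1) ** 2 + (b - 1) ** 2 + (c - 1) ** 2)

    rng = np.random.default_rng(7)
    observed = rng.gamma(shape=4.0, scale=0.5, size=(60, 80))            # toy AET map
    simulated = 0.8 * observed + 0.2 * rng.gamma(4.0, 0.5, size=(60, 80))
    print(f"pattern score: {spatial_pattern_score(observed, simulated):.3f}")
    ```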

  1. A new object-oriented interface to MDSplus

    International Nuclear Information System (INIS)

    Manduchi, G.; Fredian, T.; Stillerman, J.

    2010-01-01

    The MDSplus data acquisition and management software package is widely used in the international fusion research community. Its core Application Programming Interface (API) has remained unchanged since the system was ported to a multiplatform environment in the late nineties. Originally written in C, the MDSplus API did not fully exploit several object-oriented features of the system that were included in the original architecture. In 2008 a project was initiated by the authors to provide the core MDSplus functionality with an object-oriented API. A generic, language-independent class structure has been defined and modeled in the Unified Modeling Language (UML). Based on this description the new API has been implemented so far in C++, Python, and Java. The new API provides data type management, allowing the full exploitation of the rich set of data types defined in MDSplus by means of composition of data object instances, and pulse file access, for writing and reading data objects as well as managing database component properties. The definition of a language-independent class organization allows the MDSplus object-oriented API to be consistent across all the object-oriented languages that will be supported. Regardless of the language used, this approach provides a much more natural programming interaction with MDSplus.

  2. A new object-oriented interface to MDSplus

    Energy Technology Data Exchange (ETDEWEB)

    Manduchi, G., E-mail: gabriele.manduchi@igi.cnr.i [Consorzio RFX, Euratom-ENEA Association, Corso Stati Uniti 4, Padova 35127 (Italy); Fredian, T.; Stillerman, J. [Massachusetts Institute of Technology, 175 Albany Street, Cambridge, MA 02139 (United States)

    2010-07-15

    The MDSplus data acquisition and management software package is widely used in the international fusion research community. Its core Application Programming Interface (API) has remained unchanged since the system was ported to a multiplatform environment in the late nineties. Originally written in C, the MDSplus API did not fully exploit several object-oriented features of the system that were included in the original architecture. In 2008 a project was initiated by the authors to provide the core MDSplus functionality with an object-oriented API. A generic, language-independent class structure has been defined and modeled in the Unified Modeling Language (UML). Based on this description the new API has been implemented so far in C++, Python, and Java. The new API provides data type management, allowing the full exploitation of the rich set of data types defined in MDSplus by means of composition of data object instances, and pulse file access, for writing and reading data objects as well as managing database component properties. The definition of a language-independent class organization allows the MDSplus object-oriented API to be consistent across all the object-oriented languages that will be supported. Regardless of the language used, this approach provides a much more natural programming interaction with MDSplus.

  3. Traceable components of terrestrial carbon storage capacity in biogeochemical models.

    Science.gov (United States)

    Xia, Jianyang; Luo, Yiqi; Wang, Ying-Ping; Hararuk, Oleksandra

    2013-07-01

    Biogeochemical models have been developed to account for more and more processes, making their complex structures difficult to understand and evaluate. Here, we introduce a framework to decompose a complex land model into traceable components based on mutually independent properties of modeled biogeochemical processes. The framework traces modeled ecosystem carbon storage capacity (Xss) to (i) a product of net primary productivity (NPP) and ecosystem residence time (τE). The latter, τE, can be further traced to (ii) baseline carbon residence times (τ'E), which are usually preset in a model according to vegetation characteristics and soil types, (iii) environmental scalars (ξ), including temperature and water scalars, and (iv) environmental forcings. We applied the framework to the Australian Community Atmosphere Biosphere Land Exchange (CABLE) model to help understand differences in modeled carbon processes among biomes and as influenced by nitrogen processes. With the climate forcings of 1990, modeled evergreen broadleaf forest had the highest NPP among the nine biomes and moderate residence times, leading to a relatively high carbon storage capacity (31.5 kg C m(-2)). Deciduous needle leaf forest had the longest residence time (163.3 years) and low NPP, leading to moderate carbon storage (18.3 kg C m(-2)). The longest τE in deciduous needle leaf forest was ascribed to its longest τ'E (43.6 years) and small ξ (0.14 on litter/soil carbon decay rates). Incorporation of nitrogen processes into the CABLE model decreased Xss in all biomes via reduced NPP (e.g., -12.1% in shrub land) or decreased τE or both. The decreases in τE resulted from nitrogen-induced changes in τ'E (e.g., -26.7% in C3 grassland) through carbon allocation among plant pools and transfers from plant to litter and soil pools. Our framework can be used to facilitate data-model comparisons and model intercomparisons via tracking a few traceable components for all terrestrial carbon
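
    The first step of the decomposition can be written out directly: carbon storage capacity is the product of NPP and ecosystem residence time, with the residence time lengthened when environmental scalars slow the baseline turnover. The numbers and the simple division by a single aggregate scalar below are illustrative assumptions; in the full framework the scalars act only on the litter and soil pools, and the values are not CABLE output.

    ```python
    # Toy illustration of the traceability decomposition X_ss = NPP * tau_E,
    # with tau_E approximated as the baseline residence time divided by an
    # aggregate environmental scalar.  All values are illustrative assumptions.

    def carbon_storage_capacity(npp_kgC_m2_yr, tau_baseline_yr, env_scalar):
        tau_e = tau_baseline_yr / env_scalar        # slower decay -> longer residence
        return npp_kgC_m2_yr * tau_e, tau_e

    biomes = {
        # name: (NPP in kg C m^-2 yr^-1, baseline residence time in yr, scalar in (0, 1])
        "evergreen broadleaf forest":  (1.10, 12.0, 0.45),
        "deciduous needleleaf forest": (0.12, 43.6, 0.30),
        "C3 grassland":                (0.45,  8.0, 0.55),
    }
    for name, (npp, tau0, xi) in biomes.items():
        x_ss, tau_e = carbon_storage_capacity(npp, tau0, xi)
        print(f"{name:28s}  tau_E = {tau_e:6.1f} yr   X_ss = {x_ss:5.1f} kg C m^-2")
    ```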

  4. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model on frustrated triangular and Union Jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.

  5. Topical video object discovery from key frames by modeling word co-occurrence prior.

    Science.gov (United States)

    Zhao, Gangqiang; Yuan, Junsong; Hua, Gang; Yang, Jiong

    2015-12-01

    A topical video object refers to an object that is frequently highlighted in a video. It could be, e.g., the product logo or the leading actor/actress in a TV commercial. We propose a topic model that incorporates a word co-occurrence prior for efficient discovery of topical video objects from a set of key frames. Previous work using topic models, such as latent Dirichlet allocation (LDA), for video object discovery often takes a bag-of-visual-words representation, which ignores important co-occurrence information among the local features. We show that such data-driven, bottom-up co-occurrence information can conveniently be incorporated in LDA with a Gaussian Markov prior, which combines top-down probabilistic topic modeling with bottom-up priors in a unified model. Our experiments on challenging videos demonstrate that the proposed approach can discover different types of topical objects despite variations in scale, view-point, color and lighting changes, or even partial occlusions. The efficacy of the co-occurrence prior is clearly demonstrated when compared with topic models without such priors.

  6. Component-Level Prognostics Health Management Framework for Passive Components - Advanced Reactor Technology Milestone: M2AT-15PN2301043

    Energy Technology Data Exchange (ETDEWEB)

    Ramuhalli, Pradeep; Roy, Surajit; Hirt, Evelyn H.; Prowant, Matthew S.; Pitman, Stan G.; Tucker, Joseph C.; Dib, Gerges; Pardini, Allan F.

    2015-06-19

    This report describes research results to date in support of the integration and demonstration of diagnostics technologies for prototypical advanced reactor passive components (to establish condition indices for monitoring) with model-based prognostics methods. Achieving this objective will necessitate addressing several of the research gaps and technical needs described in previous technical reports in this series.

  7. Definition of an Object-Oriented Modeling Language for Enterprise Architecture

    OpenAIRE

    Lê, Lam Son; Wegmann, Alain

    2005-01-01

    In enterprise architecture, the goal is to integrate business resources and IT resources in order to improve an enterprise's competitiveness. In an enterprise architecture project, the development team usually constructs a model that represents the enterprise: the enterprise model. In this paper, we present a modeling language for building such enterprise models. Our enterprise models are hierarchical object-oriented representations of the enterprises. This paper presents the foundations of o...

  8. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    M. N. Koeva

    2016-06-01

    Full Text Available Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects in each specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria – a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding the principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in the interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This

  9. Component- and system-level degradation modeling of digital Instrumentation and Control systems based on a Multi-State Physics Modeling Approach

    International Nuclear Information System (INIS)

    Wang, Wei; Di Maio, Francesco; Zio, Enrico

    2016-01-01

    Highlights: • A Multi-State Physics Modeling (MSPM) framework for reliability assessment is proposed. • Monte Carlo (MC) simulation is utilized to estimate the degradation state probability. • Due account is given to stochastic uncertainty and deterministic degradation progression. • The MSPM framework is applied to the reliability assessment of a digital I&C system. • Results are compared with the results obtained with a Markov Chain Model (MCM). - Abstract: A system-level degradation modeling approach is proposed for the reliability assessment of digital Instrumentation and Control (I&C) systems in Nuclear Power Plants (NPPs). At the component level, we focus on the reliability assessment of a Resistance Temperature Detector (RTD), which is an important digital I&C component used to guarantee the safe operation of NPPs. A Multi-State Physics Model (MSPM) is built to describe this component's degradation progression towards failure, and Monte Carlo (MC) simulation is used to estimate the probability of sojourn in any of the previously defined degradation states, accounting for both stochastic and deterministic processes that affect the degradation progression. The MC simulation relies on an integrated modeling of stochastic processes with deterministic aging of components, which proves fundamental for estimating the joint cumulative probability distribution of finding the component in any of the possible degradation states. The results of applying the proposed degradation model to a digital I&C system from the literature are compared with the results obtained by a Markov Chain Model (MCM). The integrated stochastic-deterministic process proposed here to drive the MC simulation makes it possible to integrate component-level models into a system-level model that would consider inter-system and/or inter-component dependencies and uncertainties.
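
    As an illustration of the kind of calculation the abstract describes, the following is a minimal Monte Carlo sketch that estimates state-occupancy probabilities for a hypothetical four-state degradation model whose transition rates grow deterministically with component age. The states, rates and horizon are invented for illustration and are not the paper's RTD model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-state degradation model: 0 = new, 1 = minor, 2 = major, 3 = failed.
# Baseline transition rates (per year) are illustrative, not the paper's RTD values.
BASE_RATES = {(0, 1): 0.10, (1, 2): 0.05, (2, 3): 0.02}

def aging_factor(t, alpha=0.03):
    """Deterministic aging: transition rates grow exponentially with component age."""
    return np.exp(alpha * t)

def simulate_history(horizon=40.0):
    """One Monte Carlo history; returns the state occupied at each yearly grid point."""
    t, state = 0.0, 0
    grid = np.arange(0.0, horizon + 1.0)
    occupied = np.zeros_like(grid, dtype=int)
    while state < 3:
        rate = BASE_RATES[(state, state + 1)] * aging_factor(t)
        dwell = rng.exponential(1.0 / rate)   # simple approximation: rate frozen at entry time
        occupied[(grid >= t) & (grid < t + dwell)] = state
        t += dwell
        if t >= horizon:
            return occupied
        state += 1
    occupied[grid >= t] = state               # absorbing failed state
    return occupied

histories = np.array([simulate_history() for _ in range(20000)])
for s in range(4):
    p40 = (histories[:, -1] == s).mean()
    print(f"P(state {s} at 40 years) ~ {p40:.3f}")
```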

  10. Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2013-01-01

    configuration. The system reliability estimated by the proposed method is a conservative estimate. Application of the suggested method could be extended to reliability estimation of systems composed of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between... was applied for estimating the system's failure functions. It is desirable to compare the results with the true system failure function, which can be estimated using simulation techniques. Theoretical model development should be pursued in further research. One of the directions for it might... be modeling the system based on Sequential Order Statistics, by considering the failure of the minimum (weakest component) at each loading level. The proposed idea of representing the system by independent components could also be used for modeling reliability by Sequential Order Statistics.

  11. AVIATION SECURITY AS AN OBJECT OF MATHEMATICAL MODELING

    Directory of Open Access Journals (Sweden)

    Lev N. Elisov

    2017-01-01

    The paper presents a mathematical formulation of the problem of formalizing the subject area related to aviation security in civil aviation. The formalization task is driven by the modern issue of providing aviation security. Aviation security in modern systems is based upon an organizational standard of security control. This standard does not require calculating the security level, and it allows the aviation security task to be addressed without estimating the solution or evaluating the performance of security facilities. The issue of an acceptable aviation security level remains unsolved, because its control relies on inspections that determine whether the object's security facilities meet the requirements or not. The pending problem is also whether the requirements are calculable, since the evaluation is subjective. Lately, a clear tendency has emerged to consider aviation security issues from the perspective of optimal control of the security level, with the associated identification, calculation and evaluation problems to be solved and decisions to be made. Analysis of the results obtained in this direction shows that it is strongly recommended to move to the object formalization problem, which provides a mathematical model for aviation security control optimization. In this case, the authors expect to find the answer in the process of object formalization. Aviation security is therefore presented as a security environment condition, which defines the parameters associated with the quality of the object protection system, depending on the use of protective equipment under counteraction to factors of external and internal threats. It is shown that the proposed model belongs to a class of boundary value problems described by partial differential equations. The classification of boundary value problems is presented.

  12. The Effect of Multidimensional Motivation Interventions on Cognitive and Behavioral Components of Motivation: Testing Martin's Model

    Directory of Open Access Journals (Sweden)

    Fatemeh PooraghaRoodbarde

    2017-04-01

    Objective: The present study aimed at examining the effect of multidimensional motivation interventions based on Martin's model on cognitive and behavioral components of motivation. Methods: The research design was prospective with pretest, posttest, and follow-up, and two experimental groups. In this study, 90 students (45 participants in the experimental group and 45 in the control group) constituted the sample, and they were selected by convenience sampling. Motivation interventions were implemented for fifteen 60-minute sessions, 3 times a week, lasting about 2 months. Data were analyzed using a repeated-measures multivariate analysis of variance. Results: The findings revealed that multidimensional motivation interventions resulted in a significant increase in the scores of cognitive components such as self-efficacy, mastery goal, test anxiety, and feeling of lack of control, and of behavioral components such as task management. The results of the one-month follow-up indicated the stability of the changes in test anxiety and cognitive strategies; however, no significant difference was found between the 2 groups at follow-up in self-efficacy, mastery goals, source of control, and motivation. Conclusions: The evidence indicated that academic motivation is multidimensional and is affected by cognitive and behavioral factors; therefore, researchers, teachers, and other authorities should attend to these factors to increase academic motivation.

  13. Prioritizing the refactoring need for critical component using combined approach

    Directory of Open Access Journals (Sweden)

    Rajni Sehgal

    2018-10-01

    One of the most promising strategies to smooth out the maintainability issues of software is refactoring. Due to a lack of proper design, code often inherits bad smells which may lead to improper functioning, especially when it is subject to change and requires maintenance. Many studies have been performed to optimize the refactoring strategy, which is also a very expensive process. In this paper, a component-based system is considered, and a Fuzzy Multi-Criteria Decision Making (FMCDM) model is proposed that combines subjective and objective weights to rank the components according to their urgency of refactoring. The JDeodorant tool is used to detect code smells in the individual components of a software system. The objective method uses the entropy approach to rank the components containing code smells. The subjective method uses the fuzzy TOPSIS approach, based on decision makers' judgement, to identify the criticality of these code smells and their dependencies on the overall software. The suggested approach is implemented on a component-based software system having 15 components, and the constituent components are ranked based on refactoring requirements.
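
    A minimal sketch of the objective (Shannon entropy) half of such a ranking, assuming a hypothetical matrix of code-smell counts per component; the smell types, counts and the simple weighted score are illustrative stand-ins for the paper's combined FMCDM procedure.

```python
import numpy as np

# Hypothetical decision matrix: rows = components, columns = code-smell counts
# (e.g. god class, long method, feature envy) detected per component.
X = np.array([
    [12.0, 4.0, 7.0],
    [ 3.0, 9.0, 2.0],
    [ 8.0, 1.0, 5.0],
    [15.0, 6.0, 9.0],
])

# Shannon entropy weighting: criteria whose values differ more across
# components carry more information and receive larger weights.
P = X / X.sum(axis=0)                       # normalise each criterion column
k = 1.0 / np.log(X.shape[0])
entropy = -k * (P * np.log(P)).sum(axis=0)
weights = (1.0 - entropy) / (1.0 - entropy).sum()

# Simple weighted score as a stand-in for the paper's combined subjective/objective ranking.
scores = (P * weights).sum(axis=1)
ranking = np.argsort(-scores)
print("criterion weights:", np.round(weights, 3))
print("components ranked by refactoring urgency:", ranking)
```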

  14. Archive Design Based on Planets Inspired Logical Object Model

    DEFF Research Database (Denmark)

    Zierau, Eld; Johansen, Anders

    2008-01-01

    We describe a proposal for a logical data model based on preliminary work in the Planets project. In OAIS terms, the main areas discussed are related to the introduction of a logical data model for representing the past, present and future versions of the digital object associated with the Archival St... Storage Package for the publications deposited by our client repositories.

  15. Authoring Systems Delivering Reusable Learning Objects

    Directory of Open Access Journals (Sweden)

    George Nicola Sammour

    2009-10-01

    A three-layer e-learning course development model has been defined based on a conceptual model of learning content objects. It starts by decomposing the learning content into small chunks which are initially placed in a hierarchic structure of units and blocks. The raw content components, being the atomic learning objects (ALOs), were linked to the blocks and are structured in the database. We set forward a dynamic generation of LOs using reusable e-learning raw materials, or ALOs. In that view we need an LO authoring/assembling system fitting the requirements of interoperability and reusability, starting from selecting the raw learning content from the learning materials content database. In practice, authoring systems are used to develop e-learning courses. The company EDUWEST has developed an authoring system that is database based and will be SCORM compliant in the near future.

  16. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain normalization coefficients with high statistical accuracy from a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the accuracy of the system modeling. Therefore, the normalization weighting approach, in which normalization coefficients are applied directly to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct the system model. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize the errors of the system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required in the direct method. (author)

  17. Tool Support for Collaborative Teaching and Learning of Object-Oriented Modelling

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius; Ratzer, Anne Vinter

    2002-01-01

    Modeling is central to doing and learning object-oriented development. We present a new tool, Ideogramic UML, for gesture-based collaborative modeling with the Unified Modeling Language (UML), which can be used to collaboratively teach and learn modeling. Furthermore, we discuss how we have...

  18. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  19. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  20. Modelling raster-based monthly water balance components for Europe

    Energy Technology Data Exchange (ETDEWEB)

    Ulmen, C.

    2000-11-01

    The terrestrial runoff component is a comparatively small but sensitive and thus significant quantity in the global energy and water cycle at the interface between the landmass and the atmosphere. As opposed to soil moisture and evapotranspiration, which critically determine water vapour fluxes and thus water and energy transport, it can be measured as an integrated quantity over a large area, i.e. the river basin. This peculiarity makes terrestrial runoff ideally suited for the calibration, verification and validation of general circulation models (GCMs). Gauging stations are not homogeneously distributed in space. Moreover, time series are not necessarily continuously measured, nor do they in general have overlapping time periods. To overcome these problems with regard to the regular grid spacing used in GCMs, different methods can be applied to transform irregular data to regular, so-called gridded runoff fields. The present work aims to directly compute the gridded components of the monthly water balance (including gridded runoff fields) for Europe by applying the well-established raster-based macro-scale water balance model WABIMON used at the Federal Institute of Hydrology, Germany. Model calibration and validation are performed by separate examination of 29 representative European catchments. Results indicate a general applicability of the model, delivering reliable overall patterns and integrated quantities on a monthly basis. For time steps of less than two weeks, further research and structural improvements of the model are suggested. (orig.)

  1. Modeling guidance and recognition in categorical search: bridging human and computer object detection.

    Science.gov (United States)

    Zelinsky, Gregory J; Peng, Yifan; Berg, Alexander C; Samaras, Dimitris

    2013-10-08

    Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery.
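
    A rough sketch of the two-stage idea (a guidance detector applied to blurred, peripheral views and a recognition classifier applied to unblurred views), using scikit-learn SVMs on toy colour-histogram features and random image patches; the features and data are placeholders, not the HMAX or colour-histogram models used in the study.

```python
import numpy as np
from sklearn.svm import SVC
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

def color_histogram(img, bins=8):
    """Toy feature: per-channel intensity histogram (a stand-in for the paper's features)."""
    return np.concatenate([np.histogram(img[..., c], bins=bins, range=(0, 1), density=True)[0]
                           for c in range(img.shape[-1])])

# Hypothetical data: 200 small RGB patches, half labelled "target", half "distractor".
images = rng.random((200, 32, 32, 3))
labels = np.repeat([1, 0], 100)

# "Guidance" detector: trained on sharp images, applied to blurred (peripheral) views.
X_sharp = np.array([color_histogram(im) for im in images])
X_blur = np.array([color_histogram(gaussian_filter(im, sigma=(2, 2, 0))) for im in images])

guidance = SVC(kernel="rbf", probability=True).fit(X_sharp, labels)
priority = guidance.predict_proba(X_blur)[:, 1]        # which object to fixate first
first_fixated = int(np.argmax(priority))

# "Recognition" classifier: verifies the fixated object on the unblurred features.
recognition = SVC(kernel="rbf").fit(X_sharp, labels)
print("first fixated:", first_fixated,
      "classified as target:", bool(recognition.predict(X_sharp[[first_fixated]])[0]))
```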

  2. Logistic Model to Support Service Modularity for the Promotion of Reusability in a Web Objects-Enabled IoT Environment.

    Science.gov (United States)

    Kibria, Muhammad Golam; Ali, Sajjad; Jarwar, Muhammad Aslam; Kumar, Sunil; Chong, Ilyoung

    2017-09-22

    Due to the very large number of connected virtual objects in the surrounding environment, intelligent service features in the Internet of Things require the reuse of existing virtual objects and composite virtual objects. If a new virtual object were created for each new service request, the number of virtual objects would increase exponentially. The Web of Objects applies the principle of service modularity in terms of virtual objects and composite virtual objects. Service modularity is a key concept in the Web Objects-Enabled Internet of Things (IoT) environment which allows for the reuse of existing virtual objects and composite virtual objects in heterogeneous ontologies. In the case of similar service requests occurring at the same or different locations, the already-instantiated virtual objects and their composites that exist in the same or different ontologies can be reused. In this case, similar types of virtual objects and composite virtual objects are searched and matched. Their reuse avoids duplication under similar circumstances, and reduces the time it takes to search for and instantiate them from their repositories, where similar functionalities are provided by similar types of virtual objects and their composites. Controlling and maintaining a virtual object means controlling and maintaining a real-world object in the real world. Even though the functional costs of virtual objects are just a fraction of those for deploying and maintaining real-world objects, this article focuses on reusing virtual objects and composite virtual objects, and discusses similarity matching of virtual objects and composite virtual objects. This article proposes a logistic model that supports service modularity for the promotion of reusability in the Web Objects-enabled IoT environment. Necessary functional components and a flowchart of an algorithm for reusing composite virtual objects are discussed. Also, to realize the service modularity, a use case scenario is presented.

  3. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Science.gov (United States)

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information into models of microbial ecosystems. Models following this approach will provide insights into behaviors (including diversity) that take place at the ecosystem scale.

  4. Structural assessment of aerospace components using image processing algorithms and Finite Element models

    DEFF Research Database (Denmark)

    Stamatelos, Dimtrios; Kappatos, Vassilios

    2017-01-01

    Purpose – This paper presents the development of an advanced structural assessment approach for aerospace components (metallic and composites). This work focuses on developing an automatic image processing methodology based on Non Destructive Testing (NDT) data and numerical models, for predicting...... the residual strength of these components. Design/methodology/approach – An image processing algorithm, based on the threshold method, has been developed to process and quantify the geometric characteristics of damages. Then, a parametric Finite Element (FE) model of the damaged component is developed based...... on the inputs acquired from the image processing algorithm. The analysis of the metallic structures is employing the Extended FE Method (XFEM), while for the composite structures the Cohesive Zone Model (CZM) technique with Progressive Damage Modelling (PDM) is used. Findings – The numerical analyses...

  5. A multi-component and multi-failure mode inspection model based on the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan; Pecht, Michael

    2010-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practices have been reported in many papers and case studies. For a system comprised of many components and subject to many different failure modes, one of the most convenient ways to model the inspection and failure processes is to use a stochastic point process for defect arrivals and a common delay time distribution for the duration between defect arrival and failure, applied to all defects. This is an approximation, but it has been proven to be valid when the number of components is large. However, for a system with just a few key components, subject to a few major failure modes, the approximation may be poor. In this paper, a model is developed to address this situation, where each component and failure mode is modelled individually and then pooled together to form the system inspection model. Since inspections are usually scheduled for the whole system rather than for individual components, we then formulate the inspection model for the case where the time to the next inspection from the point of a component failure renewal is random. This complicates the model, and an asymptotic solution was found. Simulation algorithms have also been proposed as a comparison to the analytical results. A numerical example is presented to demonstrate the model.
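
    A toy simulation of the basic delay-time mechanism for a single component and failure mode, assuming Poisson defect arrivals, exponentially distributed delay times and periodic inspections; all rates and costs below are hypothetical, chosen only to show how the inspection interval trades inspection cost against failure cost.

```python
import numpy as np

rng = np.random.default_rng(2)

def expected_cost_rate(T, horizon=10000.0, defect_rate=0.2,
                       mean_delay=4.0, c_insp=1.0, c_fail=50.0):
    """Simulate the delay-time model with inspection interval T; return cost per unit time."""
    n_defects = rng.poisson(defect_rate * horizon)
    arrivals = rng.uniform(0.0, horizon, n_defects)           # defect arrival times
    delays = rng.exponential(mean_delay, n_defects)           # delay from defect to failure
    failures = arrivals + delays
    next_inspection = (np.floor(arrivals / T) + 1) * T        # first inspection after arrival
    caught = next_inspection < failures                       # defect found before it fails
    n_failures = np.count_nonzero(~caught)
    n_inspections = horizon / T
    return (c_insp * n_inspections + c_fail * n_failures) / horizon

for T in (1.0, 2.0, 5.0, 10.0):
    print(f"T = {T:4.1f}: cost rate ~ {expected_cost_rate(T):.3f}")
```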

  6. Evaluation of the RELAP5/MOD3 multidimensional component model

    International Nuclear Information System (INIS)

    Tomlinson, E.T.; Rens, T.E.; Coffield, R.D.

    1994-01-01

    Accurate plenum predictions, which are directly related to the mixing models used, are an important plant modeling consideration because of their impact on basic transient performance calculations for the integrated system. The effect of a plenum is a time shift between inlet and outlet temperature changes for the particular volume. Perfect mixing, where the total volume interacts instantaneously with the total inlet flow, does not occur because of effects such as inlet/outlet nozzle jetting, flow stratification, nested vortices within the volume and the general three-dimensional velocity distribution of the flow field. The time lag between the inlet and outlet flows affects the predicted rate of temperature change experienced by various plant system components, and this in turn affects local component analyses that depend on the rate of temperature change. This study includes a comparison of two-dimensional plenum mixing predictions using CFD-FLOW3D, RELAP5/MOD3 and perfect mixing models. Three different geometries (flat, square and tall) are assessed for scalar transport times using a wide range of inlet velocity and isothermal conditions. In addition, the three geometries were evaluated for low-flow conditions with the inlet flow experiencing a large step temperature decrease. A major conclusion from this study is that the RELAP5/MOD3 multidimensional component model appears to adequately predict plenum mixing for a wide range of thermal-hydraulic conditions representative of plant transients.

  7. A Model of Socially Connected Web Objects for IoT Applications

    Directory of Open Access Journals (Sweden)

    Sajjad Ali

    2018-01-01

    The Internet of Things (IoT) is evolving with connected objects at an unprecedented rate, bringing about enormous opportunities for future IoT applications as well as challenges. One of the major challenges is to handle the complexity generated by the interconnection of billions of objects. However, the Social Internet of Things (SIoT), emerging from the conglomeration of IoT and social networks, has realized an efficient way to facilitate the development of complex future IoT applications. Nevertheless, to fully utilize the benefits of SIoT, a platform that can provide efficient services using social relations among heterogeneous objects is highly required. The web objects enabled IoT environment promotes SIoT features by enabling virtualization using virtual objects and supporting modularity with microservices. To realize SIoT services, this article proposes an architecture that provides a foundation for the development of lightweight microservices based on socially connected web objects. To efficiently discover web objects and reduce the complexity of service provisioning processes, a social relationship model is presented. To realize interoperable service operations, a semantic ontology model has been developed. Finally, to evaluate the proposed design, a prototype has been implemented based on a use case scenario.

  8. An object-oriented language-database integration model: The composition filters approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, Sinan; Vural, S.

    1991-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  9. An Object-Oriented Language-Database Integration Model: The Composition-Filters Approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, S.; Vural, Sinan; Lehrmann Madsen, O.

    1992-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  10. Modeling cellular networks in fading environments with dominant specular components

    KAUST Repository

    AlAmmouri, Ahmad

    2016-07-26

    Stochastic geometry (SG) has been widely accepted as a fundamental tool for modeling and analyzing cellular networks. However, the fading models used with SG analysis are mainly confined to the simplistic Rayleigh fading, which is extended to Nakagami-m fading in some special cases. Neither the Rayleigh nor the Nakagami-m model accounts for dominant specular components (DSCs), which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks with a generalized two-ray (GTR) fading channel. The GTR fading explicitly accounts for two DSCs in addition to the diffuse components and offers high flexibility to capture the diverse fading channels that appear in realistic outdoor/indoor wireless communication scenarios. It also encompasses the famous Rayleigh and Rician fading as special cases. To this end, the prominent effect of DSCs is highlighted in terms of average spectral efficiency. © 2016 IEEE.
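
    For reference, a commonly used form of a generalized two-ray fading channel, written here in generic notation; the symbols below are a standard textbook-style parameterization, not copied from the paper.

```latex
% Generalized two-ray (GTR) fading: two dominant specular components plus diffuse scatter.
% V_1, V_2 are the specular amplitudes, \phi_1, \phi_2 their (random) phases,
% and X + jY is a zero-mean complex Gaussian diffuse term.
h \;=\; V_1 e^{j\phi_1} \;+\; V_2 e^{j\phi_2} \;+\; X + jY,
\qquad X, Y \sim \mathcal{N}(0, \sigma^2).
% Special cases: V_1 = V_2 = 0 gives Rayleigh fading; V_2 = 0 gives Rician fading
% with K-factor K = V_1^2 / (2\sigma^2).
```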

  11. Finsler Geometry Modeling of Phase Separation in Multi-Component Membranes

    Directory of Open Access Journals (Sweden)

    Satoshi Usui

    2016-08-01

    A Finsler geometric surface model is studied as a coarse-grained model for membranes of three components, such as zwitterionic phospholipid (DOPC), lipid (DPPC) and an organic molecule (cholesterol). To understand the phase separation into liquid-ordered (DPPC-rich, Lo) and liquid-disordered (DOPC-rich, Ld) phases, we introduce a binary variable σ (= ±1) into the triangulated surface model. We numerically determine that circular and stripe domains appear on the surface. The dependence of the morphological change on the area fraction of Lo is consistent with existing experimental results. This provides us with a clear understanding of the origin of the line tension energy, which has been used to explain these morphological changes in three-component membranes. In addition to these circular and stripe domains, a raft-like domain and a budding domain are also observed, and the corresponding phase diagrams are obtained.

  12. The Component Slope Linear Model for Calculating Intensive Partial Molar Properties: Application to Waste Glasses

    International Nuclear Information System (INIS)

    Reynolds, Jacob G.

    2013-01-01

    Partial molar properties are the changes occurring when the fraction of one component is varied while the mole fractions of all other components change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in the fraction of one component must be offset by a change in one or more other components. Given that more than one component fraction is changing at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on this curve is the partial molar property for that constituent. Plotting this graph has been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. The model is demonstrated by applying it to constant-pressure heat capacity data from the NaOH-NaAl(OH)4-H2O system, a simplified representation of Hanford nuclear waste. The partial molar properties of H2O, NaOH, and NaAl(OH)4 are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM has previously been used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperatures of the glass components.
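
    As a minimal illustration of reading partial molar properties off a mixture model, the sketch below fits a linear-in-mole-fraction model to made-up heat-capacity data; in this simplest (strictly linear) case the fitted coefficients play the role of the partial molar properties. The CSLM itself handles the slope interpretation more generally, and the compositions and values here are hypothetical, not the NaOH-NaAl(OH)4-H2O measurements.

```python
import numpy as np

# Hypothetical mixture data: mole fractions of three components (rows sum to 1)
# and a measured intensive property (e.g. heat capacity per mole of mixture).
x = np.array([
    [0.70, 0.20, 0.10],
    [0.60, 0.25, 0.15],
    [0.55, 0.30, 0.15],
    [0.50, 0.30, 0.20],
    [0.45, 0.35, 0.20],
    [0.40, 0.35, 0.25],
])
cp = np.array([72.0, 75.5, 77.1, 79.0, 80.4, 82.2])   # made-up measurements

# Linear-in-mole-fraction model Cp = sum_i b_i * x_i (no intercept, since x sums to 1).
# Under this simple model the fitted b_i act as the partial molar heat capacities.
b, *_ = np.linalg.lstsq(x, cp, rcond=None)
for name, bi in zip(["component 1", "component 2", "component 3"], b):
    print(f"partial molar Cp of {name}: {bi:.1f}")
```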

  13. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    Science.gov (United States)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-dimensional shape/texture optimal synthetic representation, description and learning, was presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space, and a demo application, are presented here. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in the areas of advanced biomedical engineering, biometrics, intelligent computing, target recognition, content image retrieval and data mining. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world object. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. As a matter of fact, any highly sophisticated application needing effective, robust object geometric/colour invariant attribute capture and parameterization, for reliable automated object learning and discrimination, can deeply benefit from GEOGINE's progressive automated model generation computational kernel. Main operational advantages over previous...

  14. Building Component Library: An Online Repository to Facilitate Building Energy Model Creation; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Fleming, K.; Long, N.; Swindler, A.

    2012-05-01

    This paper describes the Building Component Library (BCL), the U.S. Department of Energy's (DOE) online repository of building components that can be directly used to create energy models. This comprehensive, searchable library consists of components and measures as well as the metadata which describes them. The library is also designed to allow contributors to easily add new components, providing a continuously growing, standardized list of components for users to draw upon.

  15. Bourbaki's structure theory in the problem of complex systems simulation models synthesis and model-oriented programming

    Science.gov (United States)

    Brodsky, Yu. I.

    2015-01-01

    The work is devoted to the application of Bourbaki's structure theory to substantiate the synthesis of simulation models of complex multicomponent systems, where every component may be a complex system itself. An application of Bourbaki's structure theory offers a new approach to the design and computer implementation of simulation models of complex multicomponent systems: model synthesis and model-oriented programming. It differs from the traditional object-oriented approach. The central concept of this new approach, and at the same time the basic building block for the construction of more complex structures, is the concept of the model-component. A model-component is endowed with a more complicated structure than, for example, an object in object-oriented analysis. This structure provides the model-component with independent behavior: the ability to respond in a standard way to standard requests of its internal and external environment. At the same time, the computer implementation of a model-component's behavior is invariant under the integration of model-components into complexes. This fact allows one, firstly, to construct fractal models of any complexity, and secondly, to implement the computational process for such constructions uniformly, by a single universal program. In addition, the proposed paradigm allows one to exclude imperative programming and to generate computer code with a high degree of parallelism.

  16. Developing a Novel Multi-objective Programming Model for Personnel Assignment Problem

    Directory of Open Access Journals (Sweden)

    Mehdi Seifbarghy

    2014-05-01

    The assignment of personnel to the right positions in order to increase an organization's performance is one of the most crucial tasks in human resource management. In this paper, the personnel assignment problem is formulated as a multi-objective binary integer programming model in which the skills, level of satisfaction and training cost of personnel are considered simultaneously for a production company. The purpose of this model is to obtain the best matching between candidates and positions. In this model, a set of methods such as a group analytic hierarchy process (GAHP), Shannon entropy, coefficient of variation (CV) and fuzzy logic are used to calculate the weights of the evaluation criteria, the weights of the positions and the coefficients of the objective functions. The proposed model can rationalize the subjective judgments of decision makers with mathematical models.
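
    An illustrative, generic form such a multi-objective binary assignment model could take; the symbols, objectives and constraints below are assumptions for exposition, not the paper's exact formulation.

```latex
% x_{ij} = 1 if candidate i is assigned to position j; s_{ij}, v_{ij}, c_{ij} are the
% skill match, satisfaction level and training cost of that assignment; w_j are position weights.
\max \; Z_1 = \sum_{i}\sum_{j} w_j\, s_{ij}\, x_{ij}, \qquad
\max \; Z_2 = \sum_{i}\sum_{j} v_{ij}\, x_{ij}, \qquad
\min \; Z_3 = \sum_{i}\sum_{j} c_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{j} x_{ij} \le 1 \;\; \forall i, \qquad
\sum_{i} x_{ij} = 1 \;\; \forall j, \qquad
x_{ij} \in \{0,1\}.
```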

  17. Steady-State Plant Model to Predict Hydrogen Levels in Power Plant Components

    Energy Technology Data Exchange (ETDEWEB)

    Glatzmaier, Greg C.; Cable, Robert; Newmarker, Marc

    2017-06-27

    The National Renewable Energy Laboratory (NREL) and Acciona Energy North America developed a full-plant steady-state computational model that estimates levels of hydrogen in parabolic trough power plant components. The model estimated dissolved hydrogen concentrations in the circulating heat transfer fluid (HTF), and corresponding partial pressures within each component. Additionally for collector field receivers, the model estimated hydrogen pressure in the receiver annuli. The model was developed to estimate long-term equilibrium hydrogen levels in power plant components, and to predict the benefit of hydrogen mitigation strategies for commercial power plants. Specifically, the model predicted reductions in hydrogen levels within the circulating HTF that result from purging hydrogen from the power plant expansion tanks at a specified target rate. Our model predicted hydrogen partial pressures from 8.3 mbar to 9.6 mbar in the power plant components when no mitigation treatment was employed at the expansion tanks. Hydrogen pressures in the receiver annuli were 8.3 to 8.4 mbar. When hydrogen partial pressure was reduced to 0.001 mbar in the expansion tanks, hydrogen pressures in the receiver annuli fell to a range of 0.001 mbar to 0.02 mbar. When hydrogen partial pressure was reduced to 0.3 mbar in the expansion tanks, hydrogen pressures in the receiver annuli fell to a range of 0.25 mbar to 0.28 mbar. Our results show that controlling hydrogen partial pressure in the expansion tanks allows us to reduce and maintain hydrogen pressures in the receiver annuli to any practical level.

  18. Geospatial Database for Strata Objects Based on Land Administration Domain Model (LADM)

    Science.gov (United States)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

    Recently in our country, the construction of buildings has become more complex, and it seems that a strata objects database becomes more important for registering the real world, as people now own and use multiple levels of space. Furthermore, strata titles are increasingly important and need to be well managed. LADM, also known as ISO 19152, is a standard model for land administration which allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata objects database using LADM. This paper discusses the current 2D geospatial database and the need for a 3D geospatial database in the future. It also attempts to develop a strata objects database using a standard data model (LADM) and to analyze the developed strata objects database. The current cadastre system in Malaysia, including strata titles, is discussed in this paper. The problems in the 2D geospatial database are listed and the need for a 3D geospatial database in the future is also discussed. The processes for designing a strata objects database are conceptual, logical and physical database design. The strata objects database will allow us to find both non-spatial and spatial strata title information and thus show the location of the strata unit. This development of a strata objects database may help in handling strata titles and information.

  19. Objectives for next generation of practical short-range atmospheric dispersion models

    International Nuclear Information System (INIS)

    Olesen, H.R.; Mikkelsen, T.

    1992-01-01

    The proceedings contain papers from the workshop "Objectives for Next Generation of Practical Short-Range Atmospheric Dispersion Models". They deal with two types of models, namely models for regulatory purposes and models for real-time applications. The workshop was the result of an action started in 1991 for increased cooperation and harmonization within atmospheric dispersion modelling. The focus of the workshop was on the management of model development and the definition of model objectives, rather than on detailed model contents. The intention was to identify actions that can be taken to improve the development and use of atmospheric dispersion models. The papers in the proceedings deal with various topics within the broad spectrum of matters related to up-to-date practical models, such as their scientific basis, requirements for model input and output, meteorological preprocessing, standardisation within modelling, electronic information exchange as a potentially useful tool, model evaluation, and data bases for model evaluation. In addition to the papers, the proceedings contain summaries of the discussions at the workshop. These summaries point to a number of recommended actions which can be taken in order to improve "modelling culture". (AB)

  20. Neural networks engaged in tactile object manipulation: patterns of expression among healthy individuals

    Directory of Open Access Journals (Sweden)

    Rüdiger J. Seitz

    2010-11-01

    Background: Somatosensory object discrimination has been shown to involve widespread cortical and subcortical structures in both cerebral hemispheres. In this study we aimed to identify the networks involved in tactile object manipulation by principal component analysis (PCA) of individual subjects. We expected to find more than one network. Methods: Seven healthy right-handed male volunteers (aged 22 to 44 yrs) manipulated aluminium spheres with their right hand for 5 s at a repetition frequency of 0.5-0.7 Hz. The correlation coefficients between the principal component temporal expression coefficients and the hemodynamic response modelled by SPM (ecc) determined the task-related components. To establish reproducibility within subjects and similarity of functional connectivity patterns among subjects, regional correlation coefficients (rcc) were computed between task-related component image volumes. By hierarchically categorizing, selecting and averaging the task-related component image volumes across subjects according to the rccs, mean component images (MCIs) were derived describing the neural networks associated with tactile object manipulation. Results: Two independent mean component images emerged. Each included the primary sensorimotor cortex contralateral to the manipulating hand. The region extended to the premotor cortex in MCI 1, whereas it was restricted to the hand area of the primary sensorimotor cortex in MCI 2. MCI 1 showed bilateral involvement of the paralimbic anterior cingulate cortex (ACC), whereas MCI 2 implicated the midline thalamic nuclei and two areas of the rostral dorsal pons. Conclusions: Two distinct networks participate in tactile object manipulation, as revealed by the intra- and interindividual comparison of individual scans. Both were employed by most subjects, suggesting that both are involved in normal somatosensory object discrimination.
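
    A schematic sketch of the analysis idea, PCA of a voxels-by-time matrix followed by correlation of component time courses with a modelled response, using synthetic data and a crude smoothed box-car in place of the SPM hemodynamic model; thresholds and dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "fMRI" data: 500 voxels x 120 time points, with a task-locked signal
# added to a subset of voxels. The regressor stands in for the SPM-modelled response.
n_vox, n_t = 500, 120
task = np.zeros(n_t)
for start in (10, 40, 70, 100):
    task[start:start + 10] = 1.0                                # block design
hrf_like = np.convolve(task, np.hanning(8), mode="same")        # crude smoothing, not a real HRF

data = rng.normal(0, 1, (n_vox, n_t))
data[:60] += 0.8 * hrf_like                                     # task-related voxels

# PCA via SVD of the mean-centred data; rows of Vt are temporal expression profiles.
centred = data - data.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)

# Correlate each component's time course with the modelled response (the "ecc" idea).
ecc = np.array([np.corrcoef(Vt[k], hrf_like)[0, 1] for k in range(10)])
task_related = np.where(np.abs(ecc) > 0.4)[0]
print("correlations of first 10 components:", np.round(ecc, 2))
print("task-related components:", task_related)
```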

  1. An Extensible Component-Based Multi-Objective Evolutionary Algorithm Framework

    DEFF Research Database (Denmark)

    Sørensen, Jan Corfixen; Jørgensen, Bo Nørregaard

    2017-01-01

    The ability to easily modify the problem definition is currently missing in Multi-Objective Evolutionary Algorithms (MOEA). Existing MOEA frameworks do not support dynamic addition and extension of the problem formulation. The existing frameworks require a re-specification of the problem definition...

  2. Mathematical Model for Multicomponent Adsorption Equilibria Using Only Pure Component Data

    DEFF Research Database (Denmark)

    Marcussen, Lis

    2000-01-01

    A mathematical model for nonideal adsorption equilibria in multicomponent mixtures is developed. It is applied with good results for pure substances and for prediction of strongly nonideal multicomponent equilibria using only pure component data. The model accounts for adsorbent...

  3. A Three-Component Model for Magnetization Transfer. Solution by Projection-Operator Technique, and Application to Cartilage

    Science.gov (United States)

    Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.

    1996-01-01

    A projection-operator technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung, J. Magn. Reson. A 104, 321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson, J. Magn. Reson. A 106, 37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is natural separation of relaxation and source terms and allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.

  4. Requirements-level semantics and model checking of object-oriented statecharts

    NARCIS (Netherlands)

    Eshuis, H.; Jansen, D.N.; Wieringa, Roelf J.

    2002-01-01

    In this paper we define a requirements-level execution semantics for object-oriented statecharts and show how properties of a system specified by these statecharts can be model checked using tool support for model checkers. Our execution semantics is requirements-level because it uses the perfect

  5. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Science.gov (United States)

    O'Donnell, Katherine M; Thompson, Frank R; Semlitsch, Raymond D

    2015-01-01

    Detectability of individual animals is highly variable and nearly always less than one. We used binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes the ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated parameter estimates similar to those of standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time since rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given that an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase the reliability of population parameter estimates.
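
    A minimal sketch of the basic binomial (N-)mixture likelihood for repeated counts, with hypothetical data; here only an effective detection probability is estimated, whereas the design described in the abstract additionally partitions detection into availability (temporary emigration) and conditional detection.

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical repeated counts: rows = sites, columns = surveys within a season.
counts = np.array([
    [3, 2, 4],
    [0, 1, 0],
    [5, 3, 4],
    [2, 2, 1],
])

def neg_log_lik(params, counts, n_max=80):
    """Basic binomial (N-)mixture model: N_i ~ Poisson(lam), y_ij | N_i ~ Binomial(N_i, p).
    p is an effective detection probability; a richer design is needed to split it into
    availability (temporary emigration) and conditional detection as in the study."""
    lam, p = np.exp(params[0]), expit(params[1])
    n = np.arange(n_max + 1)
    prior = poisson.pmf(n, lam)                                   # P(N_i = n)
    ll = 0.0
    for y in counts:
        like_n = np.prod([binom.pmf(yj, n, p) for yj in y], axis=0)
        ll += np.log(np.sum(like_n * prior) + 1e-300)
    return -ll

fit = minimize(neg_log_lik, x0=[np.log(5.0), 0.0], args=(counts,), method="Nelder-Mead")
print(f"lambda_hat ~ {np.exp(fit.x[0]):.2f}, p_hat ~ {expit(fit.x[1]):.2f}")
```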

  6. Towards a Complete Model for Software Component Deployment on Heterogeneous Platform

    Directory of Open Access Journals (Sweden)

    Švogor Ivan

    2014-12-01

    This report briefly describes ongoing research related to the optimal allocation of software components to a heterogeneous computing platform (which includes CPU, GPU and FPGA). The research goal is also presented, along with current hot topics of the research area, related research teams, and finally the results and contribution of my research. It involves mathematical modelling which results in a goal function, an optimization method which finds a suboptimal solution to the goal function, and a software modelling tool which enables graphical representation of the problem at hand and helps developers determine component placement in the system design phase.

  7. The ORC method. Effective modelling of thermal performance of multilayer building components

    Energy Technology Data Exchange (ETDEWEB)

    Akander, Jan

    2000-02-01

    The ORC Method (Optimised RC-networks) provides a means of modelling one- or multidimensional heat transfer in building components, in this context within building simulation environments. The methodology is shown, primarily applied to heat transfer in multilayer building components. For multilayer building components, the analytical thermal performance is known, given layer thicknesses and material properties. The aim of the ORC Method is to optimise the values of the thermal resistances and heat capacities of an RC-model so that the model performance agrees well with the analytical performance over a wide range of frequencies. The optimisation is carried out in the frequency domain, where the overall deviation between model and analytical frequency response, in terms of admittance and dynamic transmittance, is minimised. It is shown that ORCs are effective in terms of accuracy and computational time in comparison to finite difference models when used in building simulations, in this case with IDA/ICE. An ORC configuration of five mass nodes has been found to model building components in the Nordic countries well, within the application of thermal comfort and energy requirement simulations. Simple RC-networks, such as the surface heat capacity and the simple R-C configuration, are not appropriate for detailed building simulation. However, these can be used as a basis for defining the effective heat capacity of a building component. An approximate method is suggested for determining the effective heat capacity without the use of complex numbers. This entity can be calculated on the basis of layer thicknesses and material properties with the help of two time constants. The approximate method can give inaccuracies of around 20%. In-situ measurements have been carried out in an experimental building with the purpose of establishing the effective heat capacity of external building components that are subjected to normal thermal conditions. The auxiliary

  8. Learning-based stochastic object models for characterizing anatomical variations

    Science.gov (United States)

    Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua

    2018-03-01

    It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs, under the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.

  9. Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators

    Science.gov (United States)

    Nesarajah, Marco; Frey, Georg

    This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEG), and the validation of the model by means of a test bench. TEGs are capable of improving the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model, any TEG can be described and simulated given the material properties and the physical dimensions. This model has now been extended with the surrounding components to a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.
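
    For orientation, the standard lumped description of a TEG's electrical output in textbook form; this is not the paper's Modelica component, and the symbols are generic.

```latex
% Open-circuit voltage from the Seebeck effect and power delivered to a load R_L,
% for a module with Seebeck coefficient \alpha_{pn}, internal resistance R_i and
% temperature difference \Delta T across the junctions.
V_{oc} = \alpha_{pn}\,\Delta T, \qquad
P_{L} = \frac{(\alpha_{pn}\,\Delta T)^2\, R_L}{(R_{i} + R_L)^2}, \qquad
P_{L}^{\max} = \frac{(\alpha_{pn}\,\Delta T)^2}{4 R_{i}} \;\;\text{at}\;\; R_L = R_{i}.
```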

  10. Flexible Multibody Systems Models Using Composite Materials Components

    International Nuclear Information System (INIS)

    Neto, Maria Augusta; Ambrósio, Jorge A. C.; Leal, Rogério Pereira

    2004-01-01

    The use of a multibody methodology to describe the large motion of complex systems that experience structural deformations makes it possible to represent the complete system motion, the relative kinematics between the components involved, the deformation of the structural members and the inertia coupling between the large rigid body motion and the system elastodynamics. In this work, the flexible multibody dynamics formulations of complex models are extended to include elastic components made of composite materials, which may be laminated and anisotropic. The deformation of any structural member must be elastic and linear, when described in a coordinate frame fixed to one or more material points of its domain, regardless of the complexity of its geometry. To achieve the proposed flexible multibody formulation, a finite element model for each flexible body is used. For the beam composite material elements, the section properties are found using an asymptotic procedure that involves a two-dimensional finite element analysis of their cross-section. The equations of motion of the flexible multibody system are solved using an augmented Lagrangian formulation and the accelerations and velocities are integrated in time using a multi-step multi-order integration algorithm based on the Gear method.

  11. A Model of Yeast Cell-Cycle Regulation Based on a Standard Component Modeling Strategy for Protein Regulatory Networks.

    Directory of Open Access Journals (Sweden)

    Teeraphan Laomettachit

    Full Text Available To understand the molecular mechanisms that regulate cell cycle progression in eukaryotes, a variety of mathematical modeling approaches have been employed, ranging from Boolean networks and differential equations to stochastic simulations. Each approach has its own characteristic strengths and weaknesses. In this paper, we propose a "standard component" modeling strategy that combines advantageous features of Boolean networks, differential equations and stochastic simulations in a framework that acknowledges the typical sorts of reactions found in protein regulatory networks. Applying this strategy to a comprehensive mechanism of the budding yeast cell cycle, we illustrate the potential value of standard component modeling. The deterministic version of our model reproduces the phenotypic properties of wild-type cells and of 125 mutant strains. The stochastic version of our model reproduces the cell-to-cell variability of wild-type cells and the partial viability of the CLB2-dbΔ clb5Δ mutant strain. Our simulations show that mathematical modeling with "standard components" can capture in quantitative detail many essential properties of cell cycle control in budding yeast.

  12. Interpolation on the manifold of K component GMMs.

    Science.gov (United States)

    Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas

    2015-12-01

    Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof of principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, that may be of independent interest.

  13. Multi-objective optimization of the reactor coolant system

    International Nuclear Information System (INIS)

    Chen Lei; Yan Changqi; Wang Jianjun

    2014-01-01

    Background: Weight and size are important criteria in evaluating the performance of a nuclear power plant. It is of great theoretical value and engineering significance to reduce the weight and volume of the components of a nuclear power plant through optimization methodology. Purpose: In order to provide a new method for the multi-objective optimization of nuclear power plants, the concept of the non-dominated solution was introduced. Methods: Based on the parameters of the Qinshan I nuclear power plant, the mathematical models of the reactor core, the reactor vessel, the main pipe, the pressurizer and the steam generator were built and verified. Sensitivity analyses were carried out to study the influences of the design variables on the objectives. A modified non-dominated sorting genetic algorithm was proposed and employed to optimize the weight and the volume of the reactor coolant system. Results: The results show that the component mathematical models are reliable, the modified non-dominated sorting genetic algorithm is effective, and the reactor inlet temperature is the most important variable influencing the distribution of the non-dominated solutions. Conclusion: The optimization results could provide a reference for the design of such a reactor coolant system. (authors)
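
    The non-dominated solutions mentioned above can be illustrated with a small Pareto filter for two minimisation objectives. The candidate (weight, volume) pairs below are hypothetical, and a full NSGA-style algorithm would add ranking and crowding on top of this filter.

      import numpy as np

      def non_dominated(points):
          # Return the non-dominated subset for minimisation of all objectives.
          # points: array of shape (n, m), one row per candidate design.
          points = np.asarray(points, dtype=float)
          keep = []
          for i, p in enumerate(points):
              dominated = any(
                  j != i and np.all(q <= p) and np.any(q < p)
                  for j, q in enumerate(points)
              )
              if not dominated:
                  keep.append(i)
          return points[keep]

      # Hypothetical (weight [t], volume [m^3]) pairs for candidate designs.
      designs = [(310, 95), (295, 102), (330, 88), (305, 99), (340, 120)]
      print(non_dominated(designs))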

  14. Multi-objective calibration of a reservoir model: aggregation and non-dominated sorting approaches

    Science.gov (United States)

    Huang, Y.

    2012-12-01

    Numerical reservoir models can be helpful tools for water resource management. These models are generally calibrated against historical measurement data made in reservoirs. In this study, two methods are proposed for the multi-objective calibration of such models: aggregation and non-dominated sorting methods. Both methods use a hybrid genetic algorithm as an optimization engine and differ in fitness assignment. In the aggregation method, a weighted sum of scaled simulation errors is designed as an overall objective function to measure the fitness of solutions (i.e. parameter values). The contribution of this study to the aggregation method is the correlation analysis and its implication for the choice of weight factors. In the non-dominated sorting method, a novel method based on non-dominated sorting and the method of minimal distance is used to calculate the dummy fitness of solutions. The proposed methods are illustrated using a water quality model that was set up to simulate the water quality of Pepacton Reservoir, which is located to the north of New York City and is used for the city's water supply. The study also compares the aggregation and the non-dominated sorting methods. The purpose of this comparison is not to evaluate the pros and cons of the two methods but to determine whether the parameter values, objective function values (simulation errors) and simulated results obtained differ significantly from each other. The final results (objective function values) from the two methods are a good compromise among all objective functions, and none of these results are the worst for any objective function. The calibrated model provides an overall good performance and the simulated results with the calibrated parameter values match the observed data better than those with the un-calibrated parameters, which supports and justifies the use of multi-objective calibration. The results achieved in this study can be very useful for the calibration of water
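
    The aggregation method can be sketched as a weighted sum of scaled simulation errors. The objective names, scales and weights below are placeholders chosen for illustration, not the values used for the Pepacton Reservoir model.

      # Aggregation-style fitness: a weighted sum of scaled simulation errors,
      # one term per calibration objective (lower is better).
      def aggregate_fitness(errors, scales, weights):
          # errors/scales/weights: dicts keyed by objective name
          return sum(weights[k] * errors[k] / scales[k] for k in errors)

      errors = {"temperature_rmse": 0.8, "chlorophyll_rmse": 2.4, "do_rmse": 1.1}
      scales = {"temperature_rmse": 1.5, "chlorophyll_rmse": 5.0, "do_rmse": 2.0}
      weights = {"temperature_rmse": 0.4, "chlorophyll_rmse": 0.3, "do_rmse": 0.3}
      print(aggregate_fitness(errors, scales, weights))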

  15. Modelling temporal variance of component temperatures and directional anisotropy over vegetated canopy

    Science.gov (United States)

    Bian, Zunjian; Du, Yongming; Li, Hua

    2016-04-01

    Land surface temperature (LST) is a key variable that plays an important role in hydrological, meteorological and climatological studies. Thermal infrared directional anisotropy is one of the essential factors in LST retrieval and in its application to longwave radiance estimation. Many approaches have been proposed to estimate directional brightness temperatures (DBT) over natural and urban surfaces, while fewer efforts have focused on 3-D scenes, and the surface component temperatures used in DBT models are quite difficult to acquire. Therefore, a model combining the TRGM (Thermal-region Radiosity-Graphics combined Model) with an energy balance method is proposed in this paper in an attempt to simulate component temperatures and DBT synchronously in a row-planted canopy. The surface thermodynamic equilibrium is finally determined by the iteration strategy of the TRGM and the energy balance method. The combined model was validated against top-of-canopy DBTs from airborne observations. The results indicate that the proposed model performs well in simulating directional anisotropy, especially the hotspot effect. Although we find that the model overestimates the DBT with a bias of 1.2 K, it can serve as a data reference for studying the temporal variance of component temperatures and DBTs when field measurements are inaccessible.

  16. Composition-Based Prediction of Temperature-Dependent Thermophysical Food Properties: Reevaluating Component Groups and Prediction Models.

    Science.gov (United States)

    Phinney, David Martin; Frelka, John C; Heldman, Dennis Ray

    2017-01-01

    Prediction of temperature-dependent thermophysical properties (thermal conductivity, density, specific heat, and thermal diffusivity) is an important component of process design for food manufacturing. Current models for prediction of thermophysical properties of foods are based on the composition, specifically, fat, carbohydrate, protein, fiber, water, and ash contents, all of which change with temperature. The objectives of this investigation were to reevaluate and improve the prediction expressions for thermophysical properties. Previously published data were analyzed over the temperature range from 10 to 150 °C. These data were analyzed to create a series of relationships between the thermophysical properties and temperature for each food component, as well as to identify the dependence of the thermophysical properties on more specific structural properties of the fats, carbohydrates, and proteins. Results from this investigation revealed that the relationships between the thermophysical properties of the major constituents of foods and temperature can be statistically described by linear expressions, in contrast to the current polynomial models. Links between variability in thermophysical properties and structural properties were observed. Relationships for several thermophysical properties based on more specific constituents have been identified. Distinctions between simple sugars (fructose, glucose, and lactose) and complex carbohydrates (starch, pectin, and cellulose) have been proposed. The relationships between the thermophysical properties and proteins revealed a potential correlation with the molecular weight of the protein. The significance of relating variability in constituent thermophysical properties with structural properties--such as molecular mass--could significantly improve composition-based prediction models and, consequently, the effectiveness of process design. © 2016 Institute of Food Technologists®.
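
    A composition-based prediction of this kind reduces to weighting per-component property expressions, linear in temperature, by the constituent fractions. The sketch below does this for specific heat; the coefficients are placeholders for illustration, not the regression results of the study.

      # Placeholder linear coefficients cp(T) ~ a + b*T per component,
      # in kJ/(kg K) with T in deg C (illustrative values only).
      LINEAR_COEFFS = {
          "water":        (4.18, 0.0000),
          "protein":      (2.01, 0.0012),
          "fat":          (1.98, 0.0014),
          "carbohydrate": (1.55, 0.0019),
          "ash":          (1.09, 0.0018),
      }

      def specific_heat(mass_fractions, temp_c):
          # Mass-fraction weighted specific heat at temp_c [deg C], kJ/(kg K)
          total = 0.0
          for comp, x in mass_fractions.items():
              a, b = LINEAR_COEFFS[comp]
              total += x * (a + b * temp_c)
          return total

      milk_like = {"water": 0.88, "protein": 0.03, "fat": 0.04,
                   "carbohydrate": 0.04, "ash": 0.01}
      print(round(specific_heat(milk_like, 25.0), 3))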

  17. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    Directory of Open Access Journals (Sweden)

    Yanning Zhang

    2015-04-01

    Full Text Available With the wide development of UAV (Unmanned Aerial Vehicle) technology, moving target detection for aerial video has become a popular research topic in the computer vision field. Most of the existing methods work under a registration-detection framework and can only deal with simple background scenes. They tend to fail in complex scenes with multiple backgrounds, such as viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive the complex scene accurately by automatically estimating multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each block with a large area and merge the consistent models. Finally, we calculate the degree of subordination to the multiple background models pixel by pixel for all small-area blocks. Moving objects are segmented by means of an energy optimization method solved via graph cuts. Extensive experimental results on public aerial videos show that, owing to the multiple background model estimation and the analysis of each pixel's subordinate relationship to these models by energy minimization, our method can effectively remove buildings, trees and other false alarms and detect moving objects correctly.
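
    The per-block affine motion model at the heart of the multi-background estimation can be sketched as a least-squares fit to flow correspondences, with the per-pixel residual serving as a simplified subordination measure. The point pairs below are made up for illustration.

      import numpy as np

      def fit_affine(src_pts, dst_pts):
          # Least-squares affine motion model for one colour block:
          # [x', y'] = A @ [x, y] + t, estimated from matched/flow point pairs.
          src = np.asarray(src_pts, float)
          dst = np.asarray(dst_pts, float)
          X = np.hstack([src, np.ones((len(src), 1))])          # (n, 3)
          params, *_ = np.linalg.lstsq(X, dst, rcond=None)      # (3, 2)
          A, t = params[:2].T, params[2]
          return A, t

      def residual(A, t, src_pts, dst_pts):
          # Per-point fit error, usable as a subordination degree to this model.
          pred = np.asarray(src_pts, float) @ A.T + t
          return np.linalg.norm(pred - np.asarray(dst_pts, float), axis=1)

      src = [(10, 10), (50, 12), (30, 40), (70, 65)]
      dst = [(12, 11), (52, 13), (32, 41), (72, 66)]   # pure translation by (2, 1)
      A, t = fit_affine(src, dst)
      print(np.round(A, 3), np.round(t, 3), np.round(residual(A, t, src, dst), 3))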

  18. Tool Integration: Experiences and Issues in Using XMI and Component Technology

    DEFF Research Database (Denmark)

    Damm, Christian Heide; Hansen, Klaus Marius; Thomsen, Michael

    2000-01-01

    of conflicting data models, and provide an architecture for doing so, based on component technology and XML Metadata Interchange. As an example, we discuss the implementation of an electronic whiteboard tool, Knight, which adds support for creative and collaborative object-oriented modeling to existing Computer-Aided Software Engineering through integration using our proposed architecture.

  19. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    2016-01-05

    component dark matter model with real singlet scalars confronting GeV γ-ray excess from galactic centre and Fermi bubble. Debasish Majumdar; Kamakshya Prasad Modak; Subhendu Rakshit. Special: Cosmology Volume 86 Issue ...

  20. Effects of different components of Mao Dongqing's total flavonoids and total saponins on transient ischemic attack (TIA) model of rats.

    Science.gov (United States)

    Miao, Ming-San; Peng, Meng-Fan; Ma, Rui-Juan; Bai, Ming; Liu, Bao-Song

    2018-03-01

    Objective: To study the effects of different components of the total flavonoids and total saponins from Mao Dongqing's active site on rats of the TIA model, and to determine the optimal ratio of Mao Dongqing's reactive components for rats with TIA. Methods: The TIA rat model was induced by tail vein injection of tert-butyl alcohol; the blank group was injected with the same amount of physiological saline, and then the behavioral score was evaluated. The level of glutamic acid in serum and the activities of the Na+-K+-ATP enzyme, Ca++-ATP enzyme and Mg++-ATP enzyme in brain tissue were determined, the changes of the hippocampus in brain tissue were observed, and the comprehensive weight method was finally used to evaluate the efficacy of each component. Results: The total flavonoids and total saponins in the active part of Mao Dongqing can significantly improve the pathological changes of brain tissue in rats, improve the activities of the Na+-K+-ATP enzyme, Ca++-ATP enzyme and Mg++-ATP enzyme in the brain of rats, and reduce the level of glutamic acid in serum. The effect was most significant at a ratio of 10:6. The different proportions of total flavonoids and total saponins in the active part of Mao Dongqing all have a beneficial effect on rats with TIA, and the 10:6 ratio is the best active component ratio for preventing and controlling TIA.

  1. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    International Nuclear Information System (INIS)

    Zhou, Z; Folkert, M; Wang, J

    2016-01-01

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00% and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
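
    A toy version of the utility-based selection step (not the evidential reasoning calculus itself): each Pareto solution is scored with weights and a simple selection rule, and the highest-scoring solution is returned. The weights, the rule and the numbers are assumptions made for illustration.

      # Hypothetical Pareto set of (sensitivity, specificity) trade-offs.
      pareto = [
          {"sensitivity": 0.92, "specificity": 0.61},
          {"sensitivity": 0.85, "specificity": 0.74},
          {"sensitivity": 0.78, "specificity": 0.83},
      ]

      def utility(sol, w_sens=0.5, w_spec=0.5, min_sens=0.80):
          # Weighted utility with a simple rule penalising low sensitivity.
          score = w_sens * sol["sensitivity"] + w_spec * sol["specificity"]
          return score if sol["sensitivity"] >= min_sens else score - 0.2

      best = max(pareto, key=utility)
      print(best)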

  2. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z; Folkert, M; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00% and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  3. Component and system simulation models for High Flux Isotope Reactor

    International Nuclear Information System (INIS)

    Sozer, A.

    1989-08-01

    Component models for the High Flux Isotope Reactor (HFIR) have been developed. The models are HFIR core, heat exchangers, pressurizer pumps, circulation pumps, letdown valves, primary head tank, generic transport delay (pipes), system pressure, loop pressure-flow balance, and decay heat. The models were written in FORTRAN and can be run on different computers, including IBM PCs, as they do not use any specific simulation languages such as ACSL or CSMP. 14 refs., 13 figs

  4. Multi-objective reliability redundancy allocation in an interval environment using particle swarm optimization

    International Nuclear Information System (INIS)

    Zhang, Enze; Chen, Qingwei

    2016-01-01

    Most of the existing works addressing reliability redundancy allocation problems are based on the assumption of fixed reliabilities of components. In real-life situations, however, the reliabilities of individual components may be imprecise, most often given as intervals, under different operating or environmental conditions. This paper deals with reliability redundancy allocation problems modeled in an interval environment. An interval multi-objective optimization problem is formulated from the original crisp one, where system reliability and cost are simultaneously considered. To render the multi-objective particle swarm optimization (MOPSO) algorithm capable of dealing with interval multi-objective optimization problems, a dominance relation for interval-valued functions is defined with the help of our newly proposed order relations of interval-valued numbers. Then, the crowding distance is extended to the multi-objective interval-valued case. Finally, the effectiveness of the proposed approach has been demonstrated through two numerical examples and a case study of supervisory control and data acquisition (SCADA) system in water resource management. - Highlights: • We model the reliability redundancy allocation problem in an interval environment. • We apply the particle swarm optimization directly on the interval values. • A dominance relation for interval-valued multi-objective functions is defined. • The crowding distance metric is extended to handle imprecise objective functions.
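
    One way to sketch the interval dominance idea is to compare interval objectives by centre and width and then apply the usual Pareto dominance test. The particular order relation below is a common choice and is an assumption here, not necessarily the exact relation defined in the paper.

      # Dominance check for interval-valued objectives (maximisation).
      def interval_geq(a, b):
          # a, b: (lower, upper). Prefer larger centre; break ties by smaller width.
          ac, bc = (a[0] + a[1]) / 2, (b[0] + b[1]) / 2
          aw, bw = a[1] - a[0], b[1] - b[0]
          return (ac, -aw) >= (bc, -bw)

      def dominates(sol_a, sol_b):
          # sol_a dominates sol_b if it is >= in every interval objective
          # and strictly better in at least one.
          geq_all = all(interval_geq(x, y) for x, y in zip(sol_a, sol_b))
          gt_any = any(interval_geq(x, y) and not interval_geq(y, x)
                       for x, y in zip(sol_a, sol_b))
          return geq_all and gt_any

      a = [(0.92, 0.96), (0.40, 0.50)]   # hypothetical interval objectives
      b = [(0.90, 0.95), (0.35, 0.55)]
      print(dominates(a, b))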

  5. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Directory of Open Access Journals (Sweden)

    Marko Budinich

    Full Text Available Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Following models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.

  6. Modeling and numerical simulation of multi-component flow in porous media

    International Nuclear Information System (INIS)

    Saad, B.

    2011-01-01

    This work deals with the modelling and numerical simulation of two-phase multi-component flow in porous media. The study is divided into two parts. First, we study and prove the mathematical existence, in a weak sense, of two degenerate parabolic systems modeling two-phase (liquid and gas), two-component (water and hydrogen) flow in porous media. In the first model, we assume that there is a local thermodynamic equilibrium between both phases of hydrogen by using Henry's law. The second model consists of a relaxation of the previous model: the kinetics of the mass exchange between dissolved hydrogen and hydrogen in the gas phase is no longer instantaneous. The second part is devoted to the numerical analysis of those models. Firstly, we propose a numerical scheme to compare numerical solutions obtained with the first model and numerical solutions obtained with the second model where the characteristic time to recover the thermodynamic equilibrium goes to zero. Secondly, we present a finite volume scheme with a phase-by-phase upstream weighting scheme without simplified assumptions on the state law of gas densities. We also validate this scheme on 2D test cases. (author)

  7. Software Engineering Environment for Component-based Design of Embedded Software

    DEFF Research Database (Denmark)

    Guo, Yu

    2010-01-01

    as well as application models in a computer-aided software engineering environment. Furthermore, component models have been realized following carefully developed design patterns, which provide for an efficient and reusable implementation. The components have been ultimately implemented as prefabricated executable objects that can be linked together into an executable application. The development of embedded software using the COMDES framework is supported by the associated integrated engineering environment consisting of a number of tools, which support basic functionalities, such as system modelling, validation, and executable code generation for specific hardware platforms. Developing such an environment and the associated tools is a highly complex engineering task. Therefore, this thesis has investigated key design issues and analysed existing platforms supporting model-driven software development...

  8. Replaceable LMFBR core components

    International Nuclear Information System (INIS)

    Evans, E.A.; Cunningham, G.W.

    1976-01-01

    Much progress has been made in understanding material and component performance in the high temperature, fast neutron environment of the LMFBR. Current data have provided strong assurance that the initial core component lifetime objectives of FFTF and CRBR can be met. At the same time, this knowledge translates directly into the need for improved core designs that utilize improved materials and advanced fuels required to meet objectives of low doubling times and extended core component lifetimes. An industrial base for the manufacture of quality core components has been developed in the US, and all procurements for the first two core equivalents for FFTF will be completed this year. However, the problem of fabricating recycled plutonium while dramatically reducing fabrication costs, minimizing personnel exposure, and protecting public health and safety must be addressed

  9. USE OF IMAGE BASED MODELLING FOR DOCUMENTATION OF INTRICATELY SHAPED OBJECTS

    Directory of Open Access Journals (Sweden)

    M. Marčiš

    2016-06-01

    Full Text Available In the documentation of cultural heritage, we can encounter three-dimensional shapes and structures which are complicated to measure. Such objects are, for example, spiral staircases, timber roof trusses, historical furniture or folk costume, where it is nearly impossible to effectively use traditional surveying or terrestrial laser scanning due to the shape of the object, its dimensions and the crowded environment. Current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on the automated processing of the extensive image data. The created high-resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements and they can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes the various uses of image-based modelling for specific interior spaces and specific objects. The advantages and disadvantages of the photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.

  10. Use of Image Based Modelling for Documentation of Intricately Shaped Objects

    Science.gov (United States)

    Marčiš, M.; Barták, P.; Valaška, D.; Fraštia, M.; Trhan, O.

    2016-06-01

    In the documentation of cultural heritage, we can encounter three-dimensional shapes and structures which are complicated to measure. Such objects are, for example, spiral staircases, timber roof trusses, historical furniture or folk costume, where it is nearly impossible to effectively use traditional surveying or terrestrial laser scanning due to the shape of the object, its dimensions and the crowded environment. Current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on the automated processing of the extensive image data. The created high-resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements and they can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes the various uses of image-based modelling for specific interior spaces and specific objects. The advantages and disadvantages of the photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.

  11. Do Knowledge-Component Models Need to Incorporate Representational Competencies?

    Science.gov (United States)

    Rau, Martina Angela

    2017-01-01

    Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…

  12. A multiple objective mixed integer linear programming model for power generation expansion planning

    Energy Technology Data Exchange (ETDEWEB)

    Antunes, C. Henggeler; Martins, A. Gomes [INESC-Coimbra, Coimbra (Portugal); Universidade de Coimbra, Dept. de Engenharia Electrotecnica, Coimbra (Portugal); Brito, Isabel Sofia [Instituto Politecnico de Beja, Escola Superior de Tecnologia e Gestao, Beja (Portugal)

    2004-03-01

    Power generation expansion planning inherently involves multiple, conflicting and incommensurate objectives. Therefore, mathematical models become more realistic if distinct evaluation aspects, such as cost and environmental concerns, are explicitly considered as objective functions rather than being encompassed by a single economic indicator. With the aid of multiple objective models, decision makers may grasp the conflicting nature and the trade-offs among the different objectives in order to select satisfactory compromise solutions. This paper presents a multiple objective mixed integer linear programming model for power generation expansion planning that allows the consideration of modular expansion capacity values of supply-side options. This characteristic of the model avoids the well-known problem associated with continuous capacity values that usually have to be discretized in a post-processing phase without feedback on the nature and importance of the changes in the attributes of the obtained solutions. Demand-side management (DSM) is also considered an option in the planning process, assuming there is a sufficiently large portion of the market under franchise conditions. As DSM full costs are accounted for in the model, including lost revenues, it is possible to perform an evaluation of the rate impact in order to further inform the decision process. (Author)

  13. Applications of an OO (Objected Oriented) methodology and case to a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    The RD13 project has evaluated the use of the Object Oriented Information Engineering (OOIE) method during the development of several software components connected to the DAQ system. The method is supported by a sophisticated commercial CASE tool (Object Management Workbench) and programming environment (Kappa), which cover the full life-cycle of the software, including model simulation, code generation and application deployment. This paper gives an overview of the method, the CASE tool and the DAQ components which have been developed, and we relate our experiences with the method and tool, its integration into our development environment and the spiral life cycle it supports. (author)

  14. Multi-Trait analysis of growth traits: fitting reduced rank models using principal components for Simmental beef cattle

    Directory of Open Access Journals (Sweden)

    Rodrigo Reis Mota

    2016-09-01

    Full Text Available ABSTRACT: The aim of this research was to evaluate the dimensional reduction of additive direct genetic covariance matrices in genetic evaluations of growth traits (range 100-730 days) in Simmental cattle using principal components, as well as to estimate (co)variance components and genetic parameters. Principal component analyses were conducted for five different models: one full and four reduced-rank models. Models were compared using Akaike information (AIC) and Bayesian information (BIC) criteria. Variance components and genetic parameters were estimated by restricted maximum likelihood (REML). The AIC and BIC values were similar among models. This indicated that parsimonious models could be used in genetic evaluations in Simmental cattle. The first principal component explained more than 96% of total variance in both models. Heritability estimates were higher for advanced ages and varied from 0.05 (100 days) to 0.30 (730 days). Genetic correlation estimates were similar in both models regardless of magnitude and number of principal components. The first principal component was sufficient to explain almost all genetic variance. Furthermore, genetic parameter similarities and lower computational requirements allowed for parsimonious models in genetic evaluations of growth traits in Simmental cattle.

  15. Creation of integrated information model of 'Ukryttia' object premises condition to support the works

    International Nuclear Information System (INIS)

    Postil, S.D.; Ermolenko, A.I.; Ivanov, V.V.; Kotlyarov, V.T.

    2002-01-01

    A technology for the creation of an integrated information model of the conditions of the 'Ukryttia' Object premises was developed on the basis of the geoinformation system AutoCAD, DB Access and the instrumental utility 3D MAX. Information models and a database for the conditions of the 'Ukryttia' Object premises located between the 0.000 and 67.000 marks, in axes 41-52, rows G-T, were created. Using the integrated information model of the 'Ukryttia' Object premises conditions, the 3D surface distribution of the radiation field in the object premises at level 0.000 was obtained. It is revealed that the maximum values of the radiation field are concentrated over the clusters of fuel-containing materials.

  16. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    Science.gov (United States)

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since the usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture gets degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimation is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur and jerkiness), in contrast to the existing bitrate-only calculation defined in the ITU G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.
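
    The final mapping can be sketched as a weighted combination of the measured impairments into a MOS-like score. The weights and the linear mapping below are assumptions for illustration, not the fitted model of the paper.

      # No-reference quality score as a weighted combination of impairments.
      def perceptual_quality(blockiness, blur, jerkiness,
                             w=(0.45, 0.35, 0.20), mos_range=(1.0, 5.0)):
          # Impairments are expected on a 0 (none) .. 1 (severe) scale.
          impairment = w[0] * blockiness + w[1] * blur + w[2] * jerkiness
          lo, hi = mos_range
          return hi - impairment * (hi - lo)   # 5 = excellent, 1 = bad

      print(perceptual_quality(blockiness=0.2, blur=0.1, jerkiness=0.05))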

  17. Multi-object segmentation framework using deformable models for medical imaging analysis.

    Science.gov (United States)

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures in medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing to select a suitable combination in different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed

  18. Analyzing and designing object-oriented missile simulations with concurrency

    Science.gov (United States)

    Randorf, Jeffrey Allen

    2000-11-01

    A software object model for the six degree-of-freedom missile modeling domain is presented. As a precursor, a domain analysis of the missile modeling domain was started, based on the Feature-Oriented Domain Analysis (FODA) technique described by the Software Engineering Institute (SEI). It was subsequently determined that the FODA methodology is functionally equivalent to the Object Modeling Technique. The analysis used legacy software documentation and code from the ENDOSIM, KDEC, and TFrames 6-DOF modeling tools, along with other technical literature. The SEI Object Connection Architecture (OCA) was the template for designing the object model. Three variants of the OCA were considered: a reference structure, a recursive structure, and a reference structure with augmentation for flight vehicle modeling. The reference OCA design option was chosen for maintaining simplicity while not compromising the expressive power of the OMT model. The missile architecture was then analyzed for potential areas of concurrent computing. It was shown how protected objects could be used for data passing between OCA object managers, allowing concurrent access without changing the OCA reference design intent or structure. The implementation language was the 1995 release of Ada. It was shown how OCA software components can be expressed as Ada child packages. While acceleration of several low-level and higher-level operations is possible on appropriate hardware, there was a 33% degradation of fourth-order Runge-Kutta integrator performance for two simultaneous ordinary differential equations using Ada tasking on a single-processor machine. The Defense Department's High Level Architecture (HLA) was introduced and explained in context with the OCA. It was shown that the HLA and OCA are not mutually exclusive architectures, but complementary: HLA was shown as an interoperability solution, with the OCA as an architectural vehicle for software reuse. Further directions for implementing a 6-DOF missile modeling
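
    For reference, the integrator whose tasking overhead is reported above is the classic fourth-order Runge-Kutta scheme; a plain single-threaded sketch for two coupled ordinary differential equations is shown below (in Python rather than Ada, for illustration only).

      # Classic 4th-order Runge-Kutta step for a small ODE system.
      def rk4_step(f, t, y, h):
          k1 = f(t, y)
          k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
          k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
          k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
          return [yi + h / 6 * (a + 2 * b + 2 * c + d)
                  for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

      # Two coupled ODEs: a simple harmonic oscillator x'' = -x written as a system.
      f = lambda t, y: [y[1], -y[0]]
      y, t, h = [1.0, 0.0], 0.0, 0.01
      for _ in range(100):
          y = rk4_step(f, t, y, h)
          t += h
      print(y)   # close to [cos(1), -sin(1)]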

  19. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their reliability might be subjective according to a particular set of circumstances, and therefore it is not easy to quantify their reliability. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted most easily, built around the statistical analysis method of reliability prediction. The failure rate models that were applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with a general-purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the system that we have developed is expected to contribute to enhancing the reliability of electronic components.
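
    A statistical, parts-count style failure rate prediction of the kind such systems automate can be sketched as a sum of base failure rates scaled by quality and environment factors. The part list and factor values below are placeholders, not values from MIL-HDBK-217F, PRISM or Telcordia.

      # Parts-count style system failure rate for a series system of parts
      # (base rates in failures per million hours; factors are illustrative).
      parts = [
          {"name": "resistor",  "lambda_b": 0.002, "pi_q": 1.0, "pi_e": 2.0, "qty": 40},
          {"name": "capacitor", "lambda_b": 0.004, "pi_q": 1.5, "pi_e": 2.0, "qty": 25},
          {"name": "ic",        "lambda_b": 0.020, "pi_q": 2.0, "pi_e": 4.0, "qty": 6},
      ]

      def system_failure_rate(parts):
          # Sum of quantity * base rate * quality factor * environment factor
          return sum(p["qty"] * p["lambda_b"] * p["pi_q"] * p["pi_e"] for p in parts)

      lam = system_failure_rate(parts)
      print(f"lambda = {lam:.3f} per 1e6 h, MTBF = {1e6 / lam:.0f} h")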

  20. Combining satellite data and appropriate objective functions for improved spatial pattern performance of a distributed hydrologic model

    Directory of Open Access Journals (Sweden)

    M. C. Demirel

    2018-02-01

    Full Text Available Satellite-based earth observations offer great opportunities to improve spatial model predictions by means of spatial-pattern-oriented model evaluations. In this study, observed spatial patterns of actual evapotranspiration (AET) are utilised for spatial model calibration tailored to target the pattern performance of the model. The proposed calibration framework combines temporally aggregated observed spatial patterns with a new spatial performance metric and a flexible spatial parameterisation scheme. The mesoscale hydrologic model (mHM) is used to simulate streamflow and AET and has been selected due to its soil parameter distribution approach based on pedo-transfer functions and its built-in multi-scale parameter regionalisation. In addition, two new spatial parameter distribution options have been incorporated in the model in order to increase the flexibility of the root fraction coefficient and potential evapotranspiration correction parameterisations, based on soil type and vegetation density. These parameterisations are utilised as they are the most relevant for the simulated AET patterns from the hydrologic model. Due to the fundamental challenges encountered when evaluating spatial pattern performance using standard metrics, we developed a simple but highly discriminative spatial metric, i.e. one comprised of three easily interpretable components measuring co-location, variation and distribution of the spatial data. The study shows that with the flexible spatial model parameterisation used in combination with the appropriate objective functions, the simulated spatial patterns of actual evapotranspiration become substantially more similar to the satellite-based estimates. Overall, 26 parameters are identified for calibration through a sequential screening approach based on a combination of streamflow and spatial pattern metrics. The robustness of the calibrations is tested using an ensemble of nine calibrations based on different seed numbers using the
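
    The three-component spatial metric can be sketched as follows: co-location via Pearson correlation, variation via the ratio of coefficients of variation, and distribution via histogram overlap of z-scored fields, combined as a distance from the ideal point. This follows the spirit of the description above; the exact combination used here is an assumption.

      import numpy as np

      def spatial_pattern_metric(sim, obs, bins=20):
          # Three-component spatial performance metric (sketch).
          sim, obs = np.ravel(sim), np.ravel(obs)
          r = np.corrcoef(sim, obs)[0, 1]                                   # co-location
          cv = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))  # variation
          zs = (sim - sim.mean()) / sim.std()                               # distribution:
          zo = (obs - obs.mean()) / obs.std()                               # histogram overlap
          lo, hi = min(zs.min(), zo.min()), max(zs.max(), zo.max())
          hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
          ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
          overlap = np.minimum(hs, ho).sum() / ho.sum()
          # Combine as distance from the ideal point (1, 1, 1); 1 is a perfect match.
          return 1 - np.sqrt((r - 1) ** 2 + (cv - 1) ** 2 + (overlap - 1) ** 2)

      rng = np.random.default_rng(1)
      obs = rng.gamma(3.0, 1.0, size=(50, 50))          # synthetic "observed" AET field
      sim = obs + rng.normal(0, 0.3, size=obs.shape)    # synthetic "simulated" field
      print(round(spatial_pattern_metric(sim, obs), 3))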

  1. Component-oriented approach to the development and use of numerical models in high energy physics

    International Nuclear Information System (INIS)

    Amelin, N.S.; Komogorov, M.Eh.

    2002-01-01

    We discuss the main concepts of a component approach to the development and use of numerical models in high energy physics. This approach is realized as the NiMax software system. The concepts discussed are illustrated by numerous examples from the system user session. In the appendix chapter we describe the physics and numerical algorithms of the model components used to perform simulation of hadronic and nuclear collisions at high energies. These components are members of hadronic application modules that have been developed with the help of the NiMax system. The given report serves as an early release of the NiMax manual, mainly for model component users.

  2. Models for describing the thermal characteristics of building components

    DEFF Research Database (Denmark)

    Jimenez, M.J.; Madsen, Henrik

    2008-01-01

    , for example. For the analysis of these tests, dynamic analysis models and methods are required. However, a wide variety of models and methods exists, and the problem of choosing the most appropriate approach for each particular case is a non-trivial and interdisciplinary task. Knowledge of a large family of these approaches may therefore be very useful for selecting a suitable approach for each particular case. This paper presents an overview of models that can be applied for modelling the thermal characteristics of buildings and building components using data from outdoor testing. The choice of approach depends... The characteristics of each type of model are highlighted. Some available software tools for each of the methods described will be mentioned. A case study also demonstrating the difference between linear and nonlinear models is considered.

  3. Genetic and Psychosocial Predictors of Aggression: Variable Selection and Model Building With Component-Wise Gradient Boosting

    Directory of Open Access Journals (Sweden)

    Robert Suchting

    2018-05-01

    Full Text Available Rationale: Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking prediction of aggressive behavior. Objectives: The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli, but have not been tested as predictors of aggressive behavior in adults. Methods: The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to: (a) select variables from an initial set of 20 to build a model of trait aggression; and then (b) reduce that model to maximize parsimony and generalizability. Results: From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model Buss-Perry Aggression Questionnaire (BPAQ) total score, with R2 = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model approximated the initial eight-factor model at 99.4% of R2. Conclusions: Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work in aggression, specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits generality of results and calls for

  4. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    Science.gov (United States)

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
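
    The log-euclidean idea of mapping covariance descriptors into a vector space can be sketched with an eigendecomposition-based matrix logarithm; averaging in that space and mapping back gives a log-euclidean mean. The 2x2 descriptors below are made up for illustration and are much smaller than the feature covariances used in practice.

      import numpy as np

      def sym_logm(c):
          # Matrix logarithm of a symmetric positive definite matrix.
          w, v = np.linalg.eigh(c)
          return (v * np.log(w)) @ v.T

      def sym_expm(s):
          # Matrix exponential of a symmetric matrix.
          w, v = np.linalg.eigh(s)
          return (v * np.exp(w)) @ v.T

      def log_euclidean_mean(cov_list):
          # Map into the matrix-log vector space, average there, and map back.
          return sym_expm(np.mean([sym_logm(c) for c in cov_list], axis=0))

      # Two illustrative 2x2 covariance descriptors of an object's appearance features.
      c1 = np.array([[2.0, 0.3], [0.3, 1.0]])
      c2 = np.array([[1.5, -0.2], [-0.2, 0.8]])
      print(np.round(log_euclidean_mean([c1, c2]), 3))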

  5. Aggregate meta-models for evolutionary multiobjective and many-objective optimization

    Czech Academy of Sciences Publication Activity Database

    Pilát, Martin; Neruda, Roman

    Roč. 116, 20 September (2013), s. 392-402 ISSN 0925-2312 R&D Projects: GA ČR GAP202/11/1368 Institutional support: RVO:67985807 Keywords : evolutionary algorithms * multiobjective optimization * many-objective optimization * surrogate models * meta-models * memetic algorithm Subject RIV: IN - Informatics, Computer Science Impact factor: 2.005, year: 2013

  6. Object as a model of intelligent robot in the virtual workspace

    Science.gov (United States)

    Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.

    2015-11-01

    Contemporary industry requires that every element of a production line fit into the global schema, which is connected with the global structure of the business. There is a need to find practical and effective ways of designing and managing the production process. The term “effective” should be understood to mean that there exists a method which allows building a system of nodes and relations in order to describe the role of a particular machine in the production process. Among all the machines involved in the manufacturing process, industrial robots are the most complex ones. This complexity is reflected in the realization of elaborate tasks, involving handling, transporting or orienting objects in a work space, and even performing simple machining processes, such as deburring, grinding, painting, applying adhesives and sealants, etc. The robot also performs some activities connected with automatic tool changing and operating the equipment mounted on its wrist. Because it has a programmable control system, the robot also performs additional activities connected with sensors, vision systems, operating the storages of manipulated objects, tools or grippers, measuring stands, etc. For this reason the description of the robot as a part of a production system should take into account the specific nature of this machine: the robot is a substitute for a worker who performs his tasks in a particular environment. In this case, the model should be able to characterize the essence of "employment" in a sufficient way. One of the possible approaches to this problem is to treat the robot as an object, in the sense often used in computer science. This allows both describing certain operations performed on the object and describing the operations performed by the object. This paper focuses mainly on the definition of the object as the model of the robot. This model is confronted with the other possible descriptions. The

  7. Object as a model of intelligent robot in the virtual workspace

    International Nuclear Information System (INIS)

    Foit, K; Gwiazda, A; Banas, W; Sekala, A; Hryniewicz, P

    2015-01-01

    Contemporary industry requires that every element of a production line fit into the global schema, which is connected with the global structure of the business. There is a need to find practical and effective ways of designing and managing the production process. The term “effective” should be understood to mean that there exists a method which allows building a system of nodes and relations in order to describe the role of a particular machine in the production process. Among all the machines involved in the manufacturing process, industrial robots are the most complex ones. This complexity is reflected in the realization of elaborate tasks, involving handling, transporting or orienting objects in a work space, and even performing simple machining processes, such as deburring, grinding, painting, applying adhesives and sealants, etc. The robot also performs some activities connected with automatic tool changing and operating the equipment mounted on its wrist. Because it has a programmable control system, the robot also performs additional activities connected with sensors, vision systems, operating the storages of manipulated objects, tools or grippers, measuring stands, etc. For this reason the description of the robot as a part of a production system should take into account the specific nature of this machine: the robot is a substitute for a worker who performs his tasks in a particular environment. In this case, the model should be able to characterize the essence of 'employment' in a sufficient way. One of the possible approaches to this problem is to treat the robot as an object, in the sense often used in computer science. This allows both describing certain operations performed on the object and describing the operations performed by the object. This paper focuses mainly on the definition of the object as the model of the robot. This model is confronted with the other possible

  8. A comparative study of the proposed models for the components of the national health information system.

    Science.gov (United States)

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, using the National Health Information System, the quality of health data, information and knowledge used to support decision making at all levels and areas of the health sector can be improved. Since full identification of the components of this system - for better planning and management of the factors influencing its performance - seems necessary, in this study different attitudes towards the components of this system are explored comparatively. This is a descriptive, comparative study. The study material includes printed and electronic documents describing the components of the national health information system in three parts: input, process and output. In this context, searches for information using library resources and the internet were conducted, and the analysis was expressed using comparative tables and qualitative data. The findings showed that there are three different perspectives presenting the components of the national health information system: the Lippeveld, Sauerborn and Bodart model of 2000, the Health Metrics Network (HMN) model from the World Health Organization of 2008, and Gattini's 2009 model. In the input part (resources and structure), all three models outlined above require components of management and leadership, planning and design of programs, supply of staff, and software and hardware facilities and equipment. In the "process" part, all three models point up the actions ensuring the quality of the health information system, and in the output part, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. The results showed that all three models have had a brief discussion about the

  9. Superfluid drag in the two-component Bose-Hubbard model

    Science.gov (United States)

    Sellin, Karl; Babaev, Egor

    2018-03-01

    In multicomponent superfluids and superconductors, co- and counterflows of components have, in general, different properties. A. F. Andreev and E. P. Bashkin [Sov. Phys. JETP 42, 164 (1975)] discussed, in the context of He3/He4 superfluid mixtures, that interparticle interactions produce a dissipationless drag. The drag can be understood as a superflow of one component induced by phase gradients of the other component. Importantly, the drag can be both positive (entrainment) and negative (counterflow). The effect is known to have crucial importance for many properties of diverse physical systems ranging from the dynamics of neutron stars and rotational responses of Bose mixtures of ultracold atoms to magnetic responses of multicomponent superconductors. Although substantial literature exists that includes the drag interaction phenomenologically, only a few regimes are covered by quantitative studies of the microscopic origin of the drag and its dependence on microscopic parameters. Here we study the microscopic origin and strength of the drag interaction in a quantum system of two-component bosons on a lattice with short-range interaction. By performing quantum Monte Carlo simulations of a two-component Bose-Hubbard model we obtain dependencies of the drag strength on the boson-boson interactions and properties of the optical lattice. Of particular interest are the strongly correlated regimes where the ratio of coflow and counterflow superfluid stiffnesses can diverge, corresponding to the case of saturated drag.

  10. A Proposed Model for Assessing Organisational Culture Towards Achieving Business Objectives

    Directory of Open Access Journals (Sweden)

    Hafez Salleh

    2011-09-01

    Full Text Available Most traditional business performance measures are based on productivity and process criteria, which mainly focus on methods of investment appraisal such as the payback method, return on investment (ROI), cost-benefit analysis (CBA), net present value (NPV) and internal rate of return (IRR). However, the measurement scales of business performance are not limited to those measures. One element that has a strong correlation to business performance is ‘organisational culture’. Many studies have shown that one of the significant criteria for achieving desired business objectives is the right organisational culture within the workplace. Basically, the measurement of organisational culture reflects two distinct elements: organisational culture and business objectives. In a broader perspective, an organisation is considered effective if it meets its business objectives. This paper aims to present and discuss a preliminary culture model to indicate culture performance within organisations. The model has been developed through literature review, expert opinion and experience, and is anticipated to be able to measure the culture capability of organisations across industries to “successfully achieve business objectives”. The model is composed of six progressive maturity stages through which an organisation can develop its culture performance. For each maturity stage, the model describes a set of characteristics that must be in place for the company to achieve that stage. The validity of the proposed model will be tested by a few case studies. The idea is to provide managers with a qualitative measurement tool to enable them to identify where culture improvements are required within their organisations and to indicate their readiness for achieving business objectives.

  11. A finite element method based microwave heat transfer modeling of frozen multi-component foods

    Science.gov (United States)

    Pitchai, Krishnamoorthy

    Microwave heating is fast and convenient, but is highly non-uniform. Non-uniform heating in microwave cooking affects not only food quality but also food safety. Most food industries develop microwavable food products based on a "cook-and-look" approach. This approach is time-consuming, labor intensive and expensive, and may not result in optimal food product design that assures food safety and quality. Design of microwavable food can be realized through a simulation model which describes the physical mechanisms of microwave heating in mathematical expressions. The objective of this study was to develop a microwave heat transfer model to predict spatial and temporal temperature profiles of various heterogeneous foods such as a multi-component meal (chicken nuggets and mashed potato), a multi-component and multi-layered meal (lasagna), and a multi-layered food with active packages (pizza) during microwave heating. A microwave heat transfer model was developed by solving electromagnetic and heat transfer equations using the finite element method in commercially available COMSOL Multiphysics v4.4 software. The microwave heat transfer model included detailed geometry of the cavity, phase change, and rotation of the food on the turntable. The predicted spatial surface temperature patterns and temporal profiles were validated against the experimental temperature profiles obtained using a thermal imaging camera and fiber-optic sensors. The predicted spatial surface temperature profile of different multi-component foods was in good agreement with the corresponding experimental profiles in terms of hot and cold spot patterns. The root mean square error values of temporal profiles ranged from 5.8 °C to 26.2 °C in chicken nuggets as compared to 4.3 °C to 4.7 °C in mashed potato. In frozen lasagna, root mean square error values at six locations ranged from 6.6 °C to 20.0 °C for 6 min of heating. A microwave heat transfer model was developed to include susceptor assisted microwave heating of a

  12. Multi-objective component sizing based on optimal energy management strategy of fuel cell electric vehicles

    International Nuclear Information System (INIS)

    Xu, Liangfei; Mueller, Clemens David; Li, Jianqiu; Ouyang, Minggao; Hu, Zunyan

    2015-01-01

    Highlights: • A non-linear model regarding fuel economy and system durability of FCEV. • A two-step algorithm for a quasi-optimal solution to a multi-objective problem. • Optimal parameters for DP algorithm considering accuracy and calculating time. • Influences of FC power and battery capacity on system performance. - Abstract: A typical topology of a proton exchange membrane (PEM) fuel cell electric vehicle contains at least two power sources, a fuel cell system (FCS) and a lithium battery package. The FCS provides stationary power, and the battery delivers dynamic power. In this paper, we report on the multi-objective optimization problem of powertrain parameters for a pre-defined driving cycle regarding fuel economy and system durability. We introduce the dynamic model for the FCEV. We take into consideration equations not only for fuel economy but also for system durability. In addition, we define a multi-objective optimization problem, and find a quasi-optimal solution using a two-loop framework. In the inner loop, for each group of powertrain parameters, a global optimal energy management strategy based on dynamic programming (DP) is exploited. We optimize coefficients for the DP algorithm to reduce calculating time as well as to maintain accuracy. In the outer loop, we compare the results of all the groups with each other, and choose the Pareto optimal solution based on a compromise of fuel economy and system durability. Simulation results show that for a “China city bus typical cycle,” a battery capacity of 150 Ah and an FCS maximal net output power of 40 kW are optimal for the fuel economy and system durability of a fuel cell city bus.
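
    As a rough illustration of the two-loop framework described above (not the authors' code), the sketch below sweeps candidate fuel-cell power and battery capacity values in an outer loop, calls a placeholder for the DP-optimal energy management in the inner loop, and keeps only Pareto-optimal parameter sets; evaluate_cycle() and its toy surrogate numbers are hypothetical stand-ins.

```python
# Hedged sketch of a two-loop component-sizing framework; the inner
# dynamic-programming energy management is stubbed out with a toy surrogate.
from itertools import product

def evaluate_cycle(fc_power_kw, batt_capacity_ah):
    """Placeholder for DP-optimal energy management over one driving cycle."""
    fuel = 0.05 * fc_power_kw + 120.0 / batt_capacity_ah           # kg H2 (toy)
    degradation = 2000.0 / fc_power_kw + 0.01 * batt_capacity_ah   # a.u. (toy)
    return fuel, degradation

def pareto_front(points):
    """Keep parameter sets not dominated in both (minimized) objectives."""
    front = []
    for name, f, d in points:
        dominated = any(f2 <= f and d2 <= d and (f2 < f or d2 < d)
                        for _, f2, d2 in points)
        if not dominated:
            front.append((name, f, d))
    return front

# Outer loop: sweep candidate powertrain parameters.
candidates = []
for fc_kw, cap_ah in product([30, 40, 50, 60], [100, 150, 200]):
    fuel, deg = evaluate_cycle(fc_kw, cap_ah)     # inner-loop (DP) result
    candidates.append((f"FC={fc_kw} kW, Batt={cap_ah} Ah", fuel, deg))

for name, fuel, deg in pareto_front(candidates):
    print(f"{name}: fuel={fuel:.2f}, durability penalty={deg:.2f}")
```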

  13. Fabrication of plastic objects by radiation-induced molding

    International Nuclear Information System (INIS)

    Leszyk, G.M.; Morrison, E.D.; Williams, R.F. Jr.

    1976-01-01

    A process is described for fabricating thin plastic objects. It comprises the following successive operations: a supporting tray is moved into a pouring area; a succession of components of viscous composition in the predetermined shape corresponding to the objects to be produced is poured on to this supporting tray, the viscosity of the composition being such that these distinct components retain their poured shape when they are no longer supported on the supporting tray; the supporting tray bearing the distinct viscous composition components is then moved into a hardening area; the distinct viscous composition components are then irradiated in this hardening area so as to transform them into solid plastic objects. The supporting tray carrying the separate plastic objects, now solid, is withdrawn from the hardening area [fr

  14. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.

    2010-01-01

    model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order

  15. Diffuse Reflectance Spectroscopy of Hidden Objects. Part II: Recovery of a Target Spectrum.

    Science.gov (United States)

    Pomerantsev, Alexey L; Rodionova, Oxana Ye; Skvortsov, Alexej N

    2017-08-01

    In this study, we consider the reconstruction of a diffuse reflectance near-infrared spectrum of an object (target spectrum) in case the object is covered by an interfering absorbing and scattering layer. Recovery is performed using a new empirical method, which was developed in our previous study. We focus on a system, which consists of several layers of polyethylene (PE) film and underlayer objects with different spectral features. The spectral contribution of the interfering layer is modeled by a three-component two-parameter multivariate curve resolution (MCR) model, which was built and calibrated using spectrally flat objects. We show that this model is applicable to real objects with non-uniform spectra. Ultimately, the target spectrum can be reconstructed from a single spectrum of the covered target. With calculation methods, we are able to recover quite accurately the spectrum of a target even when the object is covered by 0.7 mm of PE.

  16. New methods for the characterization of pyrocarbon; The two component model of pyrocarbon

    Energy Technology Data Exchange (ETDEWEB)

    Luhleich, H.; Sutterlin, L.; Hoven, H.; Nickel, H.

    1972-04-19

    In the first part, new experiments to clarify the origin of different pyrocarbon components are described. Three new methods (plasma-oxidation, wet-oxidation, ultrasonic method) are presented to expose the carbon black like component in the pyrocarbon deposited in fluidized beds. In the second part, a two component model of pyrocarbon is proposed and illustrated by examples.

  17. Raw material selection for object construction

    CSIR Research Space (South Africa)

    Perlow, J

    2017-11-01

    Full Text Available on their visual appearance. In particular, we present a method for an agent to recognise the required unseen raw material images and link them to corresponding novel object images. This capability provides an agent with an increased degree of resourcefulness... construction from component parts, and in doing so we provide a benchmark for future work to compare against within Minecraft and ShapeNet domains. II. BACKGROUND Our model is inspired by Siamese neural networks, a class of neural network that includes multiple...

  18. An Object-Relational Ifc Storage Model Based on Oracle Database

    Science.gov (United States)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, the level of collaboration across professions attracts more attention in the architecture, engineering and construction (AEC) industry. In order to adapt to this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. Firstly, we establish the mapping rules between data types in the IFC specification and the Oracle database. Secondly, we design the IFC database according to the relationships among IFC entities. Thirdly, we parse the IFC file and extract the IFC data. Lastly, we store the IFC data in the corresponding tables of the IFC database. In experiments, three different building models are selected to demonstrate the effectiveness of our storage model. The comparison of experimental statistics shows that IFC data are lossless during data exchange.
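
    The mapping idea can be illustrated with a small, hedged sketch: the paper targets Oracle's object-relational features, but here the standard-library sqlite3 module stands in for the database and a toy regular expression stands in for a full STEP parser; the entity names, attributes and single flat table are simplifications, not the paper's schema.

```python
# Illustrative only: parse toy IFC/STEP instance lines and store one row per
# entity instance in a relational table (sqlite3 used in place of Oracle).
import re
import sqlite3

step_lines = [
    "#1=IFCWALL('2O2Fr$t4X7Zf8NOew3FNr2',$,$,'Wall-001');",
    "#2=IFCDOOR('0jf0kYcfX1GgJQ3rAhkOQ9',$,$,'Door-001');",
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ifc_entity (id INTEGER, type TEXT, attrs TEXT)")

pattern = re.compile(r"#(\d+)=(\w+)\((.*)\);")
for line in step_lines:
    m = pattern.match(line)
    if m:
        eid, etype, attrs = int(m.group(1)), m.group(2), m.group(3)
        con.execute("INSERT INTO ifc_entity VALUES (?, ?, ?)", (eid, etype, attrs))

for row in con.execute("SELECT * FROM ifc_entity"):
    print(row)
```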

  19. A review of typical thermal fatigue failure models for solder joints of electronic components

    Science.gov (United States)

    Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong

    2017-09-01

    For electronic components, cyclic plastic strain accumulates fatigue damage more readily than elastic strain. When solder joints undergo thermal expansion or cold contraction, the mismatch between the coefficients of thermal expansion of the electronic component and its substrate produces different thermal strains in the two, leading to stress concentration. Under repeated cycling, cracks initiate and gradually propagate [1]. In this paper, the typical thermal fatigue failure models for solder joints of electronic components are classified, and the methods of obtaining the parameters in each model are summarized based on domestic and foreign literature research.
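
    One family of models typically covered by such reviews is the Coffin-Manson strain-based relation; the sketch below is a generic illustration with placeholder coefficients, not parameters taken from this paper.

```python
# Illustrative Coffin-Manson low-cycle fatigue relation for solder joints:
#   delta_gamma_p / 2 = eps_f * (2 * Nf) ** c
# solved for the mean number of cycles to failure Nf. The fatigue ductility
# coefficient eps_f and exponent c below are placeholders, not measured data.

def coffin_manson_cycles(delta_gamma_p, eps_f=0.325, c=-0.5):
    """Mean cycles to failure for a given cyclic plastic shear strain range."""
    return 0.5 * (delta_gamma_p / (2.0 * eps_f)) ** (1.0 / c)

for strain_range in (0.005, 0.01, 0.02):
    print(f"plastic strain range {strain_range:.3f} -> "
          f"Nf ~ {coffin_manson_cycles(strain_range):.0f} cycles")
```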

  20. A knowledge discovery object model API for Java

    Directory of Open Access Journals (Sweden)

    Jones Steven JM

    2003-10-01

    Full Text Available Abstract Background Biological data resources have become heterogeneous and derive from multiple sources. This introduces challenges in the management and utilization of this data in software development. Although efforts are underway to create a standard format for the transmission and storage of biological data, this objective has yet to be fully realized. Results This work describes an application programming interface (API that provides a framework for developing an effective biological knowledge ontology for Java-based software projects. The API provides a robust framework for the data acquisition and management needs of an ontology implementation. In addition, the API contains classes to assist in creating GUIs to represent this data visually. Conclusions The Knowledge Discovery Object Model (KDOM API is particularly useful for medium to large applications, or for a number of smaller software projects with common characteristics or objectives. KDOM can be coupled effectively with other biologically relevant APIs and classes. Source code, libraries, documentation and examples are available at http://www.bcgsc.ca/bioinfo/software.

  1. An Overview of the Object Protocol Model (OPM) and the OPM Data Management Tools.

    Science.gov (United States)

    Chen, I-Min A.; Markowitz, Victor M.

    1995-01-01

    Discussion of database management tools for scientific information focuses on the Object Protocol Model (OPM) and data management tools based on OPM. Topics include the need for new constructs for modeling scientific experiments, modeling object structures and experiments in OPM, queries and updates, and developing scientific database applications…

  2. The OCoN approach to workflow modeling in object-oriented systems

    NARCIS (Netherlands)

    Wirtz, G.; Weske, M.H.; Giese, H.

    2001-01-01

    Workflow management aims at modeling and executing application processes in complex technical and organizational environments. Modern information systems are often based on object-oriented design techniques, for instance, the Unified Modeling Language (UML). These systems consist of application

  3. Stability equation and two-component Eigenmode for domain walls in scalar potential model

    International Nuclear Information System (INIS)

    Dias, G.S.; Graca, E.L.; Rodrigues, R. de Lima

    2002-08-01

    Supersymmetric quantum mechanics involving a two-component representation and two-component eigenfunctions is applied to obtain the stability equation associated with a potential model formulated in terms of two coupled real scalar fields. We investigate the question of stability by introducing an operator technique for the Bogomol'nyi-Prasad-Sommerfield (BPS) and non-BPS states on two domain walls in a scalar potential model with minimal N = 1 supersymmetry. (author)

  4. A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines

    Science.gov (United States)

    Wang, Bin; Zhao, Haocen; Ye, Zhifeng

    2017-08-01

    Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretic model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper motor controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into the synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model for the FMU has high accuracy and is clearly superior to a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.

  5. An object-oriented approach to evaluating multiple spectral models

    International Nuclear Information System (INIS)

    Majoras, R.E.; Richardson, W.M.; Seymour, R.S.

    1995-01-01

    A versatile, spectroscopy analysis engine has been developed by using object-oriented design and analysis techniques coupled with an object-oriented language, C++. This engine provides the spectroscopist with the choice of several different peak shape models that are tailored to the type of spectroscopy being performed. It also allows ease of development in adapting the engine to other analytical methods requiring more complex peak fitting in the future. This results in a program that can currently be used across a wide range of spectroscopy applications and anticipates inclusion of future advances in the field. (author) 6 refs.; 1 fig

  6. Optics Elements for Modeling Electrostatic Lenses and Accelerator Components: III. Electrostatic Deflectors

    International Nuclear Information System (INIS)

    Brown, T.A.; Gillespie, G.H.

    1999-01-01

    Ion-beam optics models for simulating electrostatic prisms (deflectors) of different geometries have been developed for the computer code TRACE 3-D. TRACE 3-D is an envelope (matrix) code, which includes a linear space charge model, that was originally developed to model bunched beams in magnetic transport systems and radiofrequency (RF) accelerators. Several new optical models for a number of electrostatic lenses and accelerator columns have been developed recently that allow the code to be used for modeling beamlines and accelerators with electrostatic components. The new models include a number of options for: (1) Einzel lenses, (2) accelerator columns, (3) electrostatic prisms, and (4) electrostatic quadrupoles. A prescription for setting up the initial beam appropriate to modeling 2-D (continuous) beams has also been developed. The models for electrostatic prisms are described in this paper. The electrostatic prism model options allow the modeling of cylindrical, spherical, and toroidal electrostatic deflectors. The application of these models in the development of ion-beam transport systems is illustrated through the modeling of a spherical electrostatic analyzer as a component of the new low energy beamline at CAMS

  7. Measurement and modeling of shortwave irradiance components in cloud-free atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Halthore, R.N.

    1999-08-04

    The atmosphere scatters and absorbs incident solar radiation, modifying its spectral content and decreasing its intensity at the surface. It is useful to classify the earth-atmosphere solar radiation into several components: direct solar surface irradiance (E_direct), diffuse-sky downward surface irradiance (E_diffuse), total surface irradiance, and upwelling flux at the surface and at the top of the atmosphere. E_direct depends only on the extinction properties of the atmosphere without regard to the details of extinction, namely scattering or absorption; furthermore, it can be measured to high accuracy (0.3%) with the aid of an active cavity radiometer (ACR). E_diffuse has relatively larger uncertainties both in its measurement using shaded pyranometers and in model estimates, owing to the difficulty in accurately characterizing pyranometers and in measuring model inputs such as surface reflectance, aerosol single scattering albedo, and phase function. Radiative transfer model simulations of the above surface radiation components in cloud-free skies using measured atmospheric properties show that while E_direct estimates are close to measurements, E_diffuse is overestimated by an amount larger than the combined uncertainties in model inputs and measurements, illustrating a fundamental gap in the understanding of the magnitude of atmospheric absorption in cloud-free skies. The excess continuum-type absorption required to reduce the E_diffuse model overestimate (approximately 3-8% absorptance) would significantly impact climate prediction and remote sensing. It is not clear at present what the source of this continuum absorption is. Here issues related to measurements and modeling of the surface irradiance components are discussed.
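
    A minimal sketch of the point about E_direct, assuming plane-parallel Beer-Lambert attenuation with a 1/cos(SZA) airmass; this is an illustration, not the radiative transfer model used in the study.

```python
# The direct component depends only on the total optical depth, regardless of
# whether the extinction is due to scattering or absorption.
import math

def direct_irradiance(e0, tau_total, sza_deg):
    """Beer-Lambert direct-beam irradiance (W m-2) normal to the beam.

    e0        : top-of-atmosphere irradiance (W m-2)
    tau_total : total (scattering + absorption) optical depth
    sza_deg   : solar zenith angle in degrees (plane-parallel 1/cos airmass)
    """
    airmass = 1.0 / math.cos(math.radians(sza_deg))
    return e0 * math.exp(-tau_total * airmass)

# Same direct irradiance for a purely scattering and a partly absorbing
# atmosphere, provided the total optical depth is the same.
print(direct_irradiance(1000.0, tau_total=0.3, sza_deg=30.0))
```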

  8. SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z; Folkert, M; Iyengar, P; Zhang, Y; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) in early stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use the overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT; and selecting features and training model parameters based on the multiple objectives. A Support Vector Machine (SVM) is used as the predictive model, while a nondominated sorting-based multi-objective evolutionary computation algorithm II (NSGA-II) is used for solving the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. The experimental results show that the best performance can be obtained by combining
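
    The sketch below illustrates, under assumptions, why sensitivity and specificity are treated as two simultaneous objectives: each candidate feature subset/classifier is scored on both, and only non-dominated candidates are retained. The candidate predictions are hypothetical and the full NSGA-II/SVM machinery is omitted.

```python
# Hedged illustration of two-objective (sensitivity, specificity) selection.
import numpy as np

def sens_spec(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

def non_dominated(solutions):
    """Keep candidates not dominated in both sensitivity and specificity."""
    keep = []
    for name, se, sp in solutions:
        if not any(se2 >= se and sp2 >= sp and (se2 > se or sp2 > sp)
                   for _, se2, sp2 in solutions):
            keep.append((name, se, sp))
    return keep

# Hypothetical predictions of three candidate feature subsets on 10 cases.
y = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
cands = [("subset A", *sens_spec(y, np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0]))),
         ("subset B", *sens_spec(y, np.array([1, 1, 0, 0, 0, 1, 0, 0, 0, 0]))),
         ("subset C", *sens_spec(y, np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])))]
print(non_dominated(cands))
```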

  9. User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework

    Science.gov (United States)

    Markstrom, Steven L.; Koczot, Kathryn M.

    2008-01-01

    The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.

  10. Improving a Deep Learning based RGB-D Object Recognition Model by Ensemble Learning

    DEFF Research Database (Denmark)

    Aakerberg, Andreas; Nasrollahi, Kamal; Heder, Thomas

    2018-01-01

    Augmenting RGB images with depth information is a well-known method to significantly improve the recognition accuracy of object recognition models. Another method to improve the performance of visual recognition models is ensemble learning. However, this method has not been widely explored in combination with deep convolutional neural network based RGB-D object recognition models. Hence, in this paper, we form different ensembles of complementary deep convolutional neural network models, and show that this can be used to increase the recognition performance beyond existing limits. Experiments
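
    A tiny sketch of the ensemble idea (not the paper's networks): average the class-probability outputs of several member models and take the arg-max.

```python
# Simple probability-averaging ensemble over hypothetical member outputs.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical softmax outputs of three member networks: 4 samples, 5 classes.
members = [rng.dirichlet(np.ones(5), size=4) for _ in range(3)]

ensemble_probs = np.mean(members, axis=0)   # average class probabilities
print(ensemble_probs.argmax(axis=1))        # ensemble class predictions
```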

  11. A multi-objective approach to improve SWAT model calibration in alpine catchments

    Science.gov (United States)

    Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele

    2018-04-01

    Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.
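
    A minimal sketch of how one parameter set could be scored against both calibration objectives, assuming Nash-Sutcliffe efficiency is used for discharge and for snow water equivalent; the series and the choice of metric are illustrative, not taken from the study.

```python
# Score one SWAT parameter set on two objectives: discharge and SWE.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency; 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def evaluate_parameter_set(q_obs, q_sim, swe_obs, swe_sim):
    """Return the two objective values used for Pareto-based selection."""
    return nse(q_obs, q_sim), nse(swe_obs, swe_sim)

# Toy example with synthetic series.
q_obs, q_sim = [5, 7, 9, 6, 4], [5.5, 6.8, 8.5, 6.2, 4.1]
swe_obs, swe_sim = [120, 150, 90, 30, 0], [110, 160, 95, 40, 5]
print(evaluate_parameter_set(q_obs, q_sim, swe_obs, swe_sim))
```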

  12. SASSYS-1 balance-of-plant component models for an integrated plant response

    International Nuclear Information System (INIS)

    Ku, J.-Y.

    1989-01-01

    Models of power plant heat transfer components and rotating machinery have been added to the balance-of-plant model in the SASSYS-1 liquid metal reactor systems analysis code. This work is part of a continuing effort in plant network simulation based on the general mathematical models developed. The models described in this paper extend the scope of the balance-of-plant model to handle non-adiabatic conditions along flow paths. While the mass and momentum equations remain the same, the energy equation now contains a heat source term due to energy transfer across the flow boundary or to work done through a shaft. The heat source term is treated fully explicitly. In addition, the equation of state is rewritten in terms of the quality and separate parameters for each phase. The models are simple enough to run quickly, yet include sufficient detail of dominant plant component characteristics to provide accurate results. 5 refs., 16 figs., 2 tabs

  13. Electricity supply industry modelling for multiple objectives under demand growth uncertainty

    International Nuclear Information System (INIS)

    Heinrich, G.; Basson, L.; Howells, M.; Petrie, J.

    2007-01-01

    Appropriate energy-environment-economic (E3) modelling provides key information for policy makers in the electricity supply industry (ESI) faced with navigating a sustainable development path. Key challenges include engaging with stakeholder values and preferences, and exploring trade-offs between competing objectives in the face of underlying uncertainty. As a case study we represent the South African ESI using a partial equilibrium E3 modelling approach, and extend the approach to include multiple objectives under selected future uncertainties. This extension is achieved by assigning cost penalties to non-cost attributes to force the model's least-cost objective function to better satisfy non-cost criteria. This paper incorporates aspects of flexibility to demand growth uncertainty into each future expansion alternative by introducing stochastic programming with recourse into the model. Technology lead times are taken into account by the inclusion of a decision node along the time horizon where aspects of real options theory are considered within the planning process. Hedging in the recourse programming is automatically translated from being purely financial to including the other attributes that the cost penalties represent. From a retrospective analysis of the cost penalties, the correct market signals can be derived to meet policy goals, with due regard to demand uncertainty. (author)

  14. The significance of the choice of radiobiological (NTCP) models in treatment plan objective functions

    International Nuclear Information System (INIS)

    Miller, J.; Fuller, M.; Vinod, S.; Holloway, L.

    2009-01-01

    Full text: A clinician's discrimination between radiation therapy treatment plans is traditionally a subjective process, based on experience and existing protocols. A more objective and quantitative approach to distinguishing between treatment plans is to use radiobiological or dosimetric objective functions, based on radiobiological or dosimetric models. The efficacy of these models is not well understood, nor is the correlation between the ranking of plans resulting from the use of models and the traditional subjective ranking. One such radiobiological model is the Normal Tissue Complication Probability (NTCP). Dosimetric models or indicators are more accepted in clinical practice. In this study, three radiobiological models, the Lyman NTCP, critical volume NTCP and relative seriality NTCP, and three dosimetric models, the mean lung dose (MLD) and the lung volumes irradiated at 10 Gy (V10) and 20 Gy (V20), were used to rank a series of treatment plans using harm to normal (lung) tissue as the objective criterion. None of the models considered in this study showed consistent correlation with the Radiation Oncologists' plan ranking. If radiobiological or dosimetric models are to be used in objective functions for lung treatments, based on this study it is recommended that the Lyman NTCP model be used, because it will provide the most consistency with traditional clinician ranking.
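
    For reference, a hedged sketch of the Lyman (Lyman-Kutcher-Burman) NTCP model named above; the toy DVH and the TD50, m and n values below are placeholders, not clinically endorsed parameters.

```python
# Lyman NTCP with a Kutcher-Burman reduction of the DVH to an effective dose.
import math

def lyman_ntcp(doses_gy, volume_fractions, td50=24.5, m=0.18, n=0.87):
    """NTCP from a differential DVH (dose bins and fractional volumes)."""
    # Generalized EUD / effective-volume reduction of the DVH.
    d_eff = sum(v * d ** (1.0 / n)
                for d, v in zip(doses_gy, volume_fractions)) ** n
    t = (d_eff - td50) / (m * td50)
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Toy lung DVH: 30% of volume at 5 Gy, 40% at 15 Gy, 30% at 25 Gy.
print(f"NTCP = {lyman_ntcp([5, 15, 25], [0.3, 0.4, 0.3]):.3f}")
```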

  15. Two component WIMP-FImP dark matter model with singlet fermion, scalar and pseudo scalar

    Energy Technology Data Exchange (ETDEWEB)

    Dutta Banik, Amit; Pandey, Madhurima; Majumdar, Debasish [Saha Institute of Nuclear Physics, HBNI, Astroparticle Physics and Cosmology Division, Kolkata (India); Biswas, Anirban [Harish Chandra Research Institute, Allahabad (India)

    2017-10-15

    We explore a two component dark matter model with a fermion and a scalar. In this scenario the Standard Model (SM) is extended by a fermion, a scalar and an additional pseudo scalar. The fermionic component is assumed to have a global U(1)_DM and interacts with the pseudo scalar via Yukawa interaction while a Z_2 symmetry is imposed on the other component - the scalar. These ensure the stability of both dark matter components. Although the Lagrangian of the present model is CP conserving, the CP symmetry breaks spontaneously when the pseudo scalar acquires a vacuum expectation value (VEV). The scalar component of the dark matter in the present model also develops a VEV on spontaneous breaking of the Z_2 symmetry. Thus the various interactions of the dark sector and the SM sector occur through the mixing of the SM like Higgs boson, the pseudo scalar Higgs like boson and the singlet scalar boson. We show that the observed gamma ray excess from the Galactic Centre as well as the 3.55 keV X-ray line from Perseus, Andromeda etc. can be simultaneously explained in the present two component dark matter model and the dark matter self interaction is found to be an order of magnitude smaller than the upper limit estimated from the observational results. (orig.)

  16. Modreg: A Modular Framework for RGB-D Image Acquisition and 3D Object Model Registration

    Directory of Open Access Journals (Sweden)

    Kornuta Tomasz

    2017-09-01

    Full Text Available RGB-D sensors have become a standard in robotic applications requiring object recognition, such as object grasping and manipulation. A typical object recognition system relies on matching features extracted from RGB-D images retrieved from the robot sensors with the features of the object models. In this paper we present ModReg: a system for registration of 3D models of objects. The system consists of modular software associated with a multi-camera setup supplemented with an additional pattern projector, used for the registration of high-resolution RGB-D images. The objects are placed on a fiducial board with two dot patterns enabling extraction of masks of the placed objects and estimation of their initial poses. The acquired dense point clouds constituting subsequent object views undergo pairwise registration and at the end are optimized with a graph-based technique derived from SLAM. The combination of all those elements resulted in a system able to generate consistent 3D models of objects.

  17. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    Science.gov (United States)

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
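
    The original analysis was carried out with linear mixed models in R; the sketch below is a rough Python analogue on synthetic data using statsmodels, estimating fixed effects for the three cueing contrasts plus a by-subject random intercept and random spatial slope. The column names, effect sizes and data are invented for illustration.

```python
# Hedged sketch of an LMM with by-subject random effects on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 20, 60
rows = []
for s in range(n_subj):
    subj_int = rng.normal(400, 30)        # subject-specific mean RT (ms)
    subj_spatial = rng.normal(30, 10)     # subject-specific spatial effect
    for _ in range(n_trials):
        spatial, obj, attraction = rng.integers(0, 2, size=3)
        rt = (subj_int + subj_spatial * spatial + 15 * obj + 10 * attraction
              + rng.normal(0, 40))
        rows.append(dict(subject=s, rt=rt, spatial=spatial,
                         obj=obj, attraction=attraction))
data = pd.DataFrame(rows)

# Fixed effects for the three contrasts; random intercept and random spatial
# slope per subject (re_formula could be extended to all three effects).
model = smf.mixedlm("rt ~ spatial + obj + attraction", data,
                    groups=data["subject"], re_formula="~ spatial")
print(model.fit().summary())
```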

  18. Development of Large Concrete Object Geometrical Model Based on Terrestrial Laser Scanning

    Directory of Open Access Journals (Sweden)

    Zaczek-Peplinska Janina

    2015-02-01

    Full Text Available The paper presents periodic control measurements of movements and a survey of a concrete dam on the Dunajec River in Rożnów, Poland. The topographical survey was conducted using the terrestrial laser scanning technique. The goal of the survey was data collection and the creation of a geometrical model. The acquired cross-sections and horizontal sections were utilised to create a numerical model of the object's behaviour under various loads depending on the changing water level in the reservoir. Modelling was accomplished using the finite element technique. During the project, an assessment was conducted of terrestrial laser scanning techniques for this type of research on large hydrotechnical objects such as gravity water dams. The developed model can be used to determine deformations and displacement prognoses.

  19. Modeling cellular networks in fading environments with dominant specular components

    KAUST Repository

    Alammouri, Ahmad; Elsawy, Hesham; Salem, Ahmed Sultan; Di Renzo, Marco; Alouini, Mohamed-Slim

    2016-01-01

    to the Nakagami-m fading in some special cases. However, neither the Rayleigh nor the Nakagami-m accounts for dominant specular components (DSCs) which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks

  20. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Lei Qin

    2014-05-01

    Full Text Available We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparing experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
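
    A minimal sketch (assumptions, not the paper's code) of a region covariance descriptor: the covariance matrix of per-pixel feature vectors inside the tracked region. The fixed feature pool used here (position, intensity, gradient magnitudes) stands in for the adaptive feature selection described above.

```python
# Region covariance descriptor of a grayscale image patch.
import numpy as np

def region_covariance(gray_patch):
    h, w = gray_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]                       # pixel coordinates
    gy, gx = np.gradient(gray_patch.astype(float))    # intensity gradients
    feats = np.stack([xs.ravel(), ys.ravel(),
                      gray_patch.ravel().astype(float),
                      np.abs(gx).ravel(), np.abs(gy).ravel()], axis=0)
    return np.cov(feats)                              # 5 x 5 descriptor

patch = np.random.default_rng(1).integers(0, 256, size=(32, 32))
print(region_covariance(patch).shape)                 # (5, 5)
```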

  1. Modelling and Order of Acoustic Transfer Functions Due to Reflections from Augmented Objects

    Directory of Open Access Journals (Sweden)

    Diemer de Vries

    2007-01-01

    Full Text Available It is commonly accepted that the sound reflections from real physical objects are much more complicated than what usually is and can be modelled by room acoustics modelling software. The main reason for this limitation is the level of detail inherent in the physical object in terms of its geometrical and acoustic properties. In the present paper, the complexity of the sound reflections from a corridor wall is investigated by modelling the corresponding acoustic transfer functions at several receiver positions in front of the wall. The complexity for different wall configurations has been examined and the changes have been achieved by altering its acoustic image. The results show that for a homogenous flat wall, the complexity is significant and for a wall including various smaller objects, the complexity is highly dependent on the position of the receiver with respect to the objects.

  2. Country Selection Model for Sustainable Construction Businesses Using Hybrid of Objective and Subjective Information

    Directory of Open Access Journals (Sweden)

    Kang-Wook Lee

    2017-05-01

    Full Text Available An important issue for international businesses and academia is selecting countries in which to expand in order to achieve entrepreneurial sustainability. This study develops a country selection model for sustainable construction businesses using both objective and subjective information. The objective information consists of 14 variables related to country risk and project performance in 32 countries over 25 years. This hybrid model applies subjective weighting from industrial experts to objective information using a fuzzy LinPreRa-based Analytic Hierarchy Process. The hybrid model yields a more accurate country selection compared to a purely objective information-based model in experienced countries. Interestingly, the hybrid model provides some different predictions with only subjective opinions in unexperienced countries, which implies that expert opinion is not always reliable. In addition, feedback from five experts in top international companies is used to validate the model’s completeness, effectiveness, generality, and applicability. The model is expected to aid decision makers in selecting better candidate countries that lead to sustainable business success.

  3. OSCAR2000 : a multi-component 3-dimensional oil spill contingency and response model

    International Nuclear Information System (INIS)

    Reed, M.; Daling, P.S.; Brakstad, O.G.; Singsaas, I.; Faksness, L.-G.; Hetland, B.; Ekrol, N.

    2000-01-01

    Researchers at SINTEF in Norway have studied the weathering of surface oil. They developed a realistic model to analyze alternative spill response strategies. The model represented the formation and composition of the water-accommodated fraction (WAF) of oil for both treated and untreated oil spills. As many as 25 components, pseudo-components, or metabolites were allowed for the specification of oil. Calculations effected using OSCAR were verified in great detail on numerous occasions. The model made it possible to determine rather realistically the dissolution, transformation, and toxicology of dispersed oil clouds, as well as evaporation, emulsification, and natural dispersion. OSCAR comprised a data-based oil weathering model, a three-dimensional oil trajectory and chemical fates model, an oil spill combat model, exposure models for birds, marine mammals, fish and ichthyoplankton. 17 refs., 1 tab., 11 figs

  4. The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models

    Science.gov (United States)

    Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon

    2018-05-01

    The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations and with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous in order to achieve the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics which further allow for a comparison of variables which are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
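
    A sketch of the SPAEF computation as summarized above: Pearson correlation, the ratio of coefficients of variation, and the histogram overlap of z-scored fields, combined as a Euclidean distance from the ideal point. Consult the original paper for the authoritative definition; the bin count and test fields here are arbitrary.

```python
# SPAEF = 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2)
import numpy as np

def spaef(obs, sim, bins=100):
    obs, sim = obs.ravel().astype(float), sim.ravel().astype(float)
    alpha = np.corrcoef(obs, sim)[0, 1]                         # correlation
    beta = (sim.std() / sim.mean()) / (obs.std() / obs.mean())  # CV ratio
    z_obs = (obs - obs.mean()) / obs.std()                      # unit-free
    z_sim = (sim - sim.mean()) / sim.std()
    lo, hi = min(z_obs.min(), z_sim.min()), max(z_obs.max(), z_sim.max())
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    gamma = np.minimum(h_obs, h_sim).sum() / h_obs.sum()        # overlap
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, size=(50, 50))
print(spaef(obs, obs * 1.1 + rng.normal(0, 0.3, obs.shape)))
```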

  5. Real-time Pipeline for Object Modeling and Grasping Pose Selection via Superquadric Functions

    Directory of Open Access Journals (Sweden)

    Giulia Vezzani

    2017-11-01

    Full Text Available This work provides a novel real-time pipeline for modeling and grasping of unknown objects with a humanoid robot. Such a problem is of great interest for the robotic community, since conventional approaches fail when the shape, dimension, or pose of the objects is missing. Our approach reconstructs in real time a model of the object under consideration and represents the robot hand with proper and mathematically usable models, i.e., superquadric functions. The volume graspable by the hand is represented by an ellipsoid and is defined a priori, because the shape of the hand is known in advance. The superquadric representing the object is instead obtained in real time from partial vision information, e.g., one stereo view of the object under consideration, and provides an approximated 3D full model. The optimization problem we formulate for the grasping pose computation is solved online by using the Ipopt software package and, thus, does not require off-line computation or learning. Even though our approach is for a generic humanoid robot, we developed a complete software architecture for executing this approach on the iCub humanoid robot. Together with that, we also provide a tutorial on how to use this framework. We believe that our work, together with the available code, is of strong utility for the iCub community for three main reasons: object modeling and grasping are relevant problems for the robotic community, our code can be easily applied on every iCub, and the modular structure of our framework easily allows extensions and communications with external code.
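
    For illustration, a hedged sketch of the superquadric inside-outside function that such a pipeline relies on; the semi-axes and exponents below are invented, and the actual fitting and grasp-pose optimization (Ipopt) are omitted.

```python
# Superquadric inside-outside function: F < 1 inside, F = 1 on the surface,
# F > 1 outside the superquadric with semi-axes (a1, a2, a3) and exponents
# (e1, e2).
import numpy as np

def superquadric_F(points, a1, a2, a3, e1, e2):
    x, y, z = np.abs(points).T
    return ((x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)) ** (e2 / e1) \
           + (z / a3) ** (2.0 / e1)

# Example: test whether sample points lie inside a box-like superquadric
# (semi-axes 0.05 x 0.03 x 0.10 m, small exponents -> nearly a box).
pts = np.array([[0.0, 0.0, 0.0], [0.04, 0.02, 0.05], [0.10, 0.0, 0.0]])
print(superquadric_F(pts, 0.05, 0.03, 0.10, 0.3, 0.3) < 1.0)
```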

  6. Object location and object recognition memory impairments, motivation deficits and depression in a model of Gulf War illness.

    Science.gov (United States)

    Hattiangady, Bharathi; Mishra, Vikas; Kodali, Maheedhar; Shuai, Bing; Rao, Xiolan; Shetty, Ashok K

    2014-01-01

    Memory and mood deficits are the enduring brain-related symptoms in Gulf War illness (GWI). Both animal model and epidemiological investigations have indicated that these impairments in a majority of GW veterans are linked to exposures to chemicals such as pyridostigmine bromide (PB, an antinerve gas drug), permethrin (PM, an insecticide) and DEET (a mosquito repellant) encountered during the Persian Gulf War-1. Our previous study in a rat model has shown that combined exposures to low doses of GWI-related (GWIR) chemicals PB, PM, and DEET with or without 5-min of restraint stress (a mild stress paradigm) causes hippocampus-dependent spatial memory dysfunction in a water maze test (WMT) and increased depressive-like behavior in a forced swim test (FST). In this study, using a larger cohort of rats exposed to GWIR-chemicals and stress, we investigated whether the memory deficiency identified earlier in a WMT is reproducible with an alternative and stress free hippocampus-dependent memory test such as the object location test (OLT). We also ascertained the possible co-existence of hippocampus-independent memory dysfunction using a novel object recognition test (NORT), and alterations in mood function with additional tests for motivation and depression. Our results provide new evidence that exposure to low doses of GWIR-chemicals and mild stress for 4 weeks causes deficits in hippocampus-dependent object location memory and perirhinal cortex-dependent novel object recognition memory. An open field test performed prior to other behavioral analyses revealed that memory impairments were not associated with increased anxiety or deficits in general motor ability. However, behavioral tests for mood function such as a voluntary physical exercise paradigm and a novelty suppressed feeding test (NSFT) demonstrated decreased motivation levels and depression. Thus, exposure to GWIR-chemicals and stress causes both hippocampus-dependent and hippocampus-independent memory

  7. MODELING THERMAL DUST EMISSION WITH TWO COMPONENTS: APPLICATION TO THE PLANCK HIGH FREQUENCY INSTRUMENT MAPS

    International Nuclear Information System (INIS)

    Meisner, Aaron M.; Finkbeiner, Douglas P.

    2015-01-01

    We apply the Finkbeiner et al. two-component thermal dust emission model to the Planck High Frequency Instrument maps. This parameterization of the far-infrared dust spectrum as the sum of two modified blackbodies (MBBs) serves as an important alternative to the commonly adopted single-MBB dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. based on FIRAS and DIRBE. We also derive full-sky 6.'1 resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 μm data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.'1 FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to 3000 GHz. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration et al. single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz, and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales
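
    A minimal sketch of the two-component parameterization: the emission at each frequency is the sum of two modified blackbodies, each with its own optical depth, emissivity index and temperature. The parameter values below are placeholders, not the fitted Planck/DIRBE values.

```python
# Sum of two modified blackbodies, each tau * (nu/nu0)^beta * B_nu(T).
import numpy as np

H, K, C = 6.626e-34, 1.381e-23, 2.998e8   # SI constants (J s, J/K, m/s)

def planck_nu(nu_hz, temp_k):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu_hz ** 3 / C ** 2 / np.expm1(H * nu_hz / (K * temp_k))

def two_mbb(nu_hz, tau1, beta1, t1, tau2, beta2, t2, nu0_hz=353e9):
    return (tau1 * (nu_hz / nu0_hz) ** beta1 * planck_nu(nu_hz, t1)
            + tau2 * (nu_hz / nu0_hz) ** beta2 * planck_nu(nu_hz, t2))

nu = np.array([100e9, 217e9, 545e9, 857e9, 3000e9])   # Hz
# Hypothetical cold (t1) and warm (t2) components.
print(two_mbb(nu, tau1=1e-5, beta1=1.6, t1=9.0, tau2=5e-6, beta2=2.7, t2=16.0))
```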

  8. COMPONENT SUPPLY MODEL FOR REPAIR ACTIVITIES NETWORK UNDER CONDITIONS OF PROBABILISTIC INDEFINITENESS.

    Directory of Open Access Journals (Sweden)

    Victor Yurievich Stroganov

    2017-02-01

    Full Text Available This article systematizes the major production functions of a repair activities network and lists the planning and control functions, which are described in the form of business processes (BP). A simulation model for analyzing the effectiveness of component delivery under conditions of probabilistic uncertainty is proposed. It has been shown that a significant portion of the total number of business processes is represented by the management and planning of the movement of parts and components. The construction of experimental design techniques for the simulation model under non-stationary conditions is also considered.

  9. High performance distributed objects in large hadron collider experiments

    International Nuclear Information System (INIS)

    Gutleber, J.

    1999-11-01

    This dissertation demonstrates how object-oriented technology can support the development of software that has to meet the requirements of high performance distributed data acquisition systems. The environment for this work is a system under planning for the Compact Muon Solenoid experiment at CERN that shall start its operation in the year 2005. The long operational phase of the experiment together with a tight and puzzling interaction with custom devices makes the quest for an evolvable architecture that exhibits a high level of abstraction the driving issue. The question arises whether an existing approach already fits our needs. The presented work casts light on these problems and as a result comprises the following novel contributions: - Application of object technology at hardware/software boundary. Software components at this level must be characterised by high efficiency and extensibility at the same time. - Identification of limitations when deploying commercial-off-the-shelf middleware for distributed object-oriented computing. - Capturing of software component properties in an efficiency model for ease of comparison and improvement. - Proof of feasibility that the encountered deficiencies in middleware can be avoided and that with the use of software components the imposed requirements can be met. - Design and implementation of an on-line software control system that allows to take into account the ever evolving requirements by avoiding hardwired policies. We conclude that state-of-the-art middleware cannot meet the required efficiency of the planned data acquisition system. Although new tool generations already provide a certain degree of configurability, the obligation to follow standards specifications does not allow the necessary optimisations. We identified the major limiting factors and argue that a custom solution following a component model with narrow interfaces can satisfy our requirements. This approach has been adopted for the current design

  10. Creation of system of computer-aided design for technological objects

    Science.gov (United States)

    Zubkova, T. M.; Tokareva, M. A.; Sultanov, N. Z.

    2018-05-01

    Due to competition in the process equipment market, production must be flexible, retooling for various product configurations, raw materials and levels of productivity depending on current market needs. This process is not possible without CAD (computer-aided design). The formation of a CAD system begins with planning. Synthesizing, analyzing, evaluating and converting operations, as well as visualization and decision-making operations, can be automated. Based on a formal description of the design procedures, the design route is constructed in the form of an oriented graph. The decomposition of the design process, represented by the formalized description of the design procedures, makes it possible to make an informed choice of the CAD component for the solution of each task. The object-oriented approach allows us to consider the CAD system as an independent system whose properties are inherited from its components. The first step determines the range of tasks to be performed by the system and a set of components for their implementation. The second is the configuration of the selected components. The interaction between the selected components is carried out using the CALS standards. The chosen CAD/CAE-oriented approach allows the creation of a single model, which is stored in the database of the subject area. Each of the integration stages is implemented as a separate functional block. The transformation of the CAD model into the internal representation model is realized by a block that searches for the geometric parameters of the technological machine, in which an XML model of the construction is obtained on the basis of the feature method from the theory of image recognition. The configuration of integrated components is divided into three consecutive steps: configuring tasks, components and interfaces. The configuration of the components is realized using the theory of 'soft computing', with the Mamdani fuzzy inference algorithm.
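
    As a generic illustration of the Mamdani step mentioned above (the paper's actual rule base and membership functions are not given), the sketch below maps one normalized input to one output using trapezoidal memberships, min implication, max aggregation and centroid defuzzification.

```python
# Minimal Mamdani fuzzy inference with two invented rules.
import numpy as np

def trap(x, a, b, c, d):
    """Trapezoidal membership function; a == b or c == d gives a shoulder."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0, 1) if b > a else (x >= a).astype(float)
    fall = np.clip((d - x) / (d - c), 0, 1) if d > c else (x <= d).astype(float)
    return np.minimum(rise, fall)

def mamdani(complexity):
    y = np.linspace(0.0, 1.0, 201)                        # output universe
    low_in = float(trap(complexity, 0.0, 0.0, 0.2, 0.6))  # rule antecedents
    high_in = float(trap(complexity, 0.4, 0.8, 1.0, 1.0))
    # Rule 1: IF complexity is LOW  THEN suitability is HIGH
    # Rule 2: IF complexity is HIGH THEN suitability is LOW
    high_out = np.minimum(low_in, trap(y, 0.5, 0.8, 1.0, 1.0))  # min implication
    low_out = np.minimum(high_in, trap(y, 0.0, 0.0, 0.2, 0.5))
    agg = np.maximum(high_out, low_out)                   # max aggregation
    return float(np.sum(y * agg) / np.sum(agg))           # centroid defuzz.

print(mamdani(0.2), mamdani(0.8))
```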

  11. An Evaluative Review of Simulated Dynamic Smart 3d Objects

    Science.gov (United States)

    Romeijn, H.; Sheth, F.; Pettit, C. J.

    2012-07-01

    Three-dimensional (3D) modelling of plants can be an asset for creating agriculture-based visualisation products. The continuum of 3D plant models ranges from static to dynamic objects, also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influences of external environmental factors. Approaches to 3D plant visualisation range from billboarded photographic images of plants to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model physical reactions of plants to external factors and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of plant-based object simulation programs currently available, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs, and the possible opportunities in deploying these for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects, the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that will attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.

  12. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
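
    A minimal Python/PyTorch sketch of the bottom-up-plus-lateral (BL) idea described above: a feedforward convolution provides a constant drive while a lateral convolution is unrolled over time. The channel counts, kernel sizes, unrolling depth and input shape are illustrative assumptions, not the architecture evaluated in the paper.

      import torch
      import torch.nn as nn

      class BLBlock(nn.Module):
          """One recurrent convolutional block with bottom-up (B) and lateral (L) connections."""
          def __init__(self, in_ch, out_ch, steps=4):
              super().__init__()
              self.bottom_up = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
              self.lateral = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
              self.steps = steps

          def forward(self, x):
              b = self.bottom_up(x)               # feedforward drive, constant across time steps
              h = torch.relu(b)                   # state at time step 0
              for _ in range(self.steps - 1):     # unroll the lateral (recurrent) dynamics
                  h = torch.relu(b + self.lateral(h))
              return h

      # toy usage: a batch of 8 single-channel 32x32 "digit clutter" images
      block = BLBlock(in_ch=1, out_ch=16, steps=4)
      print(block(torch.randn(8, 1, 32, 32)).shape)   # torch.Size([8, 16, 32, 32])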

  13. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
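
    To see the contrast between dense and sparse loadings in practice, the sketch below uses scikit-learn's generic SparsePCA on synthetic data; it is not the specific SPCA algorithm or the Matlab implementation discussed in the article, and the data set and penalty weight are invented.

      import numpy as np
      from sklearn.decomposition import PCA, SparsePCA

      # synthetic "shape" data: 200 observations of 30 landmark coordinates (placeholder)
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 30))
      X[:, :5] += 2.0 * rng.normal(size=(200, 1))     # a localised effect on variables 0-4
      X[:, 20:] += 1.0 * rng.normal(size=(200, 1))    # another effect on variables 20-29

      pca = PCA(n_components=3).fit(X)
      spca = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)

      # ordinary PCA loadings are dense; the l1 penalty drives many SPCA loadings to exactly zero
      print("non-zero loadings per PCA mode: ", np.count_nonzero(pca.components_, axis=1))
      print("non-zero loadings per SPCA mode:", np.count_nonzero(spca.components_, axis=1))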

  14. Remote object authentication: confidence model, cryptosystem and protocol

    Science.gov (United States)

    Lancrenon, Jean; Gillard, Roland; Fournel, Thierry

    2009-04-01

    This paper follows a paper by Bringer et al. [3] to adapt a security model and protocol used for remote biometric authentication to the case of remote morphometric object authentication. We use a different type of encryption technique that requires smaller key sizes and has a built-in mechanism to help control the integrity of the messages received by the server. We also describe the optical technology used to extract the morphometric templates.

  15. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    Science.gov (United States)

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al. Journal of Neuroscience, 20(17), 6594-6611 2000; Qiu et al. Nature Neuroscience, 10(11), 1492-1499 2007; Chen et al. Neuron, 82(3), 682-694 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.
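
    The interplay of feedforward feature input, integration by a grouping ("proto-object") neuron and modulatory feedback can be caricatured in a few lines of Python. The gains, normalisation, number of feature neurons and update rule below are arbitrary illustrative choices, not the parameters or equations of the published model.

      import numpy as np

      rng = np.random.default_rng(1)
      feedforward = rng.uniform(0.0, 1.0, size=8)   # local feature evidence (placeholder cues)
      feature = feedforward.copy()                  # feature neuron activity
      grouping = 0.0                                # grouping ("proto-object") neuron activity

      for t in range(50):
          # the grouping neuron integrates its afferent feature neurons
          grouping += 0.1 * (feature.mean() - grouping)
          # modulatory (multiplicative) feedback enhances the same feature neurons,
          # while a divisive term stands in for competitive inhibition
          feature = feedforward * (1.0 + 0.5 * grouping)
          feature /= 1.0 + feature.mean()

      print("grouping activity:", round(grouping, 3))
      print("feature activity: ", np.round(feature, 3))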

  16. Improving Demographic Components of Integrated Assessment Models: The Effect of Changes in Population Composition by Household Characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Brian C. O'Neill

    2006-08-09

    This report describes results of the research project on "Improving Demographic Components of Integrated Assessment Models: The Effect of Changes in Population Composition by Household Characteristics". The overall objective of this project was to improve projections of energy demand and associated greenhouse gas emissions by taking into account demographic factors currently not incorporated in Integrated Assessment Models (IAMs) of global climate change. We proposed to examine the potential magnitude of effects on energy demand of changes in the composition of populations by household characteristics for three countries: the U.S., China, and Indonesia. For each country, we planned to analyze household energy use survey data to estimate relationships between household characteristics and energy use; develop a new set of detailed household projections for each country; and combine these analyses to produce new projections of energy demand illustrating the potential importance of consideration of households.

  17. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    Energy Technology Data Exchange (ETDEWEB)

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high-temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.
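
    A Monte Carlo treatment of scatter in material properties can be sketched as follows in Python; the Weibull-like strength distribution, the stress distribution and the simple stress-exceedance failure criterion are placeholders standing in for the finite element CDM calculation, not the project's actual models or data.

      import numpy as np

      rng = np.random.default_rng(42)
      n_samples = 10_000

      # hypothetical scatter in tensile strength (MPa) and peak component stress (MPa)
      strength = rng.weibull(a=8.0, size=n_samples) * 25.0       # shape/scale chosen for illustration
      peak_stress = rng.normal(loc=14.0, scale=2.0, size=n_samples)

      # stress-based failure criterion: the component fails when peak stress exceeds strength
      failed = peak_stress > strength
      print(f"estimated failure probability: {failed.mean():.4f}")
      print(f"95% CI half-width (normal approx.): {1.96 * failed.std(ddof=1) / np.sqrt(n_samples):.4f}")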

  18. The implementation of common object request broker architecture (CORBA) for controlling robot arm via web

    International Nuclear Information System (INIS)

    Syed Mahamad Zuhdi Amin; Mohd Yazid Idris; Wan Mohd Nasir Wan Kadir

    2001-01-01

    This paper presents the employment of the Common Object Request Broker Architecture (CORBA) technology in the implementation of our distributed Arm Robot Controller (ARC). CORBA is an industry-standard architecture based on a distributed abstract object model, developed by the Object Management Group (OMG). The architecture consists of five components, i.e. the Object Request Broker (ORB), Interface Definition Language (IDL), Dynamic Invocation Interface (DII), Interface Repositories (IR) and Object Adapter (OA). CORBA objects differ from typical programming objects in three ways: they can be executed on any platform, located anywhere on the network and written in any language that supports an IDL mapping. The system is implemented with a 5 degree-of-freedom (DOF) RCS 6.0 arm robot and Java as the programming-language mapping to the CORBA IDL. By implementing this architecture, the objects in the server machine can be distributed over the network in order to run the controller. The ultimate goal of our ARC system is to demonstrate concurrent execution of multiple arm robots through multiple instantiations of distributed object components. (Author)

  19. Modeling money demand components in Lebanon using autoregressive models

    International Nuclear Information System (INIS)

    Mourad, M.

    2008-01-01

    This paper analyses the monetary aggregate in Lebanon and its different components using the AR model methodology. Thirteen variables in monthly data have been studied for the period January 1990 through December 2005. Using the Augmented Dickey-Fuller (ADF) procedure, twelve variables are integrated of order 1 and thus need the filter (1-B) to become stationary, while the variable X_13,t (claims on the private sector) becomes stationary with the filter (1-B)(1-B^12). The ex-post forecasts have been calculated for twelve horizons and for one horizon (one-step-ahead forecast). The quality of the forecasts has been measured using the MAPE criterion, for which the forecasts are good because the MAPE values are low. Finally, a pursuit of this research using the cointegration approach is proposed. (author)
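
    The stationarity workflow described above (ADF test, then the difference filter (1-B) and, where needed, the seasonal filter (1-B^12)) can be reproduced for any monthly series with statsmodels; the synthetic series and the MAPE example below are placeholders, not the Lebanese monetary data.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.stattools import adfuller

      # synthetic monthly series with a trend and an annual seasonal pattern (placeholder data)
      rng = np.random.default_rng(0)
      n = 192  # 16 years of monthly observations
      t = np.arange(n)
      x = pd.Series(0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.5, size=n))

      def adf_pvalue(series):
          # p-value of the Augmented Dickey-Fuller unit-root test
          return adfuller(series.dropna(), autolag="AIC")[1]

      print("p-value, level:          ", round(adf_pvalue(x), 4))
      print("p-value, (1-B)x:         ", round(adf_pvalue(x.diff()), 4))
      print("p-value, (1-B)(1-B^12)x: ", round(adf_pvalue(x.diff().diff(12)), 4))

      def mape(actual, forecast):
          # mean absolute percentage error used to judge ex-post forecast quality
          actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
          return 100.0 * np.mean(np.abs((actual - forecast) / actual))

      print("MAPE of a toy forecast:", round(mape([100, 105, 98], [102, 104, 99]), 2), "%")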

  20. Multi objective decision making in hybrid energy system design

    Science.gov (United States)

    Merino, Gabriel Guillermo

    The design of a grid-connected photovoltaic/wind generator system supplying a farmstead in Nebraska has been undertaken in this dissertation. The design process took into account competing criteria that motivate the use of different sources of energy for electric generation. The criteria considered were 'Financial', 'Environmental', and 'User/System compatibility'. A distance-based multi-objective decision making methodology was developed to rank design alternatives. The method is based upon a precedence order imposed upon the design objectives and a distance metric describing the performance of each alternative. This methodology advances previous work by combining ambiguous information about the alternatives with a decision-maker-imposed precedence order on the objectives. Design alternatives, defined by the installed capacities of the photovoltaic array and wind generator, were analyzed using the multi-objective decision making approach. The performance of the design alternatives was determined by simulating the system using hourly data for a farmstead electric load and hourly averages of solar irradiation, temperature and wind speed from eight wind-solar energy monitoring sites in Nebraska. The spatial variability of the solar energy resource within the region was assessed by determining semivariogram models to krige hourly and daily solar radiation data. No significant difference was found in the predicted performance of the system when using solar radiation data kriged with the generated models versus using actual data. The spatial variability of the combined wind and solar energy resources was included in the design analysis by using fuzzy numbers and arithmetic. The best alternative was dependent upon the precedence order assumed for the main criteria. Alternatives with no PV array or wind generator dominated when the 'Financial' criterion preceded the others. In contrast, alternatives with a nil component of PV array but a high wind generator component

  1. A Statistical Model for Generating a Population of Unclassified Objects and Radiation Signatures Spanning Nuclear Threats

    International Nuclear Information System (INIS)

    Nelson, K.; Sokkappa, P.

    2008-01-01

    This report describes an approach for generating a simulated population of plausible nuclear threat radiation signatures spanning a range of variability that could be encountered by radiation detection systems. In this approach, we develop a statistical model for generating random instances of smuggled nuclear material. The model is based on physics principles and bounding cases rather than on intelligence information or actual threat device designs. For this initial stage of work, we focus on random models using fissile material and do not address scenarios using non-fissile materials. The model has several uses. It may be used as a component in a radiation detection system performance simulation to generate threat samples for injection studies. It may also be used to generate a threat population to be used for training classification algorithms. In addition, we intend to use this model to generate an unclassified 'benchmark' threat population that can be openly shared with other organizations, including vendors, for use in radiation detection systems performance studies and algorithm development and evaluation activities. We assume that a quantity of fissile material is being smuggled into the country for final assembly and that shielding may have been placed around the fissile material. In terms of radiation signature, a nuclear weapon is basically a quantity of fissile material surrounded by various layers of shielding. Thus, our model of smuggled material is expected to span the space of potential nuclear weapon signatures as well. For computational efficiency, we use a generic 1-dimensional spherical model consisting of a fissile material core surrounded by various layers of shielding. The shielding layers and their configuration are defined such that the model can represent the potential range of attenuation and scattering that might occur. The materials in each layer and the associated parameters are selected from probability distributions that span the
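
    The sampling idea (a spherical fissile core wrapped in a random number of shielding layers whose materials and thicknesses are drawn from probability distributions) might look roughly like the Python sketch below. The material list, densities, thickness ranges and core-radius range are invented for illustration and are not values from the report.

      import numpy as np

      rng = np.random.default_rng(7)

      # hypothetical shielding materials with densities in g/cc -- illustrative values only
      MATERIALS = {"lead": 11.3, "steel": 7.9, "polyethylene": 0.94, "wood": 0.6}

      def sample_threat_object():
          """Draw one random 1-D spherical configuration: fissile core plus shielding layers."""
          core_radius_cm = rng.uniform(3.0, 8.0)      # core size drawn from a bounding range
          n_layers = rng.integers(0, 4)               # number of shielding layers (0-3)
          layers = []
          for _ in range(n_layers):
              material = rng.choice(list(MATERIALS))
              thickness_cm = rng.uniform(0.5, 10.0)
              layers.append((material, thickness_cm, MATERIALS[material]))
          return {"core_radius_cm": core_radius_cm, "layers": layers}

      # a small simulated population of unclassified, randomly shielded objects
      population = [sample_threat_object() for _ in range(1000)]
      print(population[0])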

  2. Fuzzy-like multiple objective multistage decision making

    CERN Document Server

    Xu, Jiuping

    2014-01-01

    Decision making has inspired the reflection of many thinkers since ancient times. With the rapid development of science and society, appropriate dynamic decision making has been playing an increasingly important role in many areas of human activity including engineering, management, economy and others. In most real-world problems, decision makers usually have to make decisions sequentially at different points in time and space, at different levels for a component or a system, while facing multiple and conflicting objectives and a hybrid uncertain environment where fuzziness and randomness co-exist in a decision making process. This leads to the development of fuzzy-like multiple objective multistage decision making. This book provides a thorough understanding of the concepts of dynamic optimization from a modern perspective and presents the state-of-the-art methodology for modeling, analyzing and solving the most typical multiple objective multistage decision making practical application problems under fuzzy-like un...

  3. Use of Annotations for Component and Framework Interoperability

    Science.gov (United States)

    David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.

    2009-12-01

    The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0, framework/model integration, previously accomplished using API calls, is now achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively, such as implicit multithreading and auto-documenting capabilities, while achieving a significant reduction in the size of the model source code. Using a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to the framework by the use of specific APIs and/or data types, they can more easily be reused both within the framework and outside of it. To study the effectiveness of an annotation-based framework approach relative to other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. As a next step, the PRMS model was implemented in OMS 3.0 and is currently being implemented for water supply forecasting in the

  4. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up by components known to be similar, data collected on one is also relevant for the management of others. This is typically the case of wind farms, which are made up by similar turbines. Optimal management of wind farms is an important task due to high cost of turbines' operation and maintenance: in this context, we recently proposed a method for planning and learning at system-level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables, and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs, and assumes the corresponding parameters as dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, process observations at system-level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.
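
    The hierarchical idea (component-specific parameters drawn as dependent random variables around shared, system-level hyperparameters) can be sketched generatively in a few lines of Python; the Dirichlet hyperparameters, state space and fleet size are arbitrary illustrative choices and do not reproduce the MU-POMDP inference scheme itself.

      import numpy as np

      rng = np.random.default_rng(3)
      n_turbines, n_states = 5, 3

      # system-level hyperparameters: expected row-wise degradation behaviour shared by all turbines
      shared_transition_prior = np.array([[8.0, 1.5, 0.5],
                                          [0.0, 8.0, 2.0],
                                          [0.0, 0.0, 10.0]]) + 1e-3  # small offset keeps Dirichlet valid

      # each turbine gets its own transition matrix, drawn around the shared prior
      turbine_transitions = np.stack([
          np.vstack([rng.dirichlet(row) for row in shared_transition_prior])
          for _ in range(n_turbines)
      ])

      print("turbine 0 transition matrix:\n", np.round(turbine_transitions[0], 3))
      print("fleet-average transition matrix:\n", np.round(turbine_transitions.mean(axis=0), 3))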

  5. A new model for predicting thermodynamic properties of ternary metallic solution from binary components

    International Nuclear Information System (INIS)

    Fang Zheng; Zhang Quanru

    2006-01-01

    A model has been derived to predict thermodynamic properties of a ternary metallic system from those of its three binaries. In the model, the excess Gibbs free energies and the interaction parameter ω_123 for the three components of a ternary are expressed as a simple sum of those of the three sub-binaries, and the mole fractions of the components of the ternary are identical with those of the sub-binaries. This model is greatly simplified compared with the current symmetrical and asymmetrical models. It is able to overcome some shortcomings of the current models, such as the arrangement of the components in the Gibbs triangle, the conversion of mole fractions between ternary and corresponding binaries, and some necessary processes for optimizing the various parameters of these models. Two ternary systems, Mg-Cu-Ni and Cd-Bi-Pb, are recalculated to demonstrate the validity and precision of the present model. The calculated results on the Mg-Cu-Ni system are better than those in the literature. New parameters in the Margules equations expressing the excess Gibbs free energies of three binary systems of the Cd-Bi-Pb ternary system are also given
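
    To make the summation idea concrete, the Python sketch below evaluates a ternary excess Gibbs energy as a plain sum of three sub-binary two-parameter Margules contributions evaluated at the ternary mole fractions; the Margules coefficients are made-up numbers, not fitted parameters for Mg-Cu-Ni or Cd-Bi-Pb.

      def binary_excess(x_i, x_j, a_ij, a_ji):
          """Two-parameter Margules excess Gibbs energy contribution of the i-j pair (J/mol)."""
          return x_i * x_j * (a_ij * x_j + a_ji * x_i)

      def ternary_excess(x1, x2, x3, params):
          """Ternary excess Gibbs energy as a simple sum over the three sub-binaries."""
          g12 = binary_excess(x1, x2, *params["12"])
          g13 = binary_excess(x1, x3, *params["13"])
          g23 = binary_excess(x2, x3, *params["23"])
          return g12 + g13 + g23

      # illustrative (hypothetical) Margules coefficients in J/mol
      params = {"12": (-4000.0, -2500.0), "13": (1500.0, 2200.0), "23": (-800.0, -1200.0)}
      print(ternary_excess(0.2, 0.3, 0.5, params))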

  6. Interactive object modelling based on piecewise planar surface patches.

    Science.gov (United States)

    Prankl, Johann; Zillich, Michael; Vincze, Markus

    2013-06-01

    Detecting elements such as planes in 3D is essential to describe objects for applications such as robotics and augmented reality. While plane estimation is well studied, table-top scenes exhibit a large number of planes and methods often lock onto a dominant plane or do not estimate 3D object structure but only homographies of individual planes. In this paper we introduce MDL (minimum description length) to the problem of incrementally detecting multiple planar patches in a scene using tracked interest points in image sequences. Planar patches are reconstructed and stored in a keyframe-based graph structure. In case different motions occur, separate object hypotheses are modelled from currently visible patches and patches seen in previous frames. We evaluate our approach on a standard data set published by the Visual Geometry Group at the University of Oxford [24] and on our own data set containing table-top scenes. Results indicate that our approach significantly improves over the state-of-the-art algorithms.

  8. A discrimination-association model for decomposing component processes of the implicit association test.

    Science.gov (United States)

    Stefanutti, Luca; Robusto, Egidio; Vianello, Michelangelo; Anselmi, Pasquale

    2013-06-01

    A formal model is proposed that decomposes the implicit association test (IAT) effect into three process components: stimuli discrimination, automatic association, and termination criterion. Both response accuracy and reaction time are considered. Four independent and parallel Poisson processes, one for each of the four label categories of the IAT, are assumed. The model parameters are the rate at which information accrues on the counter of each process and the amount of information that is needed before a response is given. The aim of this study is to present the model and an illustrative application in which the process components of a Coca-Pepsi IAT are decomposed.
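
    The race between four parallel Poisson counters can be simulated directly in Python: each process accumulates counts at its own rate and a response is produced once the first counter reaches its termination criterion. The rates, thresholds and category labels below are illustrative placeholders, not parameter estimates from the Coca-Pepsi IAT, and the sketch ignores trial-by-trial stimulus assignment.

      import numpy as np

      rng = np.random.default_rng(5)

      # hypothetical accrual rates (counts per second) and termination thresholds per label category
      rates = {"target A": 4.0, "target B": 3.2, "positive": 4.5, "negative": 3.0}
      thresholds = {"target A": 12, "target B": 12, "positive": 12, "negative": 12}

      def simulate_trial():
          """The first process to reach its threshold determines the response and the reaction time."""
          finishing_times = {
              label: rng.gamma(shape=thresholds[label], scale=1.0 / rates[label])
              for label in rates
          }
          response = min(finishing_times, key=finishing_times.get)
          return response, finishing_times[response]

      responses, rts = zip(*(simulate_trial() for _ in range(5000)))
      print("mean RT (s):", round(float(np.mean(rts)), 3))
      print("response proportions:", {r: responses.count(r) / len(responses) for r in rates})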

  9. On hierarchical models for visual recognition and learning of objects, scenes, and activities

    CERN Document Server

    Spehr, Jens

    2015-01-01

    In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore inference approaches for fast and robust detection are presented. These new approaches combine the idea of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition the book shows the use for detection of human poses in a project for gait analysis. The use of activity detection is presented for the design of environments for ageing, to identify activities and behavior patterns in smart homes. In a presented project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model...

  10. Revealing the equivalence of two clonal survival models by principal component analysis

    International Nuclear Information System (INIS)

    Lachet, Bernard; Dufour, Jacques

    1976-01-01

    The principal component analysis of 21 chlorella cell survival curves, adjusted by one-hit and two-hit target models, leads to quite similar projections on the principal plane: the homologous parameters of these models are linearly correlated; the reason for the statistical equivalence of these two models, in the present state of experimental inaccuracy, is revealed [fr]
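
    For reference, a common parameterisation of the two competing descriptions is the one-hit model S(D) = exp(-D/D0) and the two-target model S(D) = 1 - (1 - exp(-D/D0))^2. The quick Python sketch below fits both to a synthetic survival curve so that the correlated homologous parameters can be inspected; the dose grid, noise level and true parameters are arbitrary and do not reproduce the chlorella data.

      import numpy as np
      from scipy.optimize import curve_fit

      def one_hit(D, D0):
          return np.exp(-D / D0)

      def two_target(D, D0):
          return 1.0 - (1.0 - np.exp(-D / D0)) ** 2

      # synthetic survival curve (placeholder for one of the 21 data sets)
      rng = np.random.default_rng(2)
      dose = np.linspace(0.0, 10.0, 15)
      observed = np.clip(two_target(dose, 2.5) + rng.normal(scale=0.02, size=dose.size), 1e-4, 1.0)

      (d0_one,), _ = curve_fit(one_hit, dose, observed, p0=[2.0])
      (d0_two,), _ = curve_fit(two_target, dose, observed, p0=[2.0])
      print(f"fitted D0, one-hit model:    {d0_one:.3f}")
      print(f"fitted D0, two-target model: {d0_two:.3f}")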

  11. Space-time latent component Modeling of Geo-referenced health data

    OpenAIRE

    Lawson, Andrew B.; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-01-01

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find underlying trends in time which are supported by subsets of small areas. Latent structure modeling is one approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made.

  12. An Object-oriented Knowledge Link Model for General Knowledge Management

    OpenAIRE

    Xiao-hong, CHEN; Bang-chuan, LAI

    2005-01-01

    The knowledge link is the basis of knowledge sharing and an indispensable part of knowledge standardization management. In this paper, an object-oriented knowledge link model is proposed for general knowledge management, using object-oriented representation based on a system of knowledge levels. In the model, knowledge links are divided into general knowledge links and integrated knowledge links, with corresponding link properties and methods. In addition, the BNF syntax of the model is described and designed.

  13. A Methodology for Modeling Nuclear Power Plant Passive Component Aging in Probabilistic Risk Assessment under the Impact of Operating Conditions, Surveillance and Maintenance Activities

    Science.gov (United States)

    Guler Yigitoglu, Askin

    In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension, as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs for plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data, which means that the true state of a specific plant with respect to aging effects is not reflected in a realistic manner. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering the effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: the development of a methodology for incorporating aging models of passive SSCs into a reactor simulation environment, providing a framework for evaluating their risk contribution in both dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe welds and steam generator tubes in commercial nuclear power plants. In the proposed methodology, a

  14. A new model for the redundancy allocation problem with component mixing and mixed redundancy strategy

    International Nuclear Information System (INIS)

    Gholinezhad, Hadi; Zeinal Hamadani, Ali

    2017-01-01

    This paper develops a new model for the redundancy allocation problem (RAP). In this paper, like many recent papers, the choice of the redundancy strategy is considered as a decision variable. However, in our model each subsystem can exploit both active and cold-standby strategies simultaneously. Moreover, the model allows for component mixing such that components of different types may be used in each subsystem. The problem, therefore, boils down to determining the types of components, redundancy levels, and number of active and cold-standby units of each type for each subsystem to maximize system reliability by considering such constraints as available budget, weight, and space. Since RAP belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed for solving the problem. Finally, the performance of the proposed algorithm is evaluated by applying it to a well-known test problem from the literature with relatively satisfactory results. - Highlights: • A new model for the redundancy allocation problem in series–parallel systems is proposed. • The redundancy strategy of each subsystem is considered as a decision variable and can be active, cold-standby or mixed. • Component mixing is allowed; in other words, components of any subsystem can be non-identical. • A genetic algorithm is developed for solving the problem. • Computational experiments demonstrate that the new model leads to interesting results.
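
    One way to encode such a solution for a genetic algorithm is sketched below in Python: for every subsystem the chromosome stores, per component type, the number of active and cold-standby units. The component reliabilities, size limits and fitness function are hypothetical, and the reliability evaluation deliberately uses a crude approximation that treats standby units as active, purely to keep the sketch short; the paper's treatment of cold-standby behaviour is more detailed.

      import random

      random.seed(0)
      N_SUBSYSTEMS = 3
      COMPONENT_TYPES = {0: 0.90, 1: 0.93, 2: 0.95}   # hypothetical per-unit reliabilities
      MAX_UNITS = 4

      def random_chromosome():
          """Per subsystem and component type: (number of active units, number of standby units)."""
          return [
              {t: (random.randint(0, MAX_UNITS), random.randint(0, MAX_UNITS)) for t in COMPONENT_TYPES}
              for _ in range(N_SUBSYSTEMS)
          ]

      def subsystem_unreliability(genes):
          """Crude approximation: every allocated unit (active or standby) is treated as active."""
          q = 1.0
          for t, (active, standby) in genes.items():
              q *= (1.0 - COMPONENT_TYPES[t]) ** (active + standby)
          return q

      def system_reliability(chromosome):
          r = 1.0
          for genes in chromosome:
              r *= 1.0 - subsystem_unreliability(genes)
          return r

      c = random_chromosome()
      print(f"example chromosome fitness (system reliability): {system_reliability(c):.6f}")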

  15. Blind source separation based on time-frequency morphological characteristics for rigid acoustic scattering by underwater objects

    Science.gov (United States)

    Yang, Yang; Li, Xiukun

    2016-06-01

    Separation of the components of rigid acoustic scattering by underwater objects is essential in obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with time-frequency distribution is deduced. Using a morphological filter, different characteristics in a Wigner-Ville Distribution (WVD) observed for single auto terms and cross terms can be simplified to remove any cross-term interference. By selecting time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, in order to analyze the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used in separating the rigid scattering structure of underwater objects.

  16. Personalised learning object based on multi-agent model and learners’ learning styles

    Directory of Open Access Journals (Sweden)

    Noppamas Pukkhem

    2011-09-01

    A multi-agent model is proposed in which learning styles and a word analysis technique are used to create a learning object recommendation system. On the basis of a learning style-based design, a concept map combination model is proposed to filter out unsuitable learning concepts from a given course. Our learner model classifies learners into eight styles and implements compatible computational methods consisting of three recommendations: (i) non-personalised, (ii) preferred feature-based, and (iii) neighbour-based collaborative filtering. The analysis of preference error (PE) was performed by comparing the actual preferred learning object with the predicted one. In our experiments, the feature-based recommendation algorithm has the lowest PE.

  17. A review of multi-component maintenance models with economic dependence

    NARCIS (Netherlands)

    R. Dekker (Rommert); R.E. Wildeman (Ralph); F.A. van der Duyn Schouten (Frank)

    1997-01-01

    In this paper we review the literature on multi-component maintenance models with economic dependence. The emphasis is on papers that appeared after 1991, but there is an overlap with Section 2 of the most recent review paper by Cho and Parlar (1991). We distinguish between stationary

  18. Exploiting object constancy: effects of active exploration and shape morphing on similarity judgments of novel objects.

    Science.gov (United States)

    Lee, Haemy; Wallraven, Christian

    2013-03-01

    Humans are experts at shape processing. This expertise has been learned and fine tuned by actively manipulating and perceiving thousands of objects during development. Therefore, shape processing possesses an active component and a perceptual component. Here, we investigate both components in six experiments in which participants view and/or interact with novel, parametrically defined 3D objects using a touch-screen interface. For probing shape processing, we use a similarity rating task. In Experiments 1-3, we show that active manipulation leads to a better perceptual reconstruction of the physical parameter space than judging rotating objects, or passively viewing someone else's exploration pattern. In Experiment 4, we exploit object constancy-the fact that the visual system assumes that objects do not change their identity during manipulation. We show that slow morphing of an object during active manipulation systematically biases similarity ratings-despite the participants being unaware of the morphing. Experiments 5 and 6 investigate the time course of integrating shape information by restricting the morphing to the first and second half of the trial only. Interestingly, the results indicate that participants do not seem to integrate shape information beyond 5 s of exploration time. Finally, Experiment 7 uses a secondary task that suggests that the previous results are not simply due to lack of attention during the later parts of the trial. In summary, our results demonstrate the advantage of active manipulation for shape processing and indicate a continued, perceptual integration of complex shape information within a time window of a few seconds during object interactions.

  19. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    Science.gov (United States)

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this remains difficult for computers. Object classification is therefore a challenging problem that has been receiving extensive interest. Inspired by neuroscience, the deep learning concept was proposed, and the convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. However, most deep learning methods, including CNN, ignore the human visual information processing mechanism that operates when a person classifies objects. Therefore, in this paper, drawing on the complete process by which humans classify different kinds of objects, we put forward a new classification method that combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features and to extract the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method therefore has clear advantages in terms of biological plausibility. Experimental results demonstrate that our method significantly improves classification performance.

  20. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    In the present paper, we have considered the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and have discussed two different models. In the first model the reliabilities of the subsystems are considered as different objectives. In the second model the cost and the time spent on repairing the components are considered as two different objectives. These two models are formulated as multi-objective Nonlinear Programming Problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model: we define the membership functions of each objective function, transform them into equivalent linear membership functions by a first-order Taylor series and, finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.