WorldWideScience

Sample records for modeling graphics objects

  1. Graphic design as urban design: towards a theory for analysing graphic objects in urban environments.

    OpenAIRE

    2011-01-01

    This thesis presents a model for analysing the graphic object as urban object, by considering atypical fields of discourse that contribute to the formation of the object domain. The question: what is graphic design as urban design? directs the research through an epistemological design study comprising: an interrogation of graphic design studio practice and the articulation of graphic design research questions; a review and subsequent development of research strategy, design and method toward...

  2. Graphical Models with R

    DEFF Research Database (Denmark)

    Højsgaard, Søren; Edwards, David; Lauritzen, Steffen

    The book provides examples of how more advanced aspects of graphical modeling can be represented and handled within R. Topics covered in the seven chapters include graphical models for contingency tables, Gaussian and mixed graphical models, Bayesian networks and modeling high dimensional data...

  3. Grid OCL : A Graphical Object Connecting Language

    Science.gov (United States)

    Taylor, I. J.; Schutz, B. F.

    In this paper, we present an overview of the Grid OCL graphical object connecting language. Grid OCL is an extension of Grid, introduced last year, that allows users to interactively build complex data processing systems by selecting a set of desired tools and connecting them together graphically. Algorithms written in this way can now also be run outside the graphical environment.

  4. Graphical Models with R

    DEFF Research Database (Denmark)

    Højsgaard, Søren; Edwards, David; Lauritzen, Steffen

    Graphical models in their modern form have been around since the late 1970s and appear today in many areas of the sciences. Along with the ongoing developments of graphical models, a number of different graphical modeling software programs have been written over the years. In recent years many of these software developments have taken place within the R community, either in the form of new packages or by providing an R interface to existing software. This book attempts to give the reader a gentle introduction to graphical modeling using R and the main features of some of these packages. In addition, the book provides examples of how more advanced aspects of graphical modeling can be represented and handled within R. Topics covered in the seven chapters include graphical models for contingency tables, Gaussian and mixed graphical models, Bayesian networks and modeling high dimensional data...

  5. Introduction to Graphical Modelling

    CERN Document Server

    Scutari, Marco

    2010-01-01

    The aim of this chapter is twofold. In the first part we will provide a brief overview of the mathematical and statistical foundations of graphical models, along with their fundamental properties, estimation and basic inference procedures. In particular we will develop Markov networks (also known as Markov random fields) and Bayesian networks, which comprise most past and current literature on graphical models. In the second part we will review some applications of graphical models in systems biology.
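
    For readers new to the notation, the two kinds of models named above factorize the joint distribution in the following standard forms (generic textbook notation, not specific to this chapter):

        % Bayesian network: the joint distribution factorizes over the parents in a DAG
        p(x_1,\dots,x_n) = \prod_{i=1}^{n} p\bigl(x_i \mid \mathrm{pa}(x_i)\bigr)

        % Markov network (Markov random field): the joint factorizes over cliques C, with normalizer Z
        p(x_1,\dots,x_n) = \frac{1}{Z} \prod_{C \in \mathcal{C}} \psi_C(x_C)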

  6. Graphical Models with R

    CERN Document Server

    Højsgaard, Søren; Lauritzen, Steffen

    2012-01-01

    Graphical models in their modern form have been around since the late 1970s and appear today in many areas of the sciences. Along with the ongoing developments of graphical models, a number of different graphical modeling software programs have been written over the years. In recent years many of these software developments have taken place within the R community, either in the form of new packages or by providing an R interface to existing software. This book attempts to give the reader a gentle introduction to graphical modeling using R and the main features of some of these packages. In add

  7. Graphical Modeling Language Tool

    NARCIS (Netherlands)

    Rumnit, M.

    2003-01-01

    The group of the faculty EE-Math-CS of the University of Twente is developing a graphical modeling language for specifying concurrency in software design. This graphical modeling language has a mathematical background based on the theory of CSP. This language contains the power to create trustworthy...

  8. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes...... is largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...

  9. Object-oriented graphics programming in C++

    CERN Document Server

    Stevens, Roger T

    2014-01-01

    Object-Oriented Graphics Programming in C++ provides programmers with the information needed to produce realistic pictures on a PC monitor screen.The book is comprised of 20 chapters that discuss the aspects of graphics programming in C++. The book starts with a short introduction discussing the purpose of the book. It also includes the basic concepts of programming in C++ and the basic hardware requirement. Subsequent chapters cover related topics in C++ programming such as the various display modes; displaying TGA files, and the vector class. The text also tackles subjects on the processing

  10. Grid : A Graphical Object Connecting Language

    Science.gov (United States)

    Taylor, Ian; Schutz, Bernard

    1997-08-01

    Signal-processing systems are fast becoming an essential tool within the scientific community. This is primarily due to the need for constructing large complex algorithms which would take many hours of work to code using conventional programming languages. Grid OCL (Object Connecting Language) is a graphical interactive multi-threaded environment allowing users to create complex algorithms by creating a flexible object-oriented block diagram of the analysis required. Algorithms can be run incrementally or continuously e.g. when analysing long data streams. Grid (1.0 alpha) is written in Java (about 37,000 lines of code) and is currently distributed free of charge over the WWW. Although originally developed to analyse data taken from the GEO 600 gravitational wave detector, its use has been much more widespread e.g. collaborators are working on tools for various signal and image problems, multimedia teaching aids and even to construct a musical composition system.

  11. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating.

  12. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas...

  13. Modelling structured data with Probabilistic Graphical Models

    Science.gov (United States)

    Forbes, F.

    2016-05-01

    Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption unrealistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation, where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph that is not necessarily regular like a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modeling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described. Some illustrations associated with practical work are given.
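
    For reference, a standard hidden Potts/MRF formulation of the segmentation task sketched above (an illustrative textbook form, not necessarily the exact model used in the chapter) couples a Markov random field prior on the labels z with conditionally independent observations y:

        % Potts prior: neighbouring pixels i ~ j prefer equal labels; beta controls spatial smoothness
        p(z_1,\dots,z_N) = \frac{1}{Z(\beta)} \exp\Bigl( \beta \sum_{i \sim j} \mathbb{1}(z_i = z_j) \Bigr)

        % Hidden MRF: pixel observations are independent given their labels
        p(y \mid z) = \prod_{i=1}^{N} p(y_i \mid z_i)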

  14. Graphical Models for Bandit Problems

    CERN Document Server

    Amin, Kareem; Syed, Umar

    2012-01-01

    We introduce a rich class of graphical models for multi-armed bandit problems that permit both the state or context space and the action space to be very large, yet succinctly specify the payoffs for any context-action pair. Our main result is an algorithm for such models whose regret is bounded by the number of parameters and whose running time depends only on the treewidth of the graph substructure induced by the action space.

  15. An object oriented multi-robotic graphic simulation environment for programming the welding tasks

    Institute of Scientific and Technical Information of China (English)

    崔泽; 赵杰; 崔岩; 蔡鹤皋

    2002-01-01

    An object-oriented multi-robotic graphic simulation environment is described in this paper. Object-oriented programming is used to model the physical objects of the robotic workcell in the form of software objects or classes. The virtual objects are defined to provide the user with a user-friendly interface including realistic graphic simulation and clarify the software architecture. The programming method of associating the task object with active object effectively increases the software reusability, maintainability and modifiability. Task level programming is also demonstrated through a multi-robot welding task that allows the user to concentrate on the most important aspects of the tasks. The multi-thread programming technique is used to simulate the interaction of multiple tasks. Finally, a virtual test is carried out in the graphic simulation environment to observe design and program errors and fix them before downloading the software to the real workcell.

  16. A State Articulated Instructional Objectives Guide for Occupational Education Programs. State Pilot Model for Drafting (Graphic Communications). Part I--Basic. Part II--Specialty Programs. Section A (Mechanical Drafting and Design). Section B (Architectural Drafting and Design).

    Science.gov (United States)

    North Carolina State Dept. of Community Colleges, Raleigh.

    A two-part articulation instructional objective guide for drafting (graphic communications) is provided. Part I contains summary information on seven blocks (courses) of instruction. They are as follow: introduction; basic technical drafting; problem solving in graphics; reproduction processes; freehand drawing and sketching; graphics composition;…

  17. Graphical modelling software in R - status

    DEFF Research Database (Denmark)

    Dethlefsen, Claus; Højsgaard, Søren; Lauritzen, Steffen L.

    Graphical models in their modern form have been around for nearly a quarter of a century. Various computer programs for inference in graphical models have been developed over that period. Some examples of free software programs are BUGS (Thomas 1994), CoCo (Badsberg 2001), Digram (Klein, Keiding...

  18. Graphical Models and Computerized Adaptive Testing.

    Science.gov (United States)

    Mislevy, Robert J.; Almond, Russell G.

    This paper synthesizes ideas from the fields of graphical modeling and education testing, particularly item response theory (IRT) applied to computerized adaptive testing (CAT). Graphical modeling can offer IRT a language for describing multifaceted skills and knowledge, and disentangling evidence from complex performances. IRT-CAT can offer…

  19. Mastering probabilistic graphical models using Python

    CERN Document Server

    Ankan, Ankur

    2015-01-01

    If you are a researcher or a machine learning enthusiast, or are working in the data science field and have a basic idea of Bayesian learning or probabilistic graphical models, this book will help you to understand the details of graphical models and use them in your data science problems.

  20. Extended Bayesian Information Criteria for Gaussian Graphical Models

    CERN Document Server

    Foygel, Rina

    2010-01-01

    Gaussian graphical models with sparsity in the inverse covariance matrix are of significant interest in many modern applications. For the problem of recovering the graphical structure, information criteria provide useful optimization objectives for algorithms searching through sets of graphs or for selection of tuning parameters of other methods such as the graphical lasso, which is a likelihood penalization technique. In this paper we establish the consistency of an extended Bayesian information criterion for Gaussian graphical models in a scenario where both the number of variables p and the sample size n grow. Compared to earlier work on the regression case, our treatment allows for growth in the number of non-zero parameters in the true model, which is necessary in order to cover connected graphs. We demonstrate the performance of this criterion on simulated data when used in conjunction with the graphical lasso, and verify that the criterion indeed performs better than either cross-validation or the ordi...
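
    A minimal sketch of how such a criterion can be used with the graphical lasso, written against scikit-learn and NumPy (the function and parameter names below are illustrative; the EBIC form follows the extended BIC described above, up to additive constants):

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        def ebic_score(X, alpha, gamma=0.5):
            """Fit the graphical lasso at penalty alpha and return an extended BIC value."""
            n, p = X.shape
            Theta = GraphicalLasso(alpha=alpha).fit(X).precision_   # estimated inverse covariance
            S = np.cov(X, rowvar=False)
            # Gaussian log-likelihood up to an additive constant
            loglik = 0.5 * n * (np.linalg.slogdet(Theta)[1] - np.trace(S @ Theta))
            # number of edges = non-zero off-diagonal entries of Theta (counted once)
            edges = (np.count_nonzero(np.abs(Theta) > 1e-8) - p) // 2
            return -2.0 * loglik + edges * np.log(n) + 4.0 * edges * gamma * np.log(p)

        # usage: pick the penalty with the smallest EBIC over a small grid
        # X = np.random.randn(200, 10)
        # best_alpha = min([0.01, 0.05, 0.1, 0.2], key=lambda a: ebic_score(X, a))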

  1. Interactive graphics for geometry modeling

    Science.gov (United States)

    Wozny, M. J.

    1984-01-01

    An interactive vector capability to create geometry and a raster color shaded rendering capability to sample and verify interim geometric design steps through color snapshots are described. The development of the underlying methodology which facilitates computer aided engineering and design is outlined. At present, raster systems cannot match the interactivity and line-drawing capability of refresh vector systems. Consequently, an intermediate step in mechanical design is used to create objects interactively on the vector display and then scan convert the wireframe model to render it as a color shaded object on a raster display. Several algorithms are presented for rendering such objects. Superquadric solid primitives extend the class of primitives normally used in solid modelers.

  2. Learning Undirected Graphical Models with Structure Penalty

    CERN Document Server

    Ding, Shilin

    2011-01-01

    In undirected graphical models, learning the graph structure and learning the functions that relate the predictive variables (features) to the responses given the structure are two topics that have been widely investigated in machine learning and statistics. Learning graphical models in two stages will have problems because graph structure may change after considering the features. The main contribution of this paper is the proposed method that learns the graph structure and functions on the graph at the same time. General graphical models with binary outcomes conditioned on predictive variables are proved to be equivalent to multivariate Bernoulli model. The reparameterization of the potential functions in graphical model by conditional log odds ratios in multivariate Bernoulli model offers advantage in the representation of the conditional independence structure in the model. Additionally, we impose a structure penalty on groups of conditional log odds ratios to learn the graph structure. These groups of fu...

  3. Object and Sentiment Analysis of Texts Based on Probabilistic Graphical Model

    Institute of Scientific and Technical Information of China (English)

    赵鸿艳; 王素格; 许超逸

    2014-01-01

    Sentiment analysis aims to automatically detect subjective sentiment such as opinions, attitudes and feelings in texts. Online reviews usually relate to a specific object. In order to detect object information from texts, this paper proposes an unsupervised object and sentiment unification model (UOSU model), which adds an object plate to the JST model. The UOSU model is fully unsupervised and samples object, sentiment and topic labels for every word simultaneously, finally obtaining the object-sentiment words of each topic as well as the object-sentiment distribution of the text. Sentiment classification experiments on a car review dataset achieved a precision of 74.19% and a recall of 73.97%.

  4. Accelerated space object tracking via graphic processing unit

    Science.gov (United States)

    Jia, Bin; Liu, Kui; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    In this paper, a hybrid Monte Carlo Gauss mixture Kalman filter is proposed for the continuous orbit estimation problem. Specifically, the graphic processing unit (GPU) aided Monte Carlo method is used to propagate the uncertainty of the estimation when the observation is not available and the Gauss mixture Kalman filter is used to update the estimation when the observation sequences are available. A typical space object tracking problem using the ground radar is used to test the performance of the proposed algorithm. The performance of the proposed algorithm is compared with the popular cubature Kalman filter (CKF). The simulation results show that the ordinary CKF diverges in 5 observation periods. In contrast, the proposed hybrid Monte Carlo Gauss mixture Kalman filter achieves satisfactory performance in all observation periods. In addition, by using the GPU, the computational time is over 100 times less than that using the conventional central processing unit (CPU).
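
    As an illustration of the Gauss mixture (Gaussian-sum) Kalman update step described above, a plain NumPy sketch of one measurement update is given below (a generic textbook form, not the authors' GPU implementation; array shapes and names are illustrative):

        import numpy as np

        def mixture_kalman_update(weights, means, covs, H, R, z):
            """One measurement update of a Gaussian-sum Kalman filter.
            weights: (K,), means: (K, n), covs: (K, n, n); H: (m, n), R: (m, m), z: (m,)."""
            new_w, new_m, new_P = [], [], []
            for w, m, P in zip(weights, means, covs):
                S = H @ P @ H.T + R                        # innovation covariance
                K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
                innov = z - H @ m                          # measurement residual
                new_m.append(m + K @ innov)
                new_P.append((np.eye(len(m)) - K @ H) @ P)
                # reweight the component by the likelihood of the measurement
                lik = np.exp(-0.5 * innov @ np.linalg.solve(S, innov))
                lik /= np.sqrt(np.linalg.det(2.0 * np.pi * S))
                new_w.append(w * lik)
            new_w = np.array(new_w)
            return new_w / new_w.sum(), np.array(new_m), np.array(new_P)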

  5. Graphical Model Debugger Framework for Embedded Systems

    DEFF Research Database (Denmark)

    Zeng, Kebin; Guo, Yu; Angelov, Christo K.

    2010-01-01

    Model Driven Software Development has offered a faster way to design and implement embedded real-time software by moving the design to a model level, and by transforming models to code. However, the testing of embedded systems has remained at the code level. This paper presents a Graphical Model Debugger Framework, providing an auxiliary avenue of analysis of system models at runtime by executing generated code and updating models synchronously, which allows embedded developers to focus on the model level. With the model debugger, embedded developers can graphically test their design model...

  6. Probabilistic graphical model representation in phylogenetics.

    Science.gov (United States)

    Höhna, Sebastian; Heath, Tracy A; Boussau, Bastien; Landis, Michael J; Ronquist, Fredrik; Huelsenbeck, John P

    2014-09-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis-Hastings or Gibbs sampling of the posterior distribution.

  7. Item Screening in Graphical Loglinear Rasch Models

    DEFF Research Database (Denmark)

    Kreiner, Svend; Christensen, Karl Bang

    2011-01-01

    In behavioural sciences, local dependence and DIF are common, and purification procedures that eliminate items with these weaknesses often result in short scales with poor reliability. Graphical loglinear Rasch models (Kreiner & Christensen, in Statistical Methods for Quality of Life Studies, ed. by M. Mesbah, F.C. Cole & M.T. Lee, Kluwer Academic, pp. 187–203, 2002), where uniform DIF and uniform local dependence are permitted, solve this dilemma by modelling the local dependence and DIF. Identifying loglinear Rasch models by a stepwise model search is often very time consuming, since the initial item analysis may disclose a great deal of spurious and misleading evidence of DIF and local dependence that has to be disposed of during the modelling procedure. Like graphical models, graphical loglinear Rasch models possess Markov properties that are useful during the statistical analysis...

  8. Graphical Model Theory for Wireless Sensor Networks

    Energy Technology Data Exchange (ETDEWEB)

    Davis, William B.

    2002-12-08

    Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm.

  9. Graphical Model Debugger Framework for Embedded Systems

    DEFF Research Database (Denmark)

    Zeng, Kebin

    2010-01-01

    Model Driven Software Development has offered a faster way to design and implement embedded real-time software by moving the design to a model level, and by transforming models to code. However, the testing of embedded systems has remained at the code level. This paper presents a Graphical Model Debugger Framework, providing an auxiliary avenue of analysis of system models at runtime by executing generated code and updating models synchronously, which allows embedded developers to focus on the model level. With the model debugger, embedded developers can graphically test their design model and check the running status of the system, which offers a debugging capability on a higher level of abstraction. The framework intends to contribute a tool to the Eclipse society, especially suitable for model-driven development of embedded systems.

  10. Building probabilistic graphical models with Python

    CERN Document Server

    Karkera, Kiran R

    2014-01-01

    This is a short, practical guide that allows data scientists to understand the concepts of Graphical models and enables them to try them out using small Python code snippets, without being too mathematically complicated. If you are a data scientist who knows about machine learning and want to enhance your knowledge of graphical models, such as Bayes network, in order to use them to solve real-world problems using Python libraries, this book is for you. This book is intended for those who have some Python and machine learning experience, or are exploring the machine learning field.
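
    A small snippet of the kind such a book walks through, sketched here with the pgmpy library (an assumption; the book may use a different package, and older pgmpy versions name the class BayesianModel), builds a tiny Bayes network and queries it:

        from pgmpy.models import BayesianNetwork
        from pgmpy.factors.discrete import TabularCPD
        from pgmpy.inference import VariableElimination

        # two causes (Rain, Sprinkler) of one effect (WetGrass)
        model = BayesianNetwork([("Rain", "WetGrass"), ("Sprinkler", "WetGrass")])
        model.add_cpds(
            TabularCPD("Rain", 2, [[0.8], [0.2]]),
            TabularCPD("Sprinkler", 2, [[0.6], [0.4]]),
            TabularCPD("WetGrass", 2,
                       [[1.0, 0.1, 0.2, 0.01],    # P(WetGrass=0 | Rain, Sprinkler)
                        [0.0, 0.9, 0.8, 0.99]],   # P(WetGrass=1 | Rain, Sprinkler)
                       evidence=["Rain", "Sprinkler"], evidence_card=[2, 2]),
        )
        assert model.check_model()
        infer = VariableElimination(model)
        print(infer.query(["Rain"], evidence={"WetGrass": 1}))   # posterior P(Rain | wet grass)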

  11. Transforming Graphical System Models to Graphical Attack Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, Rene Rydhof;

    2016-01-01

    Manually identifying possible attacks on an organisation is a complex undertaking; many different factors must be considered, and the resulting attack scenarios can be complex and hard to maintain as the organisation changes. System models provide a systematic representation of organisations that...

  12. Transforming graphical system models to graphical attack models

    NARCIS (Netherlands)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof; Kammüller, Florian; Mauw, S.; Kordy, B.

    2015-01-01

    Manually identifying possible attacks on an organisation is a complex undertaking; many different factors must be considered, and the resulting attack scenarios can be complex and hard to maintain as the organisation changes. System models provide a systematic representation of organisations that he

  13. Planar graphical models which are easy

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Chernyak, Vladimir [WAYNE STATE UNIV

    2009-01-01

    We describe a rich family of binary-variable statistical mechanics models on planar graphs which are equivalent to Gaussian Grassmann graphical models (free fermions). Calculation of the partition function (weighted counting) in these models is easy (of polynomial complexity), as it reduces to the evaluation of determinants of matrices whose size is linear in the number of variables. In particular, this family of models covers the Holographic Algorithms of Valiant and extends the Gauge Transformations discussed in our previous works.

  14. A graphical approach to analogue behavioural modelling

    OpenAIRE

    Moser, Vincent; Nussbaum, Pascal; Amann, Hans-Peter; Astier, Luc; Pellandini, Fausto

    2007-01-01

    In order to master the growing complexity of analogue electronic systems, modelling and simulation of analogue hardware at various levels is absolutely necessary. This paper presents an original modelling method based on the graphical description of analogue electronic functional blocks. This method is intended to be automated and integrated into a design framework: specialists create behavioural models of existing functional blocks, that can then be used through high-level selection and spec...

  15. Image segmentation with a unified graphical model.

    Science.gov (United States)

    Zhang, Lei; Ji, Qiang

    2010-08-01

    We propose a unified graphical model that can represent both the causal and noncausal relationships among random variables and apply it to the image segmentation problem. Specifically, we first propose to employ Conditional Random Field (CRF) to model the spatial relationships among image superpixel regions and their measurements. We then introduce a multilayer Bayesian Network (BN) to model the causal dependencies that naturally exist among different image entities, including image regions, edges, and vertices. The CRF model and the BN model are then systematically and seamlessly combined through the theories of Factor Graph to form a unified probabilistic graphical model that captures the complex relationships among different image entities. Using the unified graphical model, image segmentation can be performed through a principled probabilistic inference. Experimental results on the Weizmann horse data set, on the VOC2006 cow data set, and on the MSRC2 multiclass data set demonstrate that our approach achieves favorable results compared to state-of-the-art approaches as well as those that use either the BN model or CRF model alone.

  16. GRAPHICAL MODELS OF THE AIRCRAFT MAINTENANCE PROCESS

    Directory of Open Access Journals (Sweden)

    Stanislav Vladimirovich Daletskiy

    2017-01-01

    Full Text Available The aircraft maintenance process is realized as a rapid sequence of maintenance organizational and technical states, and its research and analysis are carried out by statistical methods. The maintenance process comprises aircraft technical states connected with the objective patterns of change in the technical qualities of the aircraft as a maintenance object, and organizational states which determine the subjective organization and planning of aircraft use. The objective maintenance process is realized in the Maintenance and Repair System, which does not include maintenance organization and planning and is a set of related elements: aircraft, Maintenance and Repair measures, executors and documentation that set the rules of their interaction for maintaining aircraft reliability and readiness for flight. The aircraft organizational and technical states are considered, and their characteristics and heuristic estimates of connections in the knots and arcs of graphs, and of aircraft organizational states during regular maintenance and at technical state failure, are given. It is shown that in real conditions of aircraft maintenance, planned control of the aircraft technical state, and maintenance control through it, is defined only by Maintenance and Repair conditions for a given Maintenance and Repair type and form structure, and correspondingly by the principles of assigning Maintenance and Repair work types for execution, through the maintenance and reconstruction strategies of the aircraft and all its units. The realization of the planned Maintenance and Repair process determines one of the constant maintenance components. The proposed graphical models allow quantitative correlations between graph knots to be revealed in order to improve maintenance processes by statistical research methods, which reduces manning, timetables and expenses for providing safe civil aviation aircraft maintenance.

  17. Graphical model construction based on evolutionary algorithms

    Institute of Scientific and Technical Information of China (English)

    Youlong YANG; Yan WU; Sanyang LIU

    2006-01-01

    Using Bayesian networks to model promising solutions from the current population of an evolutionary algorithm can ensure an efficient and intelligent search for the optimum. However, constructing a Bayesian network that fits a given dataset is an NP-hard problem, and it also consumes considerable computational resources. This paper develops a methodology for constructing a graphical model based on the Bayesian Dirichlet metric. Our approach is derived from a set of propositions and theorems obtained by researching the local metric relationship of networks matching the dataset. The paper presents an algorithm to construct a tree model from a set of potential solutions using the above approach. This method is important not only for evolutionary algorithms based on graphical models, but also for machine learning and data mining. The experimental results show that the exact theoretical results and the approximations match very well.
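
    The Bayesian Dirichlet metric referred to above scores a candidate structure G against data D with the standard marginal-likelihood form (as in the BD/BDeu family; the notation here is the usual textbook one, not necessarily the paper's):

        P(D \mid G) = \prod_{i=1}^{n} \prod_{j=1}^{q_i}
            \frac{\Gamma(\alpha_{ij})}{\Gamma(\alpha_{ij} + N_{ij})}
            \prod_{k=1}^{r_i} \frac{\Gamma(\alpha_{ijk} + N_{ijk})}{\Gamma(\alpha_{ijk})}

    where N_{ijk} counts the cases in which variable i takes its k-th value while its parents are in their j-th configuration, N_{ij} = \sum_k N_{ijk}, and the \alpha_{ijk} are Dirichlet prior pseudo-counts with \alpha_{ij} = \sum_k \alpha_{ijk}.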

  18. Whole versus Part Presentations of the Interactive 3D Graphics Learning Objects

    Science.gov (United States)

    Azmy, Nabil Gad; Ismaeel, Dina Ahmed

    2010-01-01

    The purpose of this study is to present an analysis of how the structure and design of the Interactive 3D Graphics Learning Objects can be effective and efficient in terms of Performance, Time on task, and Learning Efficiency. The study explored two treatments, namely whole versus Part Presentations of the Interactive 3D Graphics Learning Objects,…

  19. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models...

  20. Graphical Models for Optimal Power Flow

    CERN Document Server

    Dvijotham, Krishnamurthy; Chertkov, Michael; Misra, Sidhant; Vuffray, Marc

    2016-01-01

    Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary distribution networks an...

  1. Graphical representations of Ising and Potts models

    CERN Document Server

    Björnberg, Jakob E

    2010-01-01

    We study graphical representations for two related models. The first model is the transverse field quantum Ising model, an extension of the original Ising model which was introduced by Lieb, Schultz and Mattis in the 1960s. The second model is the space-time percolation process, which is closely related to the contact model for the spread of disease. We consider a `space-time' random-cluster model and explore a range of useful probabilistic techniques for studying it. The space-time Potts model emerges as a natural generalization of the quantum Ising model. The basic properties of the phase transitions in these models are treated, such as the fact that there is at most one unbounded FK-cluster, and the resulting lower bound on the critical value in $\mathbb{Z}$. We also develop an alternative graphical representation of the quantum Ising model, called the random-parity representation. This representation is based on the random-current representation of the classical Ising model, and allows us to study in much great...

  2. GOAL: Towards Understanding of Graphic Objects from Architectural to Line Drawings

    Science.gov (United States)

    Pal, Shyamosree; Bhowmick, Partha; Biswas, Arindam; Bhattacharya, Bhargab B.

    Understanding of graphic objects has become a problem of pertinence in today's context of digital documentation and document digitization, since graphic information in a document image may be present in several forms, such as engineering drawings, architectural plans, musical scores, tables, charts, extended objects, hand-drawn sketches, etc. There exist quite a few approaches for segmentation of graphics from text, and also a separate set of techniques for recognizing a graphic and its characteristic features. This paper introduces a novel geometric algorithm that performs the task of segmenting out all the graphic objects in a document image and subsequently also works as a high-level tool to classify various graphic types. Given a document image, it performs the text-graphics segmentation by analyzing the geometric features of the minimum-area isothetic polygonal covers of all the objects for varying grid spacing, g. As the shape and size of a polygonal cover depend on g, and each isothetic polygon is represented by an ordered sequence of its vertices, the spatial relationship of the polygons corresponding to a higher grid spacing with those corresponding to a lower spacing is used for graphics segmentation and subsequent classification. Experimental results demonstrate its efficiency, elegance, and versatility.

  3. Structure Learning of Probabilistic Graphical Models: A Comprehensive Survey

    CERN Document Server

    Zhou, Yang

    2011-01-01

    Probabilistic graphical models combine graph theory and probability theory to give a multivariate statistical modeling framework. They provide a unified description of uncertainty using probability and of complexity using the graphical model. In particular, graphical models provide the following useful properties:
    - Graphical models provide a simple and intuitive interpretation of the structures of probabilistic models. On the other hand, they can be used to design and motivate new models.
    - Graphical models provide additional insights into the properties of the model, including the conditional independence properties.
    - Complex computations which are required to perform inference and learning in sophisticated models can be expressed in terms of graphical manipulations, in which the underlying mathematical expressions are carried along implicitly.
    Graphical models have been applied to a large number of fields, including bioinformatics, social science, control theory, image processing, marketing analysis, amon...

  4. Graphical models and point pattern matching.

    Science.gov (United States)

    Caetano, Tibério S; Caelli, Terry; Schuurmans, Dale; Barone, Dante A C

    2006-10-01

    This paper describes a novel solution to the rigid point pattern matching problem in Euclidean spaces of any dimension. Although we assume rigid motion, jitter is allowed. We present a noniterative, polynomial time algorithm that is guaranteed to find an optimal solution for the noiseless case. First, we model point pattern matching as a weighted graph matching problem, where weights correspond to Euclidean distances between nodes. We then formulate graph matching as a problem of finding a maximum probability configuration in a graphical model. By using graph rigidity arguments, we prove that a sparse graphical model yields equivalent results to the fully connected model in the noiseless case. This allows us to obtain an algorithm that runs in polynomial time and is provably optimal for exact matching between noiseless point sets. For inexact matching, we can still apply the same algorithm to find approximately optimal solutions. Experimental results obtained by our approach show improvements in accuracy over current methods, particularly when matching patterns of different sizes.

  5. Graphical Models Concepts in Compressed Sensing

    CERN Document Server

    Montanari, Andrea

    2010-01-01

    This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via $\ell_1$ penalized least-squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows us to prove exact high-dimensional limit results for the LASSO risk. This paper will appear as a chapter in a book on 'Compressed Sensing' edited by Yonina Eldar and Gitta Kutyniok.
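
    The $\ell_1$ penalized least-squares problem (LASSO/BPDN) that the survey centres on has the standard form

        \hat{x} = \arg\min_{x} \; \tfrac{1}{2} \lVert y - A x \rVert_2^2 + \lambda \lVert x \rVert_1

    where y is the vector of measurements, A the sensing matrix, and \lambda > 0 the regularization parameter.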

  6. Method of formation of individual graphic style among students, studying projection objects of graphic design

    Directory of Open Access Journals (Sweden)

    Lyudmila Sinitsyna

    2015-04-01

    Full Text Available Projection of design objects is the main type of professional activity of designers, a creative and almost unpredictable process. The skills of a designer are in demand by society when that designer can, at any time and without much effort, develop in a short period a design object that can easily be implemented in production and will remain in demand by consumers for a long time, almost without special modifications. Every teacher of design thinks about the ways and methods of teaching the design stages. The author offers a way to acquire design skills based on the simultaneous execution of two tasks with different goals, objectives and methods of depiction.

  7. Spectron: Graphical Model for Interacting With Timbre

    Directory of Open Access Journals (Sweden)

    Daniel Gómez

    2009-06-01

    Full Text Available The algorithms for creating and manipulating sound by electronic or digital means have grown in number and complexity since the creation of the first analog synthesizers. The techniques for visualizing these synthesis models have not grown along with synthesizers, neither in hardware nor in software. In this paper, the possibilities for graphically representing and controlling timbre are presented, based on displaying the parameters involved in its synthesis model. A very simple data set was extracted from a commercial subtractive synthesizer and analyzed with two different approaches, dimensionality reduction and abstract data visualization. The results of these two approaches were used as leads to design a synthesizer prototype: the Spectron synthesizer. This prototype uses an amplitude vs. frequency graphic as its main interface to give information about the timbre and to interact with it; its controls offer a simplification of the number of variables of a classic oscillator and expand its possibilities to generate additional timbres.

  8. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent ... of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss...

  9. Object Oriented Modeling and Design

    Science.gov (United States)

    Shaykhian, Gholam Ali

    2007-01-01

    The Object Oriented Modeling and Design seminar is intended for software professionals and students. It covers the concepts and a language-independent graphical notation that can be used to analyze problem requirements and design a solution to the problem. The seminar discusses the three kinds of object-oriented models: class, state, and interaction. The class model represents the static structure of a system, the state model describes the aspects of a system that change over time as well as control behavior, and the interaction model describes how objects collaborate to achieve overall results. Existing knowledge of object oriented programming may benefit the learning of modeling and good design. Specific expectations are: create a class model; read, recognize, and describe a class model; describe association and link; show abstract classes used with multiple inheritance; explain metadata, reification and constraints; group classes into a package; read, recognize, and describe a state model; explain states and transitions; read, recognize, and describe an interaction model; explain use cases and use case relationships; show concurrency in an activity diagram; and show object interactions in a sequence diagram.

  10. Formal Analysis of Graphical Security Models

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi

    The increasing usage of computer-based systems in almost every aspect of our daily life makes the threat posed by potential attackers ever more dangerous, and a successful attack ever more rewarding. Moreover, the complexity of these systems is also increasing, including physical devices, software components and human actors interacting with each other to form so-called socio-technical systems. The importance of socio-technical systems to modern societies requires verifying their security properties formally, while their inherent complexity makes manual analyses impracticable. Graphical models for security offer an unrivalled opportunity to describe socio-technical systems, for they allow different aspects like human behaviour, computation and physical phenomena to be represented in an abstract yet uniform manner. Moreover, these models can be assigned a formal semantics, thereby allowing...

  11. Lipschitz Parametrization of Probabilistic Graphical Models

    CERN Document Server

    Honorio, Jean

    2012-01-01

    We show that the log-likelihood of several probabilistic graphical models is Lipschitz continuous with respect to the lp-norm of the parameters. We discuss several implications of Lipschitz parametrization. We present an upper bound of the Kullback-Leibler divergence that allows understanding methods that penalize the lp-norm of differences of parameters as the minimization of that upper bound. The expected log-likelihood is lower bounded by the negative lp-norm, which allows understanding the generalization ability of probabilistic models. The exponential of the negative lp-norm is involved in the lower bound of the Bayes error rate, which shows that it is reasonable to use parameters as features in algorithms that rely on metric spaces (e.g. classification, dimensionality reduction, clustering). Our results do not rely on specific algorithms for learning the structure or parameters. We show preliminary results for activity recognition and temporal segmentation.

  12. Connections between Graphical Gaussian Models and Factor Analysis

    Science.gov (United States)

    Salgueiro, M. Fatima; Smith, Peter W. F.; McDonald, John W.

    2010-01-01

    Connections between graphical Gaussian models and classical single-factor models are obtained by parameterizing the single-factor model as a graphical Gaussian model. Models are represented by independence graphs, and associations between each manifest variable and the latent factor are measured by factor partial correlations. Power calculations…

  13. A Prototypical 3D Graphical Visualizer for Object-Oriented Systems

    Institute of Scientific and Technical Information of China (English)

    1996-01-01

    This paper describes a framework for visualizing object-oriented systems within a 3D interactive environment. The 3D visualizer represents the structure of a program as a Cylinder Net that simultaneously specifies two relationships between objects within 3D virtual space. Additionally, it represents additional relationships on demand when objects are moved into local focus. The 3D visualizer is implemented using a 3D graphics toolkit, TOAST, that implements 3D widgets and 3D graphics to ease the programming task of 3D visualization.

  14. Mining protein kinases regulation using graphical models.

    Science.gov (United States)

    Chen, Qingfeng; Chen, Yi-Ping Phoebe

    2011-03-01

    Abnormal kinase activity is a frequent cause of diseases, which makes kinases a promising pharmacological target. Thus, it is critical to identify the characteristics of protein kinases regulation by studying the activation and inhibition of kinase subunits in response to varied stimuli. Bayesian network (BN) is a formalism for probabilistic reasoning that has been widely used for learning dependency models. However, for high-dimensional discrete random vectors the set of plausible models becomes large and a full comparison of all the posterior probabilities related to the competing models becomes infeasible. A solution to this problem is based on the Markov Chain Monte Carlo (MCMC) method. This paper proposes a BN-based framework to discover the dependency correlations of kinase regulation. Our approach is to apply the MCMC method to generate a sequence of samples from a probability distribution, by which to approximate the distribution. The frequent connections (edges) are identified from the obtained sampling graphical models. Our results point to a number of novel candidate regulation patterns that are interesting in biology and include inferred associations that were unknown.

  15. EasyModeller: A graphical interface to MODELLER

    Directory of Open Access Journals (Sweden)

    Kuntal Bhusan K

    2010-08-01

    Full Text Available Abstract Background: MODELLER is a program for automated protein homology modeling. It is one of the most widely used tools for homology or comparative modeling of protein three-dimensional structures, but most users find it a bit difficult to start with MODELLER as it is command-line based and requires knowledge of basic Python scripting to use it efficiently. Findings: The study was designed with the aim of developing the "EasyModeller" tool as a frontend graphical interface to MODELLER using Perl/Tk, which can be used as a standalone tool on the Windows platform with MODELLER and Python preinstalled. It helps inexperienced users to perform modeling, assessment, visualization, and optimization of protein models in a simple and straightforward way. Conclusion: EasyModeller provides a straightforward graphical interface and functions as a stand-alone tool which can be used on a standard personal computer with Microsoft Windows as the operating system.
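
    For context, the kind of MODELLER run that EasyModeller drives from its interface looks roughly like the classic automodel recipe below (a sketch assuming the standard MODELLER Python API; the alignment file and the template/target codes are placeholders):

        # Minimal comparative-modeling script in MODELLER's Python API (illustrative names).
        from modeller import environ
        from modeller.automodel import automodel

        env = environ()                                   # MODELLER environment
        a = automodel(env,
                      alnfile='target-template.ali',      # alignment file (placeholder)
                      knowns='template',                  # template structure code (placeholder)
                      sequence='target')                  # target sequence code (placeholder)
        a.starting_model = 1                              # build five candidate models
        a.ending_model = 5
        a.make()                                          # run homology modeling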

  16. Variable Relation Parametric Model on Graphics Modelon for Collaboration Design

    Institute of Scientific and Technical Information of China (English)

    DONG Yu-de; ZHAO Han; LI Yan-feng

    2005-01-01

    A new approach to a variable relation parametric model for collaborative design based on the graphic modelon is put forward. The paper gives a parametric description model of the graphic modelon and a relating method for different graphic modelons based on variable constraints. At the same time, with the aim of engineering application in collaborative design, the autonomous constraint within a modelon and the relative constraint between two modelons are given. Finally, with the tool of a variable and relation database, the solving method for variable relating and variable driving among different graphic modelons in a part, and a double-acting variable relating parametric method among different parts for collaboration, are given.

  17. Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics.

    Science.gov (United States)

    Nguyen, Tht; Mouksassi, M-S; Holford, N; Al-Huniti, N; Freedman, I; Hooker, A C; John, J; Karlsson, M O; Mould, D R; Pérez Ruixo, J J; Plan, E L; Savic, R; van Hasselt, Jgc; Weber, B; Zhou, C; Comets, E; Mentré, F

    2017-02-01

    This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used.

  18. Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics

    Science.gov (United States)

    Nguyen, THT; Mouksassi, M‐S; Holford, N; Al‐Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E

    2017-01-01

    This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used. PMID:27884052

  19. PKgraph: an R package for graphically diagnosing population pharmacokinetic models.

    Science.gov (United States)

    Sun, Xiaoyong; Wu, Kai; Cook, Dianne

    2011-12-01

    Population pharmacokinetic (PopPK) modeling has become increasingly important in drug development because it handles unbalanced designs, sparse data and the study of individual variation. However, the increased complexity of the model makes it more of a challenge to diagnose the fit. Graphics can play an important and unique role in PopPK model diagnostics. The software described in this paper, PKgraph, provides a graphical user interface for PopPK model diagnosis. It also provides an integrated and comprehensive platform for the analysis of pharmacokinetic data including exploratory data analysis, goodness of model fit, model validation and model comparison. Results from a variety of model fitting software, including NONMEM, Monolix, SAS and R, can be used. PKgraph is programmed in R, and uses the R packages lattice and ggplot2 for static graphics, and rggobi for interactive graphics.

  20. Retrospective Study on Mathematical Modeling Based on Computer Graphic Processing

    Science.gov (United States)

    Zhang, Kai Li

    Graphics and image making is an important field of computer application, in which visualization software has been widely used for its convenience and speed. However, modeling designers have regarded such software as limited in function and flexibility because no mathematical modeling platform was built in. A non-visualization graphics package appearing at this moment gives graphics and image design a very good mathematical modeling platform. In this paper, a polished pyramid is established by a multivariate spline function algorithm to validate that the non-visualization software is well suited to mathematical modeling.

  1. The complete guide to blender graphics computer modeling and animation

    CERN Document Server

    Blain, John M

    2014-01-01

    Smoothly Leads Users into the Subject of Computer Graphics through the Blender GUI. Blender, the free and open source 3D computer modeling and animation program, allows users to create and animate models and figures in scenes, compile feature movies, and interact with the models and create video games. Reflecting the latest version of Blender, The Complete Guide to Blender Graphics: Computer Modeling & Animation, 2nd Edition helps beginners learn the basics of computer animation using this versatile graphics program. This edition incorporates many new features of Blender, including developments...

  2. A methodology for acquiring qualitative knowledge for probabilistic graphical models

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders L.

    2004-01-01

    We present a practical and general methodology that simplifies the task of acquiring and formulating qualitative knowledge for constructing probabilistic graphical models (PGMs). The methodology efficiently captures and communicates expert knowledge, and has significantly eased model development.

  3. Three-dimensional reconstruction of biological objects using a graphics engine.

    Science.gov (United States)

    Winslow, J L; Bjerknes, M; Cheng, H

    1987-12-01

    A common problem in the study of biological material is the determination of three-dimensional structure from serial sections. The large number of sections required to obtain sufficient internal detail of a structure results in enormous processing requirements. These requirements can now be satisfied by current graphics engine technology in combination with image-digitizing hardware. The previously onerous tasks of manipulating and displaying 3D objects become routine with this combination of technologies. We report a computer-assisted reconstruction system on a graphics engine-based workstation. The system accepts images from any video source and includes a utility for aligning adjacent video images. Also available is an editor for geometric object entry and editing. More novel in our approach is the use of video interiors in 3D displays in addition to contours and tiled surfaces. Video interiors is a form of display in which digitized pixels interior to objects are revealed by cutaway blocks.

  4. Using a Graphics Turing Test to Evaluate the Effect of Frame Rate and Motion Blur on Telepresence of Animated Objects

    OpenAIRE

    Borg, Mathias; Johansen, Stine Schmieg; Krog, Kim Srirat; Thomsen, Dennis Lundgaard; Kraus, Martin

    2013-01-01

    A limited Graphics Turing Test is used to determine the frame rate that is required to achieve telepresence of an animated object. For low object velocities of 2.25 and 4.5 degrees of visual angle per second at 60 frames per second, a rotating object with no added motion blur is able to pass the test. The results of the experiments confirm previous results in psychophysics and show that the Graphics Turing Test is a useful tool in computer graphics. Even with simulated motion blur, our Graphics Turing Test could not be passed with frame rates of 30 and 20 frames per second.

  5. Using a Graphics Turing Test to Evaluate the Effect of Frame Rate and Motion Blur on Telepresence of Animated Objects

    DEFF Research Database (Denmark)

    Borg, Mathias; Johansen, Stine Schmieg; Krog, Kim Srirat

    2013-01-01

    A limited Graphics Turing Test is used to determine the frame rate that is required to achieve telepresence of an animated object. For low object velocities of 2.25 and 4.5 degrees of visual angle per second at 60 frames per second, a rotating object with no added motion blur is able to pass the test. The results of the experiments confirm previous results in psychophysics and show that the Graphics Turing Test is a useful tool in computer graphics. Even with simulated motion blur, our Graphics Turing Test could not be passed with frame rates of 30 and 20 frames per second. Our results suggest…

  6. GRAPHIC REALIZATION FOUNDATIONS OF LOGIC-SEMANTIC MODELING IN DIDACTICS

    Directory of Open Access Journals (Sweden)

    V. E. Steinberg

    2017-01-01

    Introduction. Few works to date have been devoted to the graphic method of logic-semantic modeling of knowledge. Meanwhile, interest in the method is growing as the visual component of information and educational sources becomes ever more prominent. The present publication is the authors' contribution to the search for new forms and means that support the visual and logical perception of training material, its assimilation, and the manipulation and transformation of elements of knowledge. The aim of the research is to justify the graphical implementation of the method of logic-semantic modeling of knowledge presented in natural language (the language of instruction) and to show how figurative and conceptual models can be applied in student teaching. Methodology and research methods. The research methodology rests on activity-regulatory, system-multidimensional and structural-invariant approaches and on the principle of multidimensionality. The graphic realization of logic-semantic models in learning technologies is based on didactic design using computer training programs. Results and scientific novelty. Social and anthropological-cultural foundations for adapting the method of logic-semantic knowledge modeling to the problems of didactics are established and argued: a coordinate-invariant matrix structure is presented as the basis of logic-semantic models of figurative and conceptual nature, and the possibilities of using such models as multifunctional didactic regulators (support schemes, navigators through the content of the educational material and through learning activities, etc.) are shown. The characteristics of the new teaching tools as objects of semiotics and as didactic regulators are considered, and their place and role in the structure of external and internal learning activities are pointed out.

  7. Discrete Discriminant analysis based on tree-structured graphical models

    DEFF Research Database (Denmark)

    Perez de la Cruz, Gonzalo; Eslava, Guillermina

    The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance using estimated error rates for real and simulated data. The results show that discriminant analysis based on tree-structured graphical models is a simple nonlinear method competitive with, and sometimes superior to, other well-known linear methods like those assuming mutual independence between variables and linear logistic regression.
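
    To make the idea concrete, the sketch below (Python/numpy, toy binary data; not the authors' code) fits one Chow-Liu maximum mutual-information spanning tree per class and classifies a new observation by comparing tree-factored log-likelihoods, the tree-structured counterpart of the mutual-independence classifier mentioned above.

```python
# Class-conditional Chow-Liu trees for discriminant analysis on binary data.
import numpy as np

def mutual_info(x, y):
    """Smoothed empirical mutual information between two binary columns."""
    n, mi = len(x), 0.0
    for a in (0, 1):
        for b in (0, 1):
            pab = (np.sum((x == a) & (y == b)) + 0.5) / (n + 2.0)
            pa = (np.sum(x == a) + 1.0) / (n + 2.0)
            pb = (np.sum(y == b) + 1.0) / (n + 2.0)
            mi += pab * np.log(pab / (pa * pb))
    return mi

def chow_liu_edges(data):
    """(parent, child) edges of the maximum-MI spanning tree, grown from node 0 (Prim)."""
    p = data.shape[1]
    mi = np.array([[mutual_info(data[:, i], data[:, j]) for j in range(p)] for i in range(p)])
    in_tree, edges = {0}, []
    while len(in_tree) < p:
        i, j = max(((i, j) for i in in_tree for j in range(p) if j not in in_tree),
                   key=lambda e: mi[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges

def tree_loglik(x, data, edges):
    """Log-likelihood of binary vector x under the tree fitted to `data`."""
    n = len(data)
    ll = np.log((np.sum(data[:, 0] == x[0]) + 1.0) / (n + 2.0))          # root marginal
    for i, j in edges:
        num = np.sum((data[:, i] == x[i]) & (data[:, j] == x[j])) + 0.5  # P(x_j | x_i)
        ll += np.log(num / (np.sum(data[:, i] == x[i]) + 1.0))
    return ll

# Toy example: in class 0 the first two variables are strongly dependent.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, (200, 1))
class0 = np.hstack([a, (a + (rng.random((200, 1)) < 0.1)) % 2, rng.integers(0, 2, (200, 1))])
class1 = rng.integers(0, 2, (200, 3))
x_new = np.array([1, 1, 0])
scores = {c: tree_loglik(x_new, d, chow_liu_edges(d)) for c, d in ((0, class0), (1, class1))}
print("predicted class:", max(scores, key=scores.get))
```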

  8. Discrete Discriminant analysis based on tree-structured graphical models

    DEFF Research Database (Denmark)

    Perez de la Cruz, Gonzalo; Eslava, Guillermina

    The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance using estimated error rates for real and simulated data. The results show that discriminant analysis based on tree-structured graphical models is a simple nonlinear method competitive with, and sometimes superior to, other well-known linear methods like those assuming mutual independence between variables and linear logistic regression.

  9. 基于UML的面向对象的图形用户界面设计模型%A UML-Based Object-Oriented Graphic User Interface Design Model

    Institute of Scientific and Technical Information of China (English)

    孙晓平; 郭腾冲; 魏明珠; 涂序彦

    2003-01-01

    GUI development is often large, complex and difficult, but there are few methods and tools for describing GUI requirement specifications, GUI layouts and GUI tasks in software design. This article discusses GUI modeling and introduces a UML-based object-oriented GUI model composed of a Frame Controller, a View Model and a Core Interface (the FVI model), which supports object-oriented requirement specification and provides a layered, modularized and iterative object-oriented GUI design model covering GUI layouts and GUI dynamic interaction tasks. Through an instance of the model, we demonstrate that using UML to implement the object-oriented FVI model incorporates GUI design into the software process, which improves the integrity and consistency of the software design.

  10. An integrated introduction to computer graphics and geometric modeling

    CERN Document Server

    Goldman, Ronald

    2009-01-01

    … this book may be the first book on geometric modelling that also covers computer graphics. In addition, it may be the first book on computer graphics that integrates a thorough introduction to 'freedom' curves and surfaces and to the mathematical foundations for computer graphics. … the book is well suited for an undergraduate course. … The entire book is very well presented and obviously written by a distinguished and creative researcher and educator. It certainly is a textbook I would recommend. …-Computer-Aided Design, 42, 2010… Many books concentrate on computer programming and soon beco

  11. MAGIC: Model and Graphic Information Converter

    Science.gov (United States)

    Herbert, W. C.

    2009-01-01

    MAGIC is a software tool capable of converting highly detailed 3D models from an open, standard format, VRML 2.0/97, into the proprietary DTS file format used by the Torque Game Engine from GarageGames. MAGIC is used to convert 3D simulations from authoritative sources into the data needed to run the simulations in NASA's Distributed Observer Network. The Distributed Observer Network (DON) is a simulation presentation tool built by NASA to facilitate the simulation sharing requirements of the Data Presentation and Visualization effort within the Constellation Program. DON is built on top of the Torque Game Engine (TGE) and has chosen TGE's Dynamix Three Space (DTS) file format to represent 3D objects within simulations.

  12. Development of a Relap based Nuclear Plant Analyser with 3-D graphics using OpenGL and Object Relap

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    A 3-D Graphic Nuclear Plant Analyzer (NPA) program was developed using GLScene and TRelap. GLScene is an OpenGL-based 3D graphics library for the Delphi object-oriented programming language, and it implements the OpenGL functions in forms suitable for programming with Delphi. TRelap is an object wrapper developed by the author to easily implement the Relap5 thermal hydraulic code in an object-oriented programming environment. The 3-D Graphic NPA was developed to demonstrate the superiority of the object-oriented programming approach in developing complex programs.

  13. A Prototype Lisp-Based Soft Real-Time Object-Oriented Graphical User Interface for Control System Development

    Science.gov (United States)

    Litt, Jonathan; Wong, Edmond; Simon, Donald L.

    1994-01-01

    A prototype Lisp-based soft real-time object-oriented Graphical User Interface for control system development is presented. The Graphical User Interface executes alongside a test system in laboratory conditions to permit observation of the closed loop operation through animation, graphics, and text. Since it must perform interactive graphics while updating the screen in real time, techniques are discussed which allow quick, efficient data processing and animation. Examples from an implementation are included to demonstrate some typical functionalities which allow the user to follow the control system's operation.

  14. The gRbase package for graphical modelling in R

    DEFF Research Database (Denmark)

    Højsgaard, Søren; Dethlefsen, Claus

    We have developed a package, called gRbase, consisting of a number of classes and associated methods to support the analysis of data using graphical models. It is developed for the open source language R and is available for several platforms. The package is intended to be widely extendible and flexible so that package developers may implement further types of graphical models using the available methods. gRbase contains methods for representing data, specification of models using a formal language, and is linked to dynamicGraph, an interactive graphical user interface for manipulating graphs. We show how these building blocks can be combined and integrated with inference engines in the special case of hierarchical log-linear models (undirected models).

  15. Modelling of JET diagnostics using Bayesian Graphical Models

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, J. [IPP Greifswald, Greifswald (Germany); Ford, O. [Imperial College, London (United Kingdom); McDonald, D.; Hole, M.; Nessi, G. von; Meakins, A.; Brix, M.; Thomsen, H.; Werner, A.; Sirinelli, A.

    2011-07-01

    The mapping between physics parameters (such as densities, currents, flows, temperatures etc.) defining the plasma 'state' under a given model and the raw observations of each plasma diagnostic will 1) depend on the particular physics model used, and 2) be inherently probabilistic, owing to uncertainties on both the observations and instrumental aspects of the mapping, such as calibrations, instrument functions etc. A flexible and principled way of modelling such interconnected probabilistic systems is through so-called Bayesian graphical models. Being an amalgam of graph theory and probability theory, Bayesian graphical models can simulate the complex interconnections between physics models and diagnostic observations from multiple heterogeneous diagnostic systems, making it relatively easy to optimally combine the observations from multiple diagnostics for joint inference on parameters of the underlying physics model, which can itself be represented as part of the graph. At JET about 10 diagnostic systems have to date been modelled in this way, which has led to a number of new results, including: the reconstruction of the flux surface topology and q-profiles without any specific equilibrium assumption, using information from a number of different diagnostic systems; profile inversions taking into account the uncertainties in the flux surface positions; and a substantial increase in accuracy of JET electron density and temperature profiles, including improved pedestal resolution, through the joint analysis of three diagnostic systems. It is believed that the Bayesian graph approach could potentially be utilised for very large sets of diagnostics, providing a generic data analysis framework for nuclear fusion experiments that would be able to optimally utilize the information from multiple diagnostics simultaneously, and where the explicit graph representation of the connections to underlying physics models could be used for sophisticated model testing.

  16. Integrating Surface Modeling into the Engineering Design Graphics Curriculum

    Science.gov (United States)

    Hartman, Nathan W.

    2006-01-01

    It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…

  17. Integrating Surface Modeling into the Engineering Design Graphics Curriculum

    Science.gov (United States)

    Hartman, Nathan W.

    2006-01-01

    It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…

  18. Linear Characteristic Graphical Models: Representation, Inference and Applications

    CERN Document Server

    Bickson, Danny

    2010-01-01

    Heavy-tailed distributions naturally occur in many real life problems. Unfortunately, it is typically not possible to compute inference in closed-form in graphical models which involve such heavy-tailed distributions. In this work, we propose a novel simple linear graphical model for independent latent random variables, called linear characteristic model (LCM), defined in the characteristic function domain. Using stable distributions, a heavy-tailed family of distributions which is a generalization of Cauchy, Lévy and Gaussian distributions, we show for the first time, how to compute both exact and approximate inference in such a linear multivariate graphical model. LCMs are not limited to stable distributions, in fact LCMs are always defined for any random variables (discrete, continuous or a mixture of both). We provide a realistic problem from the field of computer networks to demonstrate the applicability of our construction. Other potential application is iterative decoding of linear channels with non-...

  19. A Graphical User Interface to Generalized Linear Models in MATLAB

    Directory of Open Access Journals (Sweden)

    Peter Dunn

    1999-07-01

    Generalized linear models unite a wide variety of statistical models in a common theoretical framework. This paper discusses GLMLAB, software that enables such models to be fitted in the popular mathematical package MATLAB. It provides a graphical user interface to the powerful MATLAB computational engine to produce a program that is easy to use but with many features, including offsets, prior weights and user-defined distributions and link functions. MATLAB's graphical capacities are also utilized in providing a number of simple residual diagnostic plots.
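
    GLMLAB is a MATLAB tool; the computation it wraps behind its graphical interface is iteratively reweighted least squares (IRLS). Below is a minimal numpy sketch of IRLS for one common case (binomial family, logit link), with invented data; it illustrates the algorithm, not GLMLAB's code.

```python
# IRLS for logistic regression (binomial family, logit link) in plain numpy.
import numpy as np

def irls_logistic(X, y, n_iter=25):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))               # inverse logit (mean)
        w = np.clip(mu * (1.0 - mu), 1e-10, None)     # iterative weights
        z = eta + (y - mu) / w                        # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))   # weighted normal equations
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true_beta = np.array([-0.5, 1.2, -0.8])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
print(irls_logistic(X, y))   # should be close to true_beta
```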

  20. JACK - ANTHROPOMETRIC MODELING SYSTEM FOR SILICON GRAPHICS WORKSTATIONS

    Science.gov (United States)

    Smith, B.

    1994-01-01

    JACK is an interactive graphics program developed at the University of Pennsylvania that displays and manipulates articulated geometric figures. JACK is typically used to observe how a human mannequin interacts with its environment and what effects body types will have upon the performance of a task in a simulated environment. Any environment can be created, and any number of mannequins can be placed anywhere in that environment. JACK includes facilities to construct limited geometric objects, position figures, perform a variety of analyses on the figures, describe the motion of the figures and specify lighting and surface property information for rendering high quality images. JACK is supplied with a variety of body types pre-defined and known to the system. There are both male and female bodies, ranging from the 5th to the 95th percentile, based on NASA Standard 3000. Each mannequin is fully articulated and reflects the joint limitations of a normal human. JACK is an editor for manipulating previously defined objects known as "Peabody" objects. Used to describe the figures as well as the internal data structure for representing them, Peabody is a language with a powerful and flexible mechanism for representing connectivity between objects, both the joints between individual segments within a figure and arbitrary connections between different figures. Peabody objects are generally comprised of several individual figures, each one a collection of segments. Each segment has a geometry represented by PSURF files that consist of polygons or curved surface patches. Although JACK does not have the capability to create new objects, objects may be created by other geometric modeling programs and then translated into the PSURF format. Environment files are a collection of figures and attributes that may be dynamically moved under the control of an animation file. The animation facilities allow the user to create a sequence of commands that duplicate the movements of a

  1. Engineering graphic modelling a workbook for design engineers

    CERN Document Server

    Tjalve, E; Frackmann Schmidt, F

    2013-01-01

    Engineering Graphic Modelling: A Practical Guide to Drawing and Design covers how engineering drawing relates to the design activity. The book describes modeled properties, such as the function, structure, form, material, dimension, and surface, as well as the coordinates, symbols, and types of projection of the drawing code. The text provides drawing techniques, such as freehand sketching, bold freehand drawing, drawing with a straightedge, a draughting machine or a plotter, and use of templates, and then describes the types of drawing. Graphic designers, design engineers, mechanical engine

  2. VR Lab ISS Graphics Models Data Package

    Science.gov (United States)

    Paddock, Eddie; Homan, Dave; Bell, Brad; Miralles, Evely; Hoblit, Jeff

    2016-01-01

    All the ISS models are saved in AC3D model format which is a text based format that can be loaded into blender and exported to other formats from there including FBX. The models are saved in two different levels of detail, one being labeled "LOWRES" and the other labeled "HIRES". There are two ".str" files (HIRES _ scene _ load.str and LOWRES _ scene _ load.str) that give the hierarchical relationship of the different nodes and the models associated with each node for both the "HIRES" and "LOWRES" model sets. All the images used for texturing are stored in Windows ".bmp" format for easy importing.

  3. Graphical models for inference under outcome-dependent sampling

    DEFF Research Database (Denmark)

    Didelez, V; Kreiner, S; Keiding, N

    2010-01-01

    We consider situations where data have been collected such that the sampling depends on the outcome of interest and possibly further covariates, as for instance in case-control studies. Graphical models represent assumptions about the conditional independencies among the variables. By including a node for the sampling indicator, assumptions about sampling processes can be made explicit. We demonstrate how to read off such graphs whether consistent estimation of the association between exposure and outcome is possible. Moreover, we give sufficient graphical conditions for testing and estimating…

  4. A graphical Specification Language for Modeling Concurrency based on CSP

    NARCIS (Netherlands)

    Hilderink, Gerald H.; Pascoe, James; Welch, Peter; Loader, Roger; Sunderam, Vaidy

    2002-01-01

    Introduced in this paper is a new graphical modeling language for specifying concurrency in software designs. The language notations are derived from CSP and the resulting designs form CSP diagrams. The notations reflect both data-flow and control-flow aspects, along with CSP algebraic expressions.

  5. A graphical Specification Language for Modeling Concurrency based on CSP

    NARCIS (Netherlands)

    Hilderink, G.H.; Pascoe, James; Welch, Peter; Loader, Roger; Sunderam, Vaidy

    2002-01-01

    Introduced in this paper is a new graphical modeling language for specifying concurrency in software designs. The language notations are derived from CSP and the resulting designs form CSP diagrams. The notations reflect both data-flow and control-flow aspects, along with CSP algebraic expressions.

  6. Graphical modelling language for specifying concurrency based on CSP

    NARCIS (Netherlands)

    Hilderink, G.H.

    2003-01-01

    Introduced in this (shortened) paper is a graphical modelling language for specifying concurrency in software designs. The language notations are derived from CSP and the resulting designs form CSP diagrams. The notations reflect both data-flow and control-flow aspects of concurrent software

  7. Efficient sampling of Gaussian graphical models using conditional Bayes factors

    NARCIS (Netherlands)

    Hinne, M.; Lenkoski, A.; Heskes, T.M.; Gerven, M.A.J. van

    2014-01-01

    Bayesian estimation of Gaussian graphical models has proven to be challenging because the conjugate prior distribution on the Gaussian precision matrix, the G-Wishart distribution, has a doubly intractable partition function. Recent developments provide a direct way to sample from the G-Wishart distribution.

  8. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    Selectivity estimation errors in today's query optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all the attributes into small, low-dimensional distributions…
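
    The following toy sketch (Python/numpy, invented data; not the paper's algorithm) illustrates why dropping the attribute-value-independence assumption matters: for correlated attributes, a selectivity estimate read from a small two-dimensional joint factor is far closer to the truth than the product of one-dimensional marginals.

```python
# Selectivity of the predicate (A = 1 AND B = 1) under different assumptions.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 2, 100_000)
B = (A ^ (rng.random(100_000) < 0.05)).astype(int)    # B strongly correlated with A

true_sel = np.mean((A == 1) & (B == 1))
indep_sel = np.mean(A == 1) * np.mean(B == 1)          # attribute-value independence
joint = np.histogram2d(A, B, bins=[2, 2])[0] / A.size  # tiny 2-D joint factor
model_sel = joint[1, 1]

print(f"true {true_sel:.3f}  independence {indep_sel:.3f}  joint-factor {model_sel:.3f}")
```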

  9. Sparse time series chain graphical models for reconstructing genetic networks

    NARCIS (Netherlands)

    Abegaz, Fentaw; Wit, Ernst

    2013-01-01

    We propose a sparse high-dimensional time series chain graphical model for reconstructing genetic networks from gene expression data parametrized by a precision matrix and autoregressive coefficient matrix. We consider the time steps as blocks or chains. The proposed approach explores patterns of co

  10. A focused information criterion for graphical models

    NARCIS (Netherlands)

    Pircalabelu, E.; Claeskens, G.; Waldorp, L.

    2015-01-01

    A new method for model selection for Gaussian Bayesian networks and Markov networks, with extensions towards ancestral graphs, is constructed to have good mean squared error properties. The method is based on the focused information criterion, and offers the possibility of fitting individually tailored models…

  11. Analysis of local dependence and multidimensionality in graphical loglinear Rasch models

    DEFF Research Database (Denmark)

    Kreiner, Svend; Christensen, Karl Bang

    local independence; multidimensionality; differential item functioning; uniform local dependency and DIF; graphical Rasch models; loglinear Rasch models

  12. Analysis of Local Dependence and Multidimensionality in Graphical Loglinear Rasch Models

    DEFF Research Database (Denmark)

    Kreiner, Svend; Christensen, Karl Bang

    2004-01-01

    Local independence; Multidimensionality; Differential item functioning; Uniform local dependence and DIF; Graphical Rasch models; Loglinear Rasch model

  13. Workflow modeling in the graphic arts and printing industry

    Science.gov (United States)

    Tuijn, Chris

    2003-12-01

    Over the last few years, a lot of effort has been spent on the standardization of the workflow in the graphic arts and printing industry. The main reasons for this standardization are two-fold: first of all, the need to represent all aspects of products, processes and resources in a uniform, digital framework and, secondly, the need to have different systems communicate with each other without having to implement dedicated drivers or protocols. For many years, a number of organizations in the IT sector have been busy developing models and languages on the topic of workflow modeling. In addition to the more formal methods (such as, e.g., extended finite state machines, Petri Nets, Markov Chains etc.) introduced a number of decades ago, more pragmatic methods have been proposed quite recently, in particular by the Workflow Management Coalition, whose activities resulted in an XML-based Process Definition Language. Although one might be tempted to use the already established standards in the graphic environment, one should be well aware of the complexity and uniqueness of the graphic arts workflow. In this paper, we will show that it is quite hard, though not impossible, to model the graphic arts workflow using the already established workflow systems. After a brief summary of the graphic arts workflow requirements, we will show why the traditional models are less suitable to use. It will turn out that one of the main reasons for the incompatibility is that the graphic arts workflow is primarily resource-driven; this means that the activation of processes depends on the status of different incoming resources. The fact that processes can start running with a partial availability of the input resources is a further complication that asks for additional knowledge on the process level. In the second part of this paper, we will discuss in more detail the different software components that are available in any graphic enterprise. In the last part, we will…

  14. Reasoning with probabilistic and deterministic graphical models exact algorithms

    CERN Document Server

    Dechter, Rina

    2013-01-01

    Graphical models (e.g., Bayesian and constraint networks, influence diagrams, and Markov decision processes) have become a central paradigm for knowledge representation and reasoning in both artificial intelligence and computer science in general. These models are used to perform many reasoning tasks, such as scheduling, planning and learning, diagnosis and prediction, design, hardware and software verification, and bioinformatics. These problems can be stated as the formal tasks of constraint satisfaction and satisfiability, combinatorial optimization, and probabilistic inference. It is well

  15. Type-2 fuzzy graphical models for pattern recognition

    CERN Document Server

    Zeng, Jia

    2015-01-01

    This book discusses how to combine type-2 fuzzy sets and graphical models to solve a range of real-world pattern recognition problems such as speech recognition, handwritten Chinese character recognition, topic modeling as well as human action recognition. It covers these recent developments while also providing a comprehensive introduction to the fields of type-2 fuzzy sets and graphical models. Though primarily intended for graduate students, researchers and practitioners in fuzzy logic and pattern recognition, the book can also serve as a valuable reference work for researchers without any previous knowledge of these fields. Dr. Jia Zeng is a Professor at the School of Computer Science and Technology, Soochow University, China. Dr. Zhi-Qiang Liu is a Professor at the School of Creative Media, City University of Hong Kong, China.

  16. Parallelizing the Cellular Potts Model on graphics processing units

    Science.gov (United States)

    Tapia, José Juan; D'Souza, Roshan M.

    2011-04-01

    The Cellular Potts Model (CPM) is a lattice based modeling technique used for simulating cellular structures in computational biology. The computational complexity of the model means that current serial implementations restrict the size of simulation to a level well below biological relevance. Parallelization on computing clusters enables scaling the size of the simulation but marginally addresses computational speed due to the limited memory bandwidth between nodes. In this paper we present new data-parallel algorithms and data structures for simulating the Cellular Potts Model on graphics processing units. Our implementations handle most terms in the Hamiltonian, including cell-cell adhesion constraint, cell volume constraint, cell surface area constraint, and cell haptotaxis. We use fine level checkerboards with lock mechanisms using atomic operations to enable consistent updates while maintaining a high level of parallelism. A new data-parallel memory allocation algorithm has been developed to handle cell division. Tests show that our implementation enables simulations of >10 cells with lattice sizes of up to 256³ on a single graphics card. Benchmarks show that our implementation runs ~80× faster than serial implementations, and ~5× faster than previous parallel implementations on computing clusters consisting of 25 nodes. The wide availability and economy of graphics cards mean that our techniques will enable simulation of realistically sized models at a fraction of the time and cost of previous implementations and are expected to greatly broaden the scope of CPM applications.

  17. Learning High-Dimensional Mixtures of Graphical Models

    CERN Document Server

    Anandkumar, A; Kakade, S M

    2012-01-01

    We consider the problem of learning mixtures of discrete graphical models in high dimensions and propose a novel method for estimating the mixture components with provable guarantees. The method proceeds mainly in three stages. In the first stage, it estimates the union of the Markov graphs of the mixture components (referred to as the union graph) via a series of rank tests. It then uses this estimated union graph to compute the mixture components via a spectral decomposition method. The spectral decomposition method was originally proposed for latent class models, and we adapt this method for learning the more general class of graphical model mixtures. In the end, the method produces tree approximations of the mixture components via the Chow-Liu algorithm. Our output is thus a tree-mixture model which serves as a good approximation to the underlying graphical model mixture. When the union graph has sparse node separators, we prove that our method has sample and computational complexities scaling as poly(p, ...

  18. COLBERT: A Scoring Based Graphical Model for Expert Identification

    Science.gov (United States)

    Ahmad, Muhammad Aurangzeb; Zhao, Xin

    In recent years a number of graphical models have been proposed for topic discovery in various contexts and for network analysis. However, there is one class of document corpus, documents with ratings, where the problem of topic discovery has not been explored in much detail. In such corpora, reviews and ratings of the documents are available in addition to the documents themselves. In this paper we address the problem of discovering latent structures in a document-review corpus which can then be used to construct a social network of experts. We present a graphical model, COLBERT, that automatically discovers latent topics based on the contents of the document, the review of the document and the ratings of the review.

  19. A Graphical μ-Calculus and Local Model Checking

    Institute of Scientific and Technical Information of China (English)

    林惠民

    2002-01-01

    A graphical notation for the propositional μ-calculus, called modal graphs, is presented. It is shown that both the textual and equational presentations of the μ-calculus can be translated into modal graphs. A model checking algorithm based on such graphs is proposed. The algorithm is truly local in the sense that it only generates the parts of the underlying search space which are necessary for the computation of the final result. The correctness of the algorithm is proven and its complexity analysed.

  20. Identifying gene regulatory network rewiring using latent differential graphical models.

    Science.gov (United States)

    Tian, Dechao; Gu, Quanquan; Ma, Jian

    2016-09-30

    Gene regulatory networks (GRNs) are highly dynamic among different tissue types. Identifying tissue-specific gene regulation is critically important to understand gene function in a particular cellular context. Graphical models have been used to estimate GRN from gene expression data to distinguish direct interactions from indirect associations. However, most existing methods estimate GRN for a specific cell/tissue type or in a tissue-naive way, or do not specifically focus on network rewiring between different tissues. Here, we describe a new method called Latent Differential Graphical Model (LDGM). The motivation of our method is to estimate the differential network between two tissue types directly without inferring the network for individual tissues, which has the advantage of utilizing much smaller sample size to achieve reliable differential network estimation. Our simulation results demonstrated that LDGM consistently outperforms other Gaussian graphical model based methods. We further evaluated LDGM by applying to the brain and blood gene expression data from the GTEx consortium. We also applied LDGM to identify network rewiring between cancer subtypes using the TCGA breast cancer samples. Our results suggest that LDGM is an effective method to infer differential network using high-throughput gene expression data to identify GRN dynamics among different cellular conditions.

  1. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    Science.gov (United States)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
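
    A stripped-down version of the Monte Carlo computation described above can be sketched in a few lines of Python/numpy (positional uncertainty at closest approach only, no orbit propagation; all numbers are invented): sample the relative position of the two objects and count the fraction of samples whose miss distance falls inside the combined hard-body radius.

```python
# Simplified Monte Carlo collision probability from positional uncertainty.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Relative position uncertainty of the two objects at closest approach (km).
mean_rel = np.array([0.1, 0.0, 0.0])               # mean miss vector
cov_rel = np.diag([0.04, 0.01, 0.0025])            # combined covariance
samples = rng.multivariate_normal(mean_rel, cov_rel, size=n)

miss = np.linalg.norm(samples, axis=1)             # miss distance per sample
combined_radius = 0.05                             # sum of the two hard-body radii (km)
p_collision = np.mean(miss < combined_radius)
print(f"P(collision) ~ {p_collision:.2e}  (MC std err ~ {np.sqrt(p_collision / n):.1e})")
```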

  2. Graphics development of DCOR: Deterministic combat model of Oak Ridge

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, G. [Georgia Inst. of Tech., Atlanta, GA (United States); Azmy, Y.Y. [Oak Ridge National Lab., TN (United States)

    1992-10-01

    DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
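
    The first placement step described above can be sketched directly (Python/numpy; the density values and the scheme for crediting each placed icon against the remaining density are assumptions for illustration).

```python
# Distribute a fixed number of combat-unit icons among gridpoints, one at a
# time, always choosing the gridpoint with the largest remaining density.
import numpy as np

def place_units(density, n_units):
    """density: 2-D array of force density at gridpoints; returns integer icon counts."""
    remaining = density.astype(float).copy()
    counts = np.zeros_like(remaining, dtype=int)
    per_unit = density.sum() / n_units           # density "mass" represented by one icon
    for _ in range(n_units):
        idx = np.unravel_index(np.argmax(remaining), remaining.shape)
        counts[idx] += 1
        remaining[idx] -= per_unit               # credit the placed icon against the density
    return counts

density = np.array([[0.1, 0.4, 0.2],
                    [0.9, 2.0, 0.6],
                    [0.2, 0.5, 0.1]])
print(place_units(density, 10))
```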

  3. Greedy Learning of Graphical Models with Small Girth

    Science.gov (United States)

    2013-01-01

    …incoherence assumption to guarantee its success. In [18] Bento et al. showed that even for a large class of Ising models the incoherence conditions are not satisfied… (Table I: performance comparison of learning algorithms, including Bento et al.'s sample and computational complexity bounds for degree-limited Ising models under correlation decay.) …gradually decreases as the path distance between the corresponding nodes increases in the graph G.

  4. Denoising in Wavelet Domain Using Probabilistic Graphical Models

    Directory of Open Access Journals (Sweden)

    Maham Haider

    2016-11-01

    Denoising of real world images that are degraded by Gaussian noise is a long established problem in statistical signal processing. Existing models in the time-frequency domain typically model the wavelet coefficients as either independent or jointly Gaussian. However, in the compression arena, techniques like denoising and detection call for models that are non-Gaussian in nature. Probabilistic graphical models designed in the time-frequency domain serve the purpose of achieving denoising and compression with improved performance. In this work, a Hidden Markov Model (HMM) designed on the 2D Discrete Wavelet Transform (DWT) is proposed. A comparative analysis of the proposed method with different existing techniques, wavelet-based and curvelet-based methods in the Bayesian network domain and an empirical Bayesian approach using the Hidden Markov Tree model, is presented for denoising. Results are compared in terms of PSNR and visual quality.

  5. Word-level language modeling for P300 spellers based on discriminative graphical models

    Science.gov (United States)

    Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat

    2015-04-01

    Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
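
    A toy sketch of the core idea is given below (Python/numpy; the vocabulary, prior and simulated classifier scores are invented, and this is not the authors' graphical model): per-letter log-likelihoods from the P300 classifier are combined with a word-level prior, and the whole word is decoded jointly rather than letter by letter.

```python
# Word-level decoding: letter evidence + word prior over a limited vocabulary.
import numpy as np

vocab = ["CAT", "CAR", "BAT", "BAR"]                  # hypothetical limited vocabulary
log_prior = np.log(np.array([0.4, 0.3, 0.2, 0.1]))    # assumed word frequencies

letters = "ABCRT"
def letter_loglik(position_scores, word):
    """Sum of per-position log-likelihoods P(EEG features | letter) for a word."""
    return sum(position_scores[i][letters.index(ch)] for i, ch in enumerate(word))

# Simulated classifier output: 3 letter positions x 5 candidate letters.
rng = np.random.default_rng(3)
scores = np.log(rng.dirichlet(np.ones(len(letters)), size=3))
scores[0][letters.index("B")] += 1.0                  # noisy evidence favouring "B.."
scores[2][letters.index("T")] += 1.0                  # ...and "..T"

posterior = np.array([letter_loglik(scores, w) for w in vocab]) + log_prior
print("decoded word:", vocab[int(np.argmax(posterior))])
```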

  6. A Model for Concurrent Objects

    DEFF Research Database (Denmark)

    Sørensen, Morten U.

    1996-01-01

    We present a model for concurrent objects where objects interact by taking part in common events that are closely matched to form call-response pairs, resulting in rendezvous-like communications. Objects are built from primitive objects by parallel composition, encapsulation and hiding.

  7. Experimental Object-Oriented Modelling

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius

    This thesis examines object-oriented modelling in experimental system development. Object-oriented modelling aims at representing concepts and phenomena of a problem domain in terms of classes and objects. Experimental system development seeks active experimentation in a system development project through, e.g., technical prototyping and active user involvement. We introduce and examine “experimental object-oriented modelling” as the intersection of these practices. The contributions of this thesis are expected to be within three perspectives on models and modelling in experimental system development. … We discuss techniques for handling and representing uncertainty when modelling in experimental system development. These techniques are centred on patterns and styles for handling uncertainty in object-oriented software architectures. Tools: we present the Knight tool designed for collaborative modelling…

  8. Objective information about energy models

    Energy Technology Data Exchange (ETDEWEB)

    Hale, D.R. (Energy Information Administration, Washington, DC (United States))

    1993-01-01

    This article describes the Energy Information Administration's program to develop objective information about its modeling systems without hindering model development and applications, and within budget and human resource constraints. 16 refs., 1 fig., 2 tabs.

  9. Formal Transformations from Graphically-Based Object-Oriented Representations to Theory-Based Specifications

    Science.gov (United States)

    1996-06-01

    …for Software Synthesis (KBSE, IEEE, 1993); Kang, Kyo C., et al., Feature-Oriented Domain Analysis (FODA) Feasibility Study, Technical Report… and usefulness in domain analysis and modeling. Rumbaugh uses three distinct views to describe a domain: (1) the object model describes structural… Gibbons describe a methodology where Structured Analysis is used to build a hierarchical system structure chart. This structure chart is then translated…

  10. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.

  11. Dynamic Decision Making for Graphical Models Applied to Oil Exploration

    CERN Document Server

    Martinelli, Gabriele; Hauge, Ragnar

    2012-01-01

    We present a framework for sequential decision making in problems described by graphical models. The setting is given by dependent discrete random variables with associated costs or revenues. In our examples, the dependent variables are the potential outcomes (oil, gas or dry) when drilling a petroleum well. The goal is to develop an optimal selection strategy that incorporates a chosen utility function within an approximated dynamic programming scheme. We propose and compare different approximations, from simple heuristics to more complex iterative schemes, and we discuss their computational properties. We apply our strategies to oil exploration over multiple prospects modeled by a directed acyclic graph, and to a reservoir drilling decision problem modeled by a Markov random field. The results show that the suggested strategies clearly improve the simpler intuitive constructions, and this is useful when selecting exploration policies.

  12. Ice-sheet modelling accelerated by graphics cards

    Science.gov (United States)

    Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek

    2014-11-01

    Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.

  13. Design and Application of an Object Oriented Graphical Database Management System for Synthetic Environments

    Science.gov (United States)

    1991-12-01

    …terrain in a graphical environment is difficult to accomplish, especially without cues like stereopsis, shadows, sound, pressure, and many other inputs… the end product. It acts as a traffic light, directing messages from one class to the next. In addition to the classes already described in this…

  14. Numerical simulation of nonlinear feedback model of saccade generation circuit implemented in the LabView graphical programming language.

    Science.gov (United States)

    Jackson, M E; Gnadt, J W

    1999-03-01

    The object-oriented graphical programming language LabView was used to implement the numerical solution to a computational model of saccade generation in primates. The computational model simulates the activity and connectivity of anatomical structures known to be involved in saccadic eye movements. The LabView program provides a graphical user interface to the model that makes it easy to observe and modify the behavior of each element of the model. Essential elements of the source code of the LabView program are presented and explained. A copy of the model is available for download from the internet.
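
    As a rough illustration of the kind of circuit such models integrate numerically, here is a minimal Euler-integration sketch of a Robinson-style local-feedback burst generator in Python (a simplification, not the paper's LabView model; the gain and the burst nonlinearity are invented):

```python
# Local-feedback saccade burst generator, integrated with a fixed Euler step.
import numpy as np

dt = 0.001                      # integration step (s)
target = 10.0                   # desired eye displacement (deg)
eye, integrator = 0.0, 0.0
trace = []

for _ in range(200):            # simulate 200 ms
    motor_error = target - integrator                       # feedback of displacement so far
    burst = 600.0 * (1.0 - np.exp(-abs(motor_error) / 5.0)) * np.sign(motor_error)
    integrator += burst * dt                                 # resettable feedback integrator
    eye += burst * dt                                        # eye velocity ~ burst firing rate
    trace.append(eye)

print(f"final eye position: {trace[-1]:.2f} deg (target {target} deg)")
```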

  15. OPM Scheme Editor 2: A graphical editor for specifying object-protocol structures

    Energy Technology Data Exchange (ETDEWEB)

    Chen, I-Min A.; Markowitz, V.M.; Pang, F.; Ben-Shachar, O.

    1993-07-01

    This document describes an X-window based Schema Editor for the Object-Protocol Model (OPM). OPM is a data model that supports the specification of complex object and protocol classes. Objects and protocols are qualified in OPM by attributes that are defined over (associated with) value classes. Connections of object and protocol classes are expressed in OPM via attributes. OPM supports the specification (expansion) of protocols in terms of alternatives and sequences of component (sub) protocols. The OPM Schema Editor allows specifying, displaying, modifying, and browsing through OPM schemas. The OPM Schema Editor generates an output file that can be used as input to an OPM schema translation tool that maps OPM schemas into definitions for relational database management systems. The OPM Schema Editor was implemented using C++ and the X11-based Motif toolkit, on a Sun SPARCstation under Sun Unix OS 4.1. This document consists of the following parts: (1) A tutorial consisting of seven introductory lessons for the OPM Schema Editor. (2) A reference manual describing all the windows and functions of the OPM Schema Editor. (3) An appendix with an overview of OPM.

  16. Simulating Lattice Spin Models on Graphics Processing Units

    CERN Document Server

    Levy, Tal; Rabani, Eran; 10.1021/ct100385b

    2012-01-01

    Lattice spin models are useful for studying critical phenomena and allow the extraction of equilibrium and dynamical properties. Simulations of such systems are usually based on Monte Carlo (MC) techniques, and the main difficulty is often the large computational effort needed when approaching critical points. In this work, it is shown how such simulations can be accelerated with the use of NVIDIA graphics processing units (GPUs) using the CUDA programming architecture. We have developed two different algorithms for lattice spin models, the first useful for equilibrium properties near a second-order phase transition point and the second for dynamical slowing down near a glass transition. The algorithms are based on parallel MC techniques, and speedups from 70- to 150-fold over conventional single-threaded computer codes are obtained using consumer-grade hardware.
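
    The checkerboard (sub-lattice) decomposition commonly used for data-parallel Metropolis updates can be illustrated on the CPU with vectorized numpy; GPU kernels of the kind described here follow the same pattern with one thread per site of the active sub-lattice (this sketch illustrates the idea and is not the authors' CUDA code).

```python
# Checkerboard Metropolis sweeps for the 2-D Ising model (J = 1, periodic BCs).
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    n = spins.shape[0]
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    for parity in (0, 1):                         # update the two sub-lattices in turn
        mask = (ii + jj) % 2 == parity
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
               np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr                    # energy change if the spin is flipped
        flip = (rng.random(spins.shape) < np.exp(-beta * dE)) & mask
        spins[flip] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(128, 128))
for _ in range(200):
    checkerboard_sweep(spins, beta=0.5, rng=rng)   # beta above beta_c ~ 0.4407: ordered phase
print("magnetisation per spin:", abs(spins.mean()))
```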

  17. VIDEO MULTI-TARGET TRACKING BASED ON PROBABILISTIC GRAPHICAL MODEL

    Institute of Scientific and Technical Information of China (English)

    Xu Feng; Huang Chenrong; Wu Zhengjun; Xu Lizhong

    2011-01-01

    In the technique of video multi-target tracking, the common particle filter cannot deal well with uncertain relations among multiple targets. To solve this problem, many researchers use data association methods to reduce the multi-target uncertainty. However, the traditional data association method finds it difficult to track accurately when the target is occluded. To remove the occlusion in the video, combined with the theory of data association, this paper adopts the probabilistic graphical model for multi-target modeling and analysis of the targets' relationships in the particle filter framework. Experimental results show that the proposed algorithm can solve the occlusion problem better compared with the traditional algorithm.

  18. Object-relational mapping model

    OpenAIRE

    Žukauskas, Arūnas

    2007-01-01

    This work analyses problems arising from the semantic gap between the relational and object-oriented approaches and discusses how object-relational mapping can be used to solve this problem. After an analysis of object-relational mapping (ORM) principles and the features of existing ORM frameworks, a model is suggested that implements ORM using MVP principles in a way that retains most of the advantages of both approaches and is well suited for transitioning existing…

  19. PECULIARITIES OF CLOTHES MODELLING BY MEANS OF GRAPHIC DESIGN

    OpenAIRE

    Fatima Mustafa OBARI

    2014-01-01

    The paper examines some aspects of fashion design for garments using graphic design software. The study focuses on the course of designing with graphics software and on creating the dimensional environment that shapes a garment and its alterations within the scope of current trends in fashion design. The study observes creative sketching and designing tasks, the designed shape, and the artist's vision of the item in the graphic design. The paper…

  20. Air pollution modelling using a graphics processing unit with CUDA

    CERN Document Server

    Molnar, Ferenc; Meszaros, Robert; Lagzi, Istvan; 10.1016/j.cpc.2009.09.008

    2010-01-01

    The Graphics Processing Unit (GPU) is a powerful tool for parallel computing. In the past years the performance and capabilities of GPUs have increased, and the Compute Unified Device Architecture (CUDA) - a parallel computing architecture - has been developed by NVIDIA to utilize this performance in general purpose computations. Here we show for the first time a possible application of GPU for environmental studies serving as a basement for decision making strategies. A stochastic Lagrangian particle model has been developed on CUDA to estimate the transport and the transformation of the radionuclides from a single point source during an accidental release. Our results show that parallel implementation achieves typical acceleration values in the order of 80-120 times compared to CPU using a single-threaded implementation on a 2.33 GHz desktop computer. Only very small differences have been found between the results obtained from GPU and CPU simulations, which are comparable with the effect of stochastic tran...
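
    A serial Python/numpy sketch of the stochastic Lagrangian particle idea is given below (work of this kind maps the per-particle loop onto CUDA threads); wind, turbulence and source parameters are invented, and only advection, a Gaussian random walk, ground reflection and radioactive decay are included.

```python
# Stochastic Lagrangian particle transport from a single point source.
import numpy as np

rng = np.random.default_rng(7)
n, dt, steps = 200_000, 10.0, 360                 # particles, time step (s), 1 hour total
pos = np.zeros((n, 3))                            # release at a single point source
wind = np.array([5.0, 1.0, 0.0])                  # mean wind (m/s)
sigma = np.array([2.0, 2.0, 0.5])                 # turbulent velocity std dev (m/s)
decay = np.log(2) / 8.02 / 86400                  # e.g. I-131 decay constant (1/s)
activity = np.ones(n)

for _ in range(steps):
    pos += (wind + sigma * rng.standard_normal((n, 3))) * dt   # advection + random walk
    pos[:, 2] = np.abs(pos[:, 2])                              # reflect at the ground
    activity *= np.exp(-decay * dt)                            # radioactive decay

print("mean plume position (m):", pos.mean(axis=0).round(1))
print("surviving activity fraction:", activity[0])
```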

  1. Kinematic modelling of disc galaxies using graphics processing units

    Science.gov (United States)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently under-way or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
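
    The model-fitting step itself can be illustrated with a few lines of Python/SciPy (not the GBKFIT code): recover the parameters of a simple arctan rotation curve from noisy synthetic data with a Levenberg-Marquardt least-squares solver.

```python
# Fit an arctan rotation-curve model to synthetic velocities with LM least squares.
import numpy as np
from scipy.optimize import least_squares

def vrot(r, v_max, r_turn):
    return (2.0 / np.pi) * v_max * np.arctan(r / r_turn)

rng = np.random.default_rng(1)
r = np.linspace(0.5, 20.0, 40)                             # radius (kpc)
data = vrot(r, 220.0, 2.5) + rng.normal(0, 5.0, r.size)    # "observed" velocities (km/s)

fit = least_squares(lambda p: vrot(r, *p) - data, x0=[100.0, 1.0], method="lm")
print("best-fit v_max, r_turn:", fit.x.round(2))
```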

  2. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    Science.gov (United States)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during decision-making processes. The main requirements are that the numerical results have to be accurate and simulation models must be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The accuracy of the resulting numerical model was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model, as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem. Computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) for significantly decreasing the simulation time [3, 4]. The numerical scheme implemented on the GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the graphical hardware technology are compared against single-core (sequential) and multi-core (parallel) CPU implementations. References [Juez et al. (2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014) A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources, 71, 93-109. [Juez et al. (2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013) 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics, 225, 166-204. [Lacasta et al. (2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014) An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software, 78, 1-15. [Lacasta
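
    The coupled 2D shallow water/Exner model itself is not reproduced here; as a heavily simplified, hedged stand-in, the sketch below shows the shape of an explicit first-order upwind finite-volume update (for 1D linear advection with a CFL-limited time step), the kind of per-cell loop that maps naturally onto GPU threads.

      import numpy as np

      nx, L = 200, 100.0
      dx = L / nx
      a = 1.5                                      # constant advection speed (m/s), a > 0
      dt = 0.9 * dx / abs(a)                       # CFL-limited explicit time step

      x = (np.arange(nx) + 0.5) * dx
      q = np.exp(-0.5 * ((x - 20.0) / 3.0) ** 2)   # initial cell averages

      def upwind_step(q):
          flux = a * q                             # upwind interface flux (take the left cell)
          q_new = q.copy()
          q_new[1:] -= dt / dx * (flux[1:] - flux[:-1])
          return q_new

      for _ in range(100):
          q = upwind_step(q)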

  3. Kinematic Modelling of Disc Galaxies using Graphics Processing Units

    CERN Document Server

    Bekiaris, Georgios; Fluke, Christopher J; Abraham, Roberto

    2015-01-01

    With large-scale Integral Field Spectroscopy (IFS) surveys of thousands of galaxies currently under-way or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the Graphics Processing Unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and Nested Sampling algorithms, but also a naive brute-force approach based on Nested Grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multi-threaded dual CPU configuration. Our method's accuracy, precision and robustness a...

  4. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  5. Graphic Models of Nicknames in the German-Speaking Internet-Space

    Directory of Open Access Journals (Sweden)

    Viktoriya Viktorovna Kazyaba

    2015-12-01

    The object of the study is a specific anthroponymic element of the onomastic system in German – the network name (nickname) and its representation in the German section of the Internet. Being a unit of informal secondary artificial nomination in computer-mediated communication, this anthroponym performs a most essential function – self-nomination of the virtual personality. Graphical means of nickname creation serve as the research subject of the article. The data under analysis are the nicknames and attendant personal data of German-speaking users of Internet services such as Twitter, ICQ, Facebook, Flickr, World of Tanks and World of Warcraft. It was found that the graphical elements of nickname composition realize informative, play-involving, emotional and aesthetic functions, because they provide additional information about the communicants and form their speech masks. The following graphic models of nicknames are revealed and described in the article: 1) nicknames based on numerals and their meanings, 2) nicknames based on capitalization of signs, 3) nicknames with emoticon smileys, 4) nicknames constructed with iteration, 5) nicknames based on «leetspeak», 6) nicknames based on the use of signs from different alphabets. The research proves that the nicknames of German-speaking Internet users represent, as a rule, variations of signs from different models.

  6. Feedback Message Passing for Inference in Gaussian Graphical Models

    CERN Document Server

    Liu, Ying; Anandkumar, Animashree; Willsky, Alan S

    2011-01-01

    While loopy belief propagation (LBP) performs reasonably well for inference in some Gaussian graphical models with cycles, its performance is unsatisfactory for many others. In particular for some models LBP does not converge, and in general when it does converge, the computed variances are incorrect (except for cycle-free graphs for which belief propagation (BP) is non-iterative and exact). In this paper we propose feedback message passing (FMP), a message-passing algorithm that makes use of a special set of vertices (called a feedback vertex set or FVS) whose removal results in a cycle-free graph. In FMP, standard BP is employed several times on the cycle-free subgraph excluding the FVS while a special message-passing scheme is used for the nodes in the FVS. The computational complexity of exact inference is O(k^2 n), where k is the number of feedback nodes, and n is the total number of nodes. When the size of the FVS is very large, FMP is intractable. Hence we propose approximat...
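
    FMP itself is not reproduced here; as a hedged reference point, the short NumPy sketch below computes, by direct inversion of the information-form parameters (J, h), the exact marginal means and variances of a small loopy Gaussian graphical model, which are exactly the quantities that LBP may get wrong on graphs with cycles and that FMP recovers at O(k^2 n) cost.

      import numpy as np

      # information form: p(x) proportional to exp(-0.5 x'Jx + h'x), J sparse and symmetric
      J = np.array([[ 2.0, -0.5,  0.0, -0.4],
                    [-0.5,  2.0, -0.5,  0.0],
                    [ 0.0, -0.5,  2.0, -0.5],
                    [-0.4,  0.0, -0.5,  2.0]])   # a single 4-cycle, so LBP variances would be biased
      h = np.array([1.0, 0.0, -1.0, 0.5])

      cov = np.linalg.inv(J)        # O(n^3) direct inversion; FMP reaches the same answer in O(k^2 n)
      mean = cov @ h                # marginal means
      variances = np.diag(cov)      # exact marginal variances
      print(mean, variances)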

  7. Graphical User Interface for Simulink Integrated Performance Analysis Model

    Science.gov (United States)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X Engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for the Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, in order to save the maximum amount of time and money and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink model, without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink model, and get the output from the Simulink model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI will construct a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared with other values.

  8. PECULIARITIES OF CLOTHES MODELLING BY MEANS OF GRAPHIC DESIGN

    Directory of Open Access Journals (Sweden)

    Fatima Mustafa OBARI

    2014-01-01

    The paper examined some aspects of fashion design for garments with the application of graphic design software and facilities. The study focused on the process of designing with the graphics software and on creating the dimensional and extensive environment, which influenced the shaping of a garment and alterations to it within the scope of current trends in fashion design. The study observed creative sketching and designing tasks, the designed shape, and the artist's vision of the item in the graphic design. The paper showed the course of applying graphic design, which consisted of four interrelated stages of design performance, i.e. sketchpad work and the morphological, function-specific and manufacturing steps of design. The study has shown the specifics of design performance aimed at raising the conceptual and technology-related level of awareness required.

  9. A Software Implementation of an Interactive Graphics System for Three Dimensional Modeling and Layout.

    Science.gov (United States)

    1986-03-01

    The system was implemented in the C programming language using the IRIS Graphics Library on the Silicon Graphics Inc. IRIS Turbo 2400 interactive graphics system. The first part of the research is concerned with drawing and viewing a 3-D building model, and with examining the interactive techniques required for building walkthrough mechanisms. The second part is concerned with the development of techniques necessary to allow the placement of 3-D piping into a 3-D building model using a 2-D graphics display and a mouse device. The algorithms and implementation of these techniques are

  10. A Common Platform for Graphical Models in R: The gRbase Package

    Directory of Open Access Journals (Sweden)

    Claus Dethlefsen

    2005-12-01

    The gRbase package is intended to set the framework for computer packages for data analysis using graphical models. The gRbase package is developed for the open-source language R and is available for several platforms. The package is intended to be widely extensible and flexible so that package developers may implement further types of graphical models using the available methods. The gRbase package consists of a set of S version 3 classes and associated methods for representing data and models. The package is linked to the dynamicGraph package (Badsberg 2005), an interactive graphical user interface for manipulating graphs. In this paper, we show how these building blocks can be combined and integrated with inference engines in the special case of hierarchical loglinear models. We also illustrate how to extend the package to deal with other types of graphical models, in this case the graphical Gaussian models.

  11. A Gaussian graphical model approach to climate networks

    Energy Technology Data Exchange (ETDEWEB)

    Zerenner, Tanja, E-mail: tanjaz@uni-bonn.de [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Friederichs, Petra; Hense, Andreas [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany)

    2014-06-15

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data, the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
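
    The key computational step (deriving edges from partial rather than Pearson correlations) can be illustrated with a small hedged NumPy sketch, with synthetic data standing in for gridded climate anomalies: the partial correlation matrix follows directly from the inverse covariance (precision) matrix.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 5))            # stand-in for anomalies at 5 grid points
      X[:, 1] += 0.8 * X[:, 0]                     # direct dependence 0 -> 1
      X[:, 2] += 0.8 * X[:, 1]                     # direct dependence 1 -> 2 (0 and 2 linked only indirectly)

      pearson = np.corrcoef(X, rowvar=False)

      P = np.linalg.inv(np.cov(X, rowvar=False))   # precision matrix
      d = np.sqrt(np.diag(P))
      partial = -P / np.outer(d, d)                # partial correlation: -P_ij / sqrt(P_ii * P_jj)
      np.fill_diagonal(partial, 1.0)

      print(pearson[0, 2], partial[0, 2])          # Pearson(0,2) is large, partial(0,2) is near zero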

  12. A Graphical Proof of the Positive Entropy Change in Heat Transfer between Two Objects

    Science.gov (United States)

    Kiatgamolchai, Somchai

    2015-01-01

    It is well known that heat transfer between two objects results in a positive change in the total entropy of the two-object system. The second law of thermodynamics states that the entropy change of a naturally irreversible process is positive. In other words, if the entropy change of any process is positive, it can be inferred that such a process…
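
    In symbols (a hedged restatement of the textbook argument for a small amount of heat δQ passing from a hot body at temperature T_h to a cold body at T_c, both treated as effectively constant during the transfer), in LaTeX notation:

      \Delta S_{\text{total}} = \Delta S_{\text{hot}} + \Delta S_{\text{cold}}
        = -\frac{\delta Q}{T_h} + \frac{\delta Q}{T_c}
        = \delta Q \, \frac{T_h - T_c}{T_h T_c} > 0
        \qquad \text{whenever } T_h > T_c \text{ and } \delta Q > 0.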

  13. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    Science.gov (United States)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration, such as full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-domain finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether adopting GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License (GPL). This implementation uses a second-order centred difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite-difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size
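
    As a hedged, single-threaded toy version (not the 2D viscoelastic GPU/OpenCL code), the NumPy sketch below advances a 1D acoustic wave equation using the same second-order centred differences in time and space mentioned above.

      import numpy as np

      nx, dx = 400, 5.0            # grid points and spacing (m)
      c = 2000.0                   # wave speed (m/s)
      dt = 0.4 * dx / c            # time step within the explicit stability limit

      u_prev = np.zeros(nx)
      u = np.zeros(nx)
      u[nx // 2] = 1.0             # initial impulse in the middle of the model

      r2 = (c * dt / dx) ** 2
      for _ in range(500):
          u_next = np.zeros_like(u)
          # second-order centred differences in both time and space
          u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                          + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
          u_prev, u = u, u_next    # advance the two stored time levels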

  14. POMP - Pervasive Object Model Project

    DEFF Research Database (Denmark)

    Schougaard, Kari Rye; Schultz, Ulrik Pagh

    ... mobility. Mobile agent platforms are often based on such virtual machines, but typically do not provide strong mobility (the ability to migrate at any program point), and have limited support for multi-threaded applications, although there are exceptions. For a virtual machine to support mobile applications, we consider it essential that a standard object-oriented style of programming can be used for those parts of the application that do not concern its mobility. This position paper describes an ongoing effort to implement a language and a virtual machine for applications that execute in a pervasive computing environment. This system, named POM (Pervasive Object Model), supports applications split into coarse-grained, strongly mobile units that communicate using method invocations through proxies. We are currently investigating efficient execution of mobile applications, scalability to suit...

  15. Ferromanganese Furnace Modelling Using Object-Oriented Principles

    Energy Technology Data Exchange (ETDEWEB)

    Wasboe, S.O.

    1996-12-31

    This doctoral thesis defines an object-oriented framework for aiding unit process modelling and applies it to model high-carbon ferromanganese furnaces. A framework is proposed for aiding the modelling of the internal topology and the phenomena taking place inside unit processes. Complex unit processes may consist of a number of zones where different phenomena take place. A topology is therefore defined for the unit process itself, which shows the relations between the zones. Inside each zone there is a set of chemical species and phenomena, such as reactions, phase transitions, heat transfer, etc. A formalized graphical methodology is developed as a tool for modelling these zones and their interaction. The symbols defined in the graphical framework are associated with objects and classes. The rules for linking the objects are described using OMT (Object Modeling Technique) diagrams and formal language formulations. The basic classes that are defined are implemented using the C++ programming language. The ferromanganese process is a complex unit process. A general description of the process equipment is given, along with a detailed discussion of the process itself and a system-theoretical overview of it. The object-oriented framework is then used to develop a dynamic model based on mass and energy balances. The model is validated by measurements from an industrial furnace. 101 refs., 119 figs., 20 tabs.

  16. A probabilistic graphical model approach to stochastic multiscale partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Jiang [Materials Process Design and Control Laboratory, Sibley School of Mechanical and Aerospace Engineering, Cornell University, 101 Frank H.T. Rhodes Hall, Ithaca, NY 14853-3801 (United States); Zabaras, Nicholas, E-mail: nzabaras@gmail.com [Materials Process Design and Control Laboratory, Sibley School of Mechanical and Aerospace Engineering, Cornell University, 101 Frank H.T. Rhodes Hall, Ithaca, NY 14853-3801 (United States); Center for Applied Mathematics, Cornell University, 657 Frank H.T. Rhodes Hall, Ithaca, NY 14853 (United States)

    2013-10-01

    We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.

  17. Distributed Object Medical Imaging Model

    Directory of Open Access Journals (Sweden)

    Ahmad Shukri Mohd Noor

    2009-09-01

    Digital medical informatics and images are commonly used in hospitals today. Because of the interrelatedness of the radiology department and other departments, especially the intensive care unit and emergency department, the transmission and sharing of medical images has become a critical issue. Our research group has developed a Java-based Distributed Object Medical Imaging Model (DOMIM) to facilitate the rapid development and deployment of medical imaging applications in a distributed environment that can be shared and used by related departments and mobile physicians. DOMIM is a unique suite of multimedia telemedicine applications developed for use by medical-related organizations. The applications support real-time exchange of patient data, image files, audio and video diagnosis annotations. DOMIM enables joint collaboration between radiologists and physicians while they are at distant geographical locations. The DOMIM environment consists of heterogeneous, autonomous, and legacy resources. The Common Object Request Broker Architecture (CORBA), Java Database Connectivity (JDBC), and the Java language provide the capability to combine the DOMIM resources into an integrated, interoperable, and scalable system. The underlying technologies, including IDL, ORB, Event Service, IIOP, JDBC/ODBC, legacy system wrapping and the Java implementation, are explored. This paper explores a distributed collaborative CORBA/JDBC-based framework that will enhance medical information management requirements and development. It encompasses a new paradigm for the delivery of health services that requires process reengineering, cultural changes, as well as organizational changes.

  18. Distributed Object Medical Imaging Model

    CERN Document Server

    Noor, Ahmad Shukri Mohd

    2009-01-01

    Digital medical informatics and images are commonly used in hospitals today. Because of the interrelatedness of the radiology department and other departments, especially the intensive care unit and emergency department, the transmission and sharing of medical images has become a critical issue. Our research group has developed a Java-based Distributed Object Medical Imaging Model (DOMIM) to facilitate the rapid development and deployment of medical imaging applications in a distributed environment that can be shared and used by related departments and mobile physicians. DOMIM is a unique suite of multimedia telemedicine applications developed for use by medical-related organizations. The applications support real-time exchange of patients' data, image files, audio and video diagnosis annotations. The DOMIM enables joint collaboration between radiologists and physicians while they are at distant geographical locations. The DOMIM environment consists of heterogeneous, autonomous, and legacy resources. The Common...

  19. Testing coefficients of AR and bilinear time series models by a graphical approach

    Institute of Scientific and Technical Information of China (English)

    IP; WaiCheung

    2008-01-01

    AR and bilinear time series models are expressed as time series chain graphical models, based on which it is shown that the coefficients of AR and bilinear models are the conditional correlation coefficients conditioned on the other components of the time series. Then a graphically based procedure is proposed to test the significance of the coefficients of AR and bilinear time series. Simulations show that our procedure performs well in both size and power.
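
    For contrast with the graphical procedure described above, a conventional asymptotic significance test for AR coefficients can be run with statsmodels; the sketch below is only illustrative (simulated AR(1) data, hypothetical settings) and is not the authors' method.

      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      rng = np.random.default_rng(0)
      n, phi = 500, 0.6
      y = np.zeros(n)
      for t in range(1, n):                  # simulate an AR(1) process with coefficient 0.6
          y[t] = phi * y[t - 1] + rng.standard_normal()

      res = AutoReg(y, lags=1).fit()         # OLS fit of the AR(1) model
      print(res.params)                      # intercept and AR coefficient (close to 0.6)
      print(res.pvalues)                     # standard asymptotic significance tests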

  20. Co-occurrence rate networks: towards separate training for undirected graphical models

    NARCIS (Netherlands)

    Zhu, Zhemin

    2015-01-01

    Dependence is a universal phenomenon which can be observed everywhere. In machine learning, probabilistic graphical models (PGMs) represent dependence relations with graphs. PGMs find wide applications in natural language processing (NLP), speech processing, computer vision, biomedicine, information

  1. Discrete-time dynamic graphical games: model-free reinforcement learning solution

    Institute of Scientific and Technical Information of China (English)

    Mohammed I ABOUHEAF; Frank L LEWIS; Magdi S MAHMOUD; Dariusz G MIKULSKI

    2015-01-01

    This paper introduces a model-free reinforcement learning technique that is used to solve a class of dynamic games known as dynamic graphical games. The graphical game results from multi-agent dynamical systems, where pinning control is used to make all the agents synchronize to the state of a command generator or a leader agent. Novel coupled Bellman equations and Hamiltonian functions are developed for the dynamic graphical games. The Hamiltonian mechanics are used to derive the necessary conditions for optimality. The solution for the dynamic graphical game is given in terms of the solution to a set of coupled Hamilton-Jacobi-Bellman equations developed herein. The Nash equilibrium solution for the graphical game is given in terms of the solution to the underlying coupled Hamilton-Jacobi-Bellman equations. An online model-free policy iteration algorithm is developed to learn the Nash solution for the dynamic graphical game. This algorithm does not require any knowledge of the agents' dynamics. A proof of convergence for this multi-agent learning algorithm is given under a mild assumption about the inter-connectivity properties of the graph. A gradient descent technique with critic network structures is used to implement the policy iteration algorithm and solve the graphical game online in real time.
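
    The multi-agent, model-free algorithm itself is not reproduced here. As a hedged point of reference, the sketch below shows classical single-agent tabular policy iteration on a toy MDP with known dynamics (so it is model-based, unlike the paper's method); the paper's scheme can be read as an online, critic-network generalization of this evaluate/improve loop to coupled graphical games.

      import numpy as np

      # toy MDP: 3 states, 2 actions; P[a, s, s'] transition probabilities, R[s, a] rewards
      P = np.array([[[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]],
                    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]]])
      R = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 0.5]])
      gamma = 0.9
      policy = np.zeros(3, dtype=int)

      for _ in range(50):
          # policy evaluation: solve (I - gamma * P_pi) V = R_pi for the current policy
          P_pi = P[policy, np.arange(3)]
          R_pi = R[np.arange(3), policy]
          V = np.linalg.solve(np.eye(3) - gamma * P_pi, R_pi)
          # policy improvement: act greedily with respect to the current value function
          Q = R.T + gamma * P @ V              # Q[a, s]
          new_policy = Q.argmax(axis=0)
          if np.array_equal(new_policy, policy):
              break
          policy = new_policy

      print(policy, V)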

  2. Assessing the Graphical and Algorithmic Structure of Hierarchical Coloured Petri Net Models

    Directory of Open Access Journals (Sweden)

    George Benwell

    1994-11-01

    Petri nets, as a modelling formalism, are utilised for the analysis of processes, whether for explicit understanding, database design or business process re-engineering. The formalism, however, can be represented on a virtual continuum from highly graphical to largely algorithmic. The use and understanding of the formalism will, in part, therefore depend on the resultant complexity and power of the representation and on the graphical or algorithmic preference of the user. This paper develops a metric which will indicate the graphical or algorithmic tendency of hierarchical coloured Petri nets.

  3. Medical image segmentation using object atlas versus object cloud models

    Science.gov (United States)

    Phellan, Renzo; Falcão, Alexandre X.; Udupa, Jayaram K.

    2015-03-01

    Medical image segmentation is crucial for quantitative organ analysis and surgical planning. Since interactive segmentation is not practical in a production-mode clinical setting, automatic methods based on 3D object appearance models have been proposed. Among them, approaches based on object atlases are the most actively investigated. A key drawback of these approaches is that they require a time-costly image registration process to build and deploy the atlas. Object cloud models (OCM) have been introduced to avoid registration, considerably speeding up the whole process, but they have not been compared to object atlas models (OAM). The present paper fills this gap by presenting a comparative analysis of the two approaches in the task of individually segmenting nine anatomical structures of the human body. Our results indicate that OCM achieve statistically significantly better accuracy for seven anatomical structures, in terms of Dice Similarity Coefficient and Average Symmetric Surface Distance.
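
    Of the two reported metrics, the Dice Similarity Coefficient is simple enough to state directly; the hedged sketch below computes it for two illustrative binary masks (the Average Symmetric Surface Distance is omitted for brevity).

      import numpy as np

      def dice(seg, ref):
          # Dice Similarity Coefficient between two binary segmentation masks
          seg, ref = seg.astype(bool), ref.astype(bool)
          inter = np.logical_and(seg, ref).sum()
          return 2.0 * inter / (seg.sum() + ref.sum())

      ref = np.zeros((100, 100), dtype=bool)
      ref[30:70, 30:70] = True                 # "ground-truth" object
      seg = np.zeros_like(ref)
      seg[32:72, 28:68] = True                 # automatic segmentation, slightly shifted

      print(round(dice(seg, ref), 3))          # about 0.9 for this overlap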

  4. Convergent and Correct Message Passing Schemes for Optimization Problems over Graphical Models

    CERN Document Server

    Ruozzi, Nicholas

    2010-01-01

    The max-product algorithm, which attempts to compute the most probable assignment (MAP) of a given probability distribution, has recently found applications in quadratic minimization and combinatorial optimization. Unfortunately, the max-product algorithm is not guaranteed to converge and, even if it does, is not guaranteed to produce the MAP assignment. In this work, we provide a simple derivation of a new family of message passing algorithms. We first show how to arrive at this general message passing scheme by "splitting" the factors of our graphical model and then we demonstrate that this construction can be extended beyond integral splitting. We prove that, for any objective function which attains its maximum value over its domain, this new family of message passing algorithms always contains a message passing scheme that guarantees correctness upon convergence to a unique estimate. We then adopt a serial message passing schedule and prove that, under mild assumptions, such a schedule guarantees the conv...

  5. OMTROLL - Object Modeling in Troll

    NARCIS (Netherlands)

    Lipeck, Udo W.; Wieringa, Roelf J.; Koschorrek, G.; Jungclaus, R.; Hartel, P.; Saake, G.; Hartmann, T.

    We make an attempt to use concepts of the OMT analysis stage to develop formal object-oriented specifications in the Troll language. The purpose is twofold: on the one hand, ambiguities, vaguenesses, etc. in OMT (and other OOA approaches) can be discovered and eliminated easier; furthermore, clear

  6. OMTROLL - Object Modeling in Troll

    NARCIS (Netherlands)

    Wieringa, R.J.; Jungclaus, R.; Hartel, P.; Saake, G.; Hartmann, T.; Lipeck, Udo W.; Koschorrek, G.

    1993-01-01

    We make an attempt to use concepts of the OMT analysis stage to develop formal object-oriented specifications in the Troll language. The purpose is twofold: on the one hand, ambiguities, vaguenesses, etc. in OMT (and other OOA approaches) can be discovered and eliminated easier; furthermore, clear s

  7. An Algebraic Graphical Model for Decision with Uncertainties, Feasibilities, and Utilities

    CERN Document Server

    Pralet, C; Verfaillie, G; 10.1613/jair.2151

    2011-01-01

    Numerous formalisms and dedicated algorithms have been designed in the last decades to model and solve decision making problems. Some formalisms, such as constraint networks, can express "simple" decision problems, while others are designed to take into account uncertainties, unfeasible decisions, and utilities. Even in a single formalism, several variants are often proposed to model different types of uncertainty (probability, possibility...) or utility (additive or not). In this article, we introduce an algebraic graphical model that encompasses a large number of such formalisms: (1) we first adapt previous structures from Friedman, Chu and Halpern for representing uncertainty, utility, and expected utility in order to deal with generic forms of sequential decision making; (2) on these structures, we then introduce composite graphical models that express information via variables linked by "local" functions, thanks to conditional independence; (3) on these graphical models, we finally define a simple class ...

  8. A graphical model framework for decoding in the visual ERP-based BCI speller

    NARCIS (Netherlands)

    Martens, S.M.M.; Mooij, J.M.; Hill, N.J.; Farquhar, J.D.R.; Schölkopf, B.

    2011-01-01

    We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. We suggest two models for generating brain signals conditioned on

  9. Teaching Photovoltaic Array Modelling and Characterization Using a Graphical User Interface and a Flash Solar Simulator

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Sera, Dezso; Kerekes, Tamas

    2012-01-01

    This paper presents a set of laboratory tools aimed at supporting students from various backgrounds (with no programming experience) in understanding photovoltaic array modelling and characterization techniques. A graphical user interface (GUI) has been developed in Matlab for modelling PV arrays and characterizing...

  10. Planning of O&M for Offshore Wind Turbines using Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Nielsen, Jannie Jessen; Sørensen, John Dalsgaard

    2010-01-01

    The costs of operation and maintenance (O&M) for offshore wind turbines are large, and risk-based planning of O&M has the potential to reduce these costs. This paper presents how Bayesian graphical models can be used to establish a probabilistic damage model and include data from imperfect...

  11. SQL3 Object Model and Its Extension

    Institute of Scientific and Technical Information of China (English)

    ZHUANG Ji-feng; PENG Zhi-yong

    2004-01-01

    As the latest version of the relational database standard, SQL3 has not only been extended with many new relational features but also augmented with object-oriented technologies. This paper introduces the object-oriented features of SQL3 and then extends it with the object deputy model to support object view mechanisms.

  12. Compositional Model-Views with Generic Graphical User Interfaces

    NARCIS (Netherlands)

    Achten, P.M.; Eekelen, M.C.J.D. van; Plasmeijer, M.J.

    2004-01-01

    Creating GUI programs is hard even for prototyping purposes. Using the model-view paradigm makes it somewhat simpler since the model-view paradigm dictates that the model contains no GUI programming, as this is done by the views. Still, a lot of GUI programming is needed to implement the views. We

  13. Graphics gems

    CERN Document Server

    Heckbert, Paul S

    1994-01-01

    Graphics Gems IV contains practical techniques for 2D and 3D modeling, animation, rendering, and image processing. The book presents articles on polygons and polyhedra; a mix of formulas, optimized algorithms, and tutorial information on the geometry of 2D, 3D, and n-D space; transformations; and parametric curves and surfaces. The text also includes articles on ray tracing; shading 3D models; and frame buffer techniques. Articles on image processing; algorithms for graphical layout; basic interpolation methods; and subroutine libraries for vector and matrix algebra are also demonstrated. Com

  14. Computer Aided Design Modeling for Heterogeneous Objects

    CERN Document Server

    Gupta, Vikas; Tandon, Puneet

    2010-01-01

    Heterogeneous object design has been an active research area in recent years. Conventional CAD modeling approaches only provide the geometry and topology of the object, but do not contain any information with regard to the materials of the object, and so cannot be used for the fabrication of heterogeneous objects (HO) through rapid prototyping. Current research focuses on computer-aided design issues in heterogeneous object design. A new CAD modeling approach is proposed to integrate the material information into geometric regions and thus model the material distributions in the heterogeneous object. Gradient references are used to represent complex-geometry heterogeneous objects which have simultaneous geometric intricacies and accurate material distributions. The gradient references help in flexible manipulation and control of heterogeneous objects, which guarantees local control over the gradient regions of the developed heterogeneous objects. A systematic approach on data flow, processing, computer visualizat...

  15. Design Graphics

    Science.gov (United States)

    1990-01-01

    A mathematician, David R. Hedgley, Jr., developed a computer program that considers whether a line in a graphic model of a three-dimensional object should or should not be visible. Known as the Hidden Line Computer Code, the program automatically removes superfluous lines and displays an object from a specific viewpoint, just as the human eye would see it. An example of how one company uses the program is the experience of Birdair, which specializes in the production of fabric skylights and stadium covers. The fabric, called SHEERFILL, is a Teflon-coated fiberglass material developed in cooperation with the DuPont Company. SHEERFILL glazed structures are either tension structures or air-supported tension structures. Both are formed by patterned fabric sheets supported by a steel or aluminum frame or cable network. Birdair uses the Hidden Line Computer Code to illustrate a prospective structure to an architect or owner. The program generates a three-dimensional perspective with the hidden lines removed. This program is still used by Birdair and continues to be commercially available to the public.

  16. Graphical models and Bayesian domains in risk modelling: application in microbiological risk assessment.

    Science.gov (United States)

    Greiner, Matthias; Smid, Joost; Havelaar, Arie H; Müller-Graf, Christine

    2013-05-15

    Quantitative microbiological risk assessment (QMRA) models are used to reflect knowledge about complex real-world scenarios for the propagation of microbiological hazards along the feed and food chain. The aim is to provide insight into interdependencies among model parameters, typically with an interest in characterising the effect of risk mitigation measures. A particular requirement is to achieve clarity about the reliability of conclusions from the model in the presence of uncertainty. To this end, Monte Carlo (MC) simulation modelling has become a standard in so-called probabilistic risk assessment. In this paper, we elaborate on the application of Bayesian computational statistics in the context of QMRA. It is useful to explore the analogy between MC modelling and Bayesian inference (BI). This pertains in particular to the procedures for deriving prior distributions for model parameters. We illustrate using a simple example that the inability to cope with feedback among model parameters is a major limitation of MC modelling. However, BI models can be easily integrated into MC modelling to overcome this limitation. We refer to a BI submodel integrated into an MC model as a "Bayes domain". We also demonstrate that an entire QMRA model can be formulated as a Bayesian graphical model (BGM) and discuss the advantages of this approach. Finally, we show example graphs of MC, BI and BGM models, highlighting the similarities among the three approaches.
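
    A toy, hedged NumPy sketch of the "Bayes domain" idea (all distributions and numbers invented for illustration): a conjugate Beta posterior for a contamination prevalence, updated from hypothetical survey counts, is embedded inside an otherwise ordinary Monte Carlo risk simulation.

      import numpy as np

      rng = np.random.default_rng(0)
      n_iter = 100_000

      # Bayes domain: posterior for prevalence p given 12 positives out of 100 samples,
      # with a uniform Beta(1, 1) prior -> Beta(1 + 12, 1 + 88)
      prevalence = rng.beta(1 + 12, 1 + 88, n_iter)

      # remaining MC model: concentration and dose-response (illustrative distributions)
      concentration = rng.lognormal(mean=1.0, sigma=0.5, size=n_iter)   # CFU per serving
      r = 0.002                                                         # exponential dose-response parameter
      p_ill_given_contaminated = 1.0 - np.exp(-r * concentration)

      risk_per_serving = prevalence * p_ill_given_contaminated
      print(risk_per_serving.mean(), np.percentile(risk_per_serving, 97.5))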

  17. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...

  18. Graphical Gaussian models with edge and vertex symmetries

    DEFF Research Database (Denmark)

    Højsgaard, Søren; Lauritzen, Steffen L

    2008-01-01

    study the properties of such models and derive the necessary algorithms for calculating maximum likelihood estimates. We identify conditions for restrictions on the concentration and correlation matrices being equivalent. This is for example the case when symmetries are generated by permutation...... of variable labels. For such models a particularly simple maximization of the likelihood function is available...

  19. Object Oriented Modeling Of Social Networks

    NARCIS (Netherlands)

    Zeggelink, Evelien P.H.; Oosten, Reinier van; Stokman, Frans N.

    1996-01-01

    The aim of this paper is to explain principles of object oriented modeling in the scope of modeling dynamic social networks. As such, the approach of object oriented modeling is advocated within the field of organizational research that focuses on networks. We provide a brief introduction into the f

  20. Context-specific graphical models for discrete longitudinal data

    DEFF Research Database (Denmark)

    Edwards, David; Anantharama Ankinakatte, Smitha

    2015-01-01

    Ron et al. (1998) introduced a rich family of models for discrete longitudinal data called acyclic probabilistic finite automata. These may be represented as directed graphs that embody context-specific conditional independence relations. Here, the approach is developed from a statistical perspective. It is shown here that likelihood ratio tests may be constructed using standard contingency table methods, a model selection procedure that minimizes a penalized likelihood criterion is described, and a way to extend the models to incorporate covariates is proposed. The methods are applied...

  1. Model Verification and Validation Using Graphical Information Systems Tools

    Science.gov (United States)

    2013-07-31

    ... accuracy of model forecasts of currents in coastal areas. The MVV module is implemented as part of the Geospatial Analysis and Model Evaluation Software

  2. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

    Science.gov (United States)

    Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

    2015-09-01

    Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept of the GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.

  3. Thermal model for discrete vegetation and its solution on pixel scale using computer graphics

    Institute of Scientific and Technical Information of China (English)

    苏红波; 张仁华; 唐新斋; 孙晓敏; 朱治林

    2000-01-01

    In this paper, we discuss how the multi-reflection of thermal emission affects the calculation of radiation balance. With the help of computer graphics, the four components of discrete vegetation are analyzed in detail and the curves of BRDF for the discrete vegetation can be obtained as well. A new model is put forward to inverse the temperatures of four components. The solution obtained by using computer graphics is consistent with observations in the field experiment in Yucheng Remote Sensing Comprehensive Site of CAS. Furthermore, the method can be used to retrieve land surface temperature based on multi-angle thermal infrared remotely sensed data.

  4. Thermal model for discrete vegetation and its solution on pixel scale using computer graphics

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    In this paper, we discuss how the multi-reflection of thermal emission affects the calculation of radiation balance. With the help of computer graphics, the four components of discrete vegetation are analyzed in detail and the curves of BRDF for the discrete vegetation can be obtained as well. A new model is put forward to inverse the temperatures of four components. The solution obtained by using computer graphics is consistent with observations in the field experiment in Yucheng Remote Sensing Comprehensive Site of CAS. Furthermore, the method can be used to retrieve land surface temperature based on multi-angle thermal infrared remotely sensed data.

  5. Scaling-up spatially-explicit ecological models using graphics processors

    NARCIS (Netherlands)

    Koppel, Johan van de; Gupta, Rohit; Vuik, Cornelis

    2011-01-01

    How the properties of ecosystems relate to spatial scale is a prominent topic in current ecosystem research. Despite this, spatially explicit models typically include only a limited range of spatial scales, mostly because of computing limitations. Here, we describe the use of graphics processors to

  6. Design and analysis of CMOS analog signal processing circuits by means of a graphical MOST model

    NARCIS (Netherlands)

    Wallinga, Hans; Bult, Klaas

    1989-01-01

    A graphical representation of a simple MOST (metal-oxide-semiconductor transistor) model for the analysis of analog MOS circuits operating in strong inversion is given. It visualizes the principles of signal-processing techniques depending on the characteristics of an MOS transistor. Several lineari

  7. Copula Gaussian graphical models with penalized ascent Monte Carlo EM algorithm

    NARCIS (Netherlands)

    Abegaz, Fentaw; Wit, Ernst

    2015-01-01

    Typical data that arise from surveys, experiments, and observational studies include continuous and discrete variables. In this article, we study the interdependence among a mixed (continuous, count, ordered categorical, and binary) set of variables via graphical models. We propose an ℓ1-penalized
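
    The copula extension and the penalized MCEM algorithm are not reproduced here; for the plain Gaussian case, an ℓ1-penalized precision matrix (and hence the graph) can be estimated with scikit-learn's GraphicalLasso, as in this hedged sketch with synthetic data.

      import numpy as np
      from sklearn.covariance import GraphicalLasso

      rng = np.random.default_rng(0)
      X = rng.multivariate_normal(mean=np.zeros(4),
                                  cov=[[1.0, 0.6, 0.0, 0.0],
                                       [0.6, 1.0, 0.3, 0.0],
                                       [0.0, 0.3, 1.0, 0.0],
                                       [0.0, 0.0, 0.0, 1.0]],
                                  size=300)

      model = GraphicalLasso(alpha=0.05).fit(X)      # l1-penalized Gaussian graphical model
      precision = model.precision_
      edges = (np.abs(precision) > 1e-3) & ~np.eye(4, dtype=bool)
      print(np.round(precision, 2))
      print(edges)                                   # nonzero off-diagonal entries = graph edges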

  8. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    Science.gov (United States)

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...

  9. Scaling-up spatially-explicit ecological models using graphics processors

    NARCIS (Netherlands)

    Koppel, Johan van de; Gupta, Rohit; Vuik, Cornelis

    2011-01-01

    How the properties of ecosystems relate to spatial scale is a prominent topic in current ecosystem research. Despite this, spatially explicit models typically include only a limited range of spatial scales, mostly because of computing limitations. Here, we describe the use of graphics processors to

  10. ModelMuse: A U.S. Geological Survey Open-Source, Graphical User Interface for Groundwater Models

    Science.gov (United States)

    Winston, R. B.

    2013-12-01

    ModelMuse is a free publicly-available graphical preprocessor used to generate the input and display the output for several groundwater models. It is written in Object Pascal and the source code is available on the USGS software web site. Supported models include the MODFLOW family of models, PHAST (version 1), and SUTRA version 2.2. With MODFLOW and PHAST, the user generates a grid and uses 'objects' (points, lines, and polygons) to define boundary conditions and the spatial variation in aquifer properties. Because the objects define the spatial variation, the grid can be changed without the user needing to re-enter spatial data. The same paradigm is used with SUTRA except that the user generates a quadrilateral finite-element mesh instead of a rectangular grid. The user interacts with the model in a top view and in a vertical cross section. The cross section can be at any angle or location. There is also a three-dimensional view of the model. For SUTRA, a new method of visualizing the permeability and related properties has been introduced. In three dimensional SUTRA models, the user specifies the permeability tensor by specifying permeability in three mutually orthogonal directions that can be oriented in space in any direction. Because it is important for the user to be able to check both the magnitudes and directions of the permeabilities, ModelMuse displays the permeabilities as either a two-dimensional or a three-dimensional vector plot. Color is used to differentiate the maximum, middle, and minimum permeability vectors. The magnitude of the permeability is shown by the vector length. The vector angle shows the direction of the maximum, middle, or minimum permeability. Contour and color plots can also be used to display model input and output data.

  11. Full Stokes finite-element modeling of ice sheets using a graphics processing unit

    Science.gov (United States)

    Seddik, H.; Greve, R.

    2016-12-01

    Thermo-mechanical simulation of ice sheets is an important approach to understand and predict their evolution in a changing climate. For that purpose, higher order (e.g., ISSM, BISICLES) and full Stokes (e.g., Elmer/Ice, http://elmerice.elmerfem.org) models are increasingly used to more accurately model the flow of entire ice sheets. In parallel to this development, the rapidly improving performance and capabilities of Graphics Processing Units (GPUs) make it possible to efficiently offload more calculations of complex and computationally demanding problems onto those devices. Thus, in order to continue the trend of using full Stokes models at greater resolutions, GPUs should be considered for the implementation of ice sheet models. We developed the GPU-accelerated ice-sheet model Sainō. Sainō is an Elmer (http://www.csc.fi/english/pages/elmer) derivative implemented in Objective-C which solves the full Stokes equations with the finite element method. It uses the standard OpenCL language (http://www.khronos.org/opencl/) to offload the assembly of the finite element matrix onto the GPU. A mesh-coloring scheme is used so that elements with the same color (sharing no nodes) are assembled in parallel on the GPU without the need for synchronization primitives. The current implementation shows that, for the ISMIP-HOM experiment A, during the matrix assembly in double precision with 8000, 87,500 and 252,000 brick elements, Sainō is respectively 2x, 10x and 14x faster than Elmer/Ice (when both models are run on a single processing unit). In single precision, Sainō is even 3x, 20x and 25x faster than Elmer/Ice. A detailed description of the comparative results between Sainō and Elmer/Ice will be presented, along with further perspectives on optimization and the limitations of the current implementation.
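
    The mesh-colouring idea can be sketched in a hedged, CPU-only way (with a toy mesh and NetworkX rather than the OpenCL kernels used by Sainō): build a conflict graph in which two elements are adjacent when they share a node, colour it greedily, and assemble each colour class concurrently without synchronization.

      import networkx as nx

      # toy mesh: each element is a tuple of its node ids
      elements = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5), (6, 7, 8)]

      # conflict graph: elements sharing at least one node must not be assembled concurrently
      G = nx.Graph()
      G.add_nodes_from(range(len(elements)))
      for i in range(len(elements)):
          for j in range(i + 1, len(elements)):
              if set(elements[i]) & set(elements[j]):
                  G.add_edge(i, j)

      coloring = nx.greedy_color(G, strategy="largest_first")
      print(coloring)   # elements with the same colour share no nodes and can be assembled in parallel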

  12. Probabilistic assessment of agricultural droughts using graphical models

    Science.gov (United States)

    Ramadas, Meenu; Govindaraju, Rao S.

    2015-07-01

    Agricultural droughts are often characterized by soil moisture in the root zone of the soil, but crop needs are rarely factored into the analysis. Since water needs vary with crops, agricultural drought incidences in a region can be characterized better if crop responses to soil water deficits are also accounted for in the drought index. This study investigates agricultural droughts driven by plant stress due to soil moisture deficits using crop stress functions available in the literature. Crop water stress is assumed to begin at the soil moisture level corresponding to incipient stomatal closure, and reaches its maximum at the crop's wilting point. Using available location-specific crop acreage data, a weighted crop water stress function is computed. A new probabilistic agricultural drought index is then developed within a hidden Markov model (HMM) framework that provides model uncertainty in drought classification and accounts for time dependence between drought states. The proposed index allows probabilistic classification of the drought states and takes due cognizance of the stress experienced by the crop due to soil moisture deficit. The capabilities of HMM model formulations for assessing agricultural droughts are compared to those of current drought indices such as standardized precipitation evapotranspiration index (SPEI) and self-calibrating Palmer drought severity index (SC-PDSI). The HMM model identified critical drought events and several drought occurrences that are not detected by either SPEI or SC-PDSI, and shows promise as a tool for agricultural drought studies.
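
    As a hedged sketch of the HMM-based classification step (using the third-party hmmlearn package and synthetic data, not necessarily the authors' implementation), a Gaussian hidden Markov model can be fitted to a crop-water-stress series and used to decode probabilistic drought states.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      rng = np.random.default_rng(0)
      # synthetic weekly crop water stress index (0 = no stress, 1 = wilting point reached)
      stress = np.clip(np.concatenate([rng.normal(0.2, 0.05, 100),
                                       rng.normal(0.7, 0.10, 30),    # a dry spell
                                       rng.normal(0.3, 0.05, 70)]), 0, 1).reshape(-1, 1)

      hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=200, random_state=0)
      hmm.fit(stress)
      states = hmm.predict(stress)          # hard drought-state classification
      post = hmm.predict_proba(stress)      # probabilistic classification, as in the proposed index
      print(states[95:110])
      print(post[100])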

  13. Counterfactual Graphical Models for Longitudinal Mediation Analysis with Unobserved Confounding

    OpenAIRE

    Shpitser, Ilya

    2012-01-01

    Questions concerning mediated causal effects are of great interest in psychology, cognitive science, medicine, social science, public health, and many other disciplines. For instance, about 60% of recent papers published in leading journals in social psychology contain at least one mediation test (Rucker, Preacher, Tormala, & Petty, 2011). Standard parametric approaches to mediation analysis employ regression models, and either the "difference method" (Judd & Kenny, 1981), more common in epid...

  14. Lao Graphic Enterprise, Analysis with model metrics Scor

    OpenAIRE

    Papanicolau Denegri, Jorge Nicolás; Universidad San Juan Bautista; Evangelista Yzaguirre, Luis; Universidad Nacional Mayor de San Marcos

    2016-01-01

    This article presents the relevance of the SCOR model for analysing the current state of a company, as demonstrated in this research, and for reversing the identified deficiencies.

  15. The Composite OLAP-Object Data Model

    Energy Technology Data Exchange (ETDEWEB)

    Pourabbas, Elaheh; Shoshani, Arie

    2005-12-07

    In this paper, we define an OLAP-Object model that combines the main characteristics of OLAP and Object data models in order to achieve their functionalities in a common framework. We classify three different object classes: primitive, regular and composite. Then, we define a query language which uses the path concept in order to facilitate data navigation and data manipulation. The main feature of the proposed language is an anchor. It allows us to dynamically fix an object class (primitive, regular or composite) along the paths over the OLAP-Object data model for expressing queries. The queries can be formulated on objects, composite objects and combinations of both. The power of the proposed query language is investigated through multiple query examples. The semantics of the different clauses and the syntax of the proposed language are investigated.

  16. A visual graphic/haptic rendering model for hysteroscopic procedures.

    Science.gov (United States)

    Lim, Fabian; Brown, Ian; McColl, Ryan; Seligman, Cory; Alsaraira, Amer

    2006-03-01

    Hysteroscopy is an extensively popular option in evaluating and treating women with infertility. The procedure utilises an endoscope, inserted through the vagina and cervix to examine the intra-uterine cavity via a monitor. The difficulty of hysteroscopy from the surgeon's perspective is the visual spatial perception of interpreting 3D images on a 2D monitor, and the associated psychomotor skills in overcoming the fulcrum-effect. Despite the widespread use of this procedure, current qualified hysteroscopy surgeons have not been trained the fundamentals through an organised curriculum. The emergence of virtual reality as an educational tool for this procedure, and for other endoscopic procedures, has undoubtedly raised interests. The ultimate objective is for the inclusion of virtual reality training as a mandatory component for gynaecologic endoscopy training. Part of this process involves the design of a simulator, encompassing the technical difficulties and complications associated with the procedure. The proposed research examines fundamental hysteroscopy factors, current training and accreditation, and proposes a hysteroscopic simulator design that is suitable for educating and training.

  17. An Interactive 3D Graphics Modeler Based on Simulated Human Immune System

    Directory of Open Access Journals (Sweden)

    Hiroaki Nishino

    2008-07-01

    Full Text Available We propose an intuitive computer graphics authoring method based on interactive evolutionary computation (IEC). Our previous systems employed a genetic algorithm (GA) and mainly focused on rapid exploration of a single optimum 3D graphics model. The proposed method adopts a different computation strategy called the immune algorithm (IA) to ease the creation of varied 3D models even if a user does not have any specific idea of the final 3D product. Because artistic work like graphics design needs a process to diversify the user's imagery, a tool that allows the user to select preferred candidates from a broad range of possible design solutions is particularly desirable. IA enables the user to effectively explore a wealth of solutions in a huge 3D parametric space by using its essential mechanisms such as antibody formation and a self-regulating function. We conducted an experiment to verify the effectiveness of the proposed method. The results show that the proposed method helps the user easily generate a wide variety of 3D graphics models.
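
    As a rough illustration of the kind of immune-algorithm loop described above (not the authors' implementation), the following minimal Python sketch generates a population of candidate parameter vectors ("antibodies"), clones and mutates those closest to a user-selected design, and applies a simple self-regulating diversity filter. All function names, the parameter encoding and the thresholds are hypothetical.

```python
import random

def random_antibody(dim=8):
    """A candidate 3D-model parameter vector (hypothetical encoding)."""
    return [random.uniform(0.0, 1.0) for _ in range(dim)]

def affinity(antibody, preferred):
    """Similarity to a user-selected model: higher is better."""
    return -sum((a - b) ** 2 for a, b in zip(antibody, preferred))

def diversity_ok(antibody, population, min_dist=0.5):
    """Self-regulation: suppress antibodies too similar to existing ones."""
    return all(sum((a - b) ** 2 for a, b in zip(antibody, other)) ** 0.5 >= min_dist
               for other in population)

def immune_step(population, preferred, clones_per_antibody=3, mutation=0.1):
    """One IEC iteration: clone and mutate around the user-preferred design."""
    ranked = sorted(population, key=lambda ab: affinity(ab, preferred), reverse=True)
    new_pop = []
    for ab in ranked[: len(ranked) // 2]:
        for _ in range(clones_per_antibody):
            clone = [min(1.0, max(0.0, x + random.gauss(0, mutation))) for x in ab]
            if diversity_ok(clone, new_pop):
                new_pop.append(clone)
    while len(new_pop) < len(population):      # keep exploring the parameter space
        new_pop.append(random_antibody(len(preferred)))
    return new_pop[: len(population)]

population = [random_antibody() for _ in range(12)]
preferred = population[0]                      # stands in for the user's selection
population = immune_step(population, preferred)
```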

  18. Graphical Geometric and Learning/Optimization-Based Methods in Statistical Signal and Image Processing Object Recognition and Data Fusion

    Science.gov (United States)

    2008-03-01

    models is computationally intractable). Applications ranging from hyperspectral data analysis to multimodal fusion for object classification will... from complex signals in an unsupervised learning context. The principle we have adopted in this and in our other work in this area is that of maximizing

  19. Learning models of activities involving interacting objects

    DEFF Research Database (Denmark)

    Manfredotti, Cristina; Pedersen, Kim Steenstrup; Hamilton, Howard J.;

    2013-01-01

    We propose the LEMAIO multi-layer framework, which makes use of hierarchical abstraction to learn models for activities involving multiple interacting objects from time sequences of data concerning the individual objects. Experiments in the sea navigation domain yielded learned models that were...

  20. Learning models of activities involving interacting objects

    DEFF Research Database (Denmark)

    Manfredotti, Cristina; Pedersen, Kim Steenstrup; Hamilton, Howard J.

    2013-01-01

    We propose the LEMAIO multi-layer framework, which makes use of hierarchical abstraction to learn models for activities involving multiple interacting objects from time sequences of data concerning the individual objects. Experiments in the sea navigation domain yielded learned models that were t...

  1. Modeling And Simulation As The Basis For Hybridity In The Graphic Discipline Learning/Teaching Area

    Directory of Open Access Journals (Sweden)

    Jana Žiljak Vujić

    2009-01-01

    Full Text Available Only some fifteen years have passed since the scientific graphics discipline was established. In the transition period from the College of Graphics to «Integrated Graphic Technology Studies» to the contemporary Faculty of Graphic Arts at the University of Zagreb, three main periods of development can be noted: digital printing, computer prepress and automatic procedures in postpress packaging production. Computer technology has enabled a change in the methodology of teaching graphics technology and studying it at the level of secondary and higher education. The task has been set to create tools for simulating printing processes in order to master the programme through a hybrid system consisting of methods that are separate from one another: learning with the help of digital models and checking in the actual real system. We are setting up a hybrid project for teaching because the overall acquired knowledge is the result of completely different methods. The first method works on the level of free programs, functioning without consequences. Everything remains as a record in the knowledge database that can be analyzed, statistically processed and repeated with new parameter values of the system being researched. The second method uses the actual real system, where the results prove the value of the new knowledge, and this is something that encourages and stimulates new cycles of hybrid behavior in mastering programs. This is the area where individual learning occurs. The hybrid method allows the possibility of studying actual situations on a computer model, proving them on an actual real model and entering an area of learning that envisages future development.

  2. Modeling and Simulation as the Basis for Hybridity in the Graphic Discipline Learning/Teaching Area

    Directory of Open Access Journals (Sweden)

    Vilko Ziljak

    2009-11-01

    Full Text Available Only some fifteen years have passed since the scientific graphics discipline was established. In the transition period from the College of Graphics to «Integrated Graphic Technology Studies» to the contemporary Faculty of Graphic Arts at the University of Zagreb, three main periods of development can be noted: digital printing, computer prepress and automatic procedures in postpress packaging production. Computer technology has enabled a change in the methodology of teaching graphics technology and studying it at the level of secondary and higher education. The task has been set to create tools for simulating printing processes in order to master the programme through a hybrid system consisting of methods that are separate from one another: learning with the help of digital models and checking in the actual real system. We are setting up a hybrid project for teaching because the overall acquired knowledge is the result of completely different methods. The first method works on the level of free programs, functioning without consequences. Everything remains as a record in the knowledge database that can be analyzed, statistically processed and repeated with new parameter values of the system being researched. The second method uses the actual real system, where the results prove the value of the new knowledge, and this is something that encourages and stimulates new cycles of hybrid behavior in mastering programs. This is the area where individual learning occurs. The hybrid method allows the possibility of studying actual situations on a computer model, proving them on an actual real model and entering an area of learning that envisages future development.

  3. Experiments with a low-cost system for computer graphics material model acquisition

    Science.gov (United States)

    Rushmeier, Holly; Lockerman, Yitzhak; Cartwright, Luke; Pitera, David

    2015-03-01

    We consider the design of an inexpensive system for acquiring material models for computer graphics rendering applications in animation, games and conceptual design. To be useful in these applications a system must be able to model a rich range of appearances in a computationally tractable form. The range of appearance of interest in computer graphics includes materials that have spatially varying properties, directionality, small-scale geometric structure, and subsurface scattering. To be computationally tractable, material models for graphics must be compact, editable, and efficient to numerically evaluate for ray tracing importance sampling. To construct appropriate models for a range of interesting materials, we take the approach of separating out directly and indirectly scattered light using high spatial frequency patterns introduced by Nayar et al. in 2006. To acquire the data at low cost, we use a set of Raspberry Pi computers and cameras clamped to miniature projectors. We explore techniques to separate out surface and subsurface indirect lighting. This separation would allow the fitting of simple, and so tractable, analytical models to features of the appearance model. The goal of the system is to provide models for physically accurate renderings that are visually equivalent to viewing the original physical materials.
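
    The direct/indirect separation the authors build on (Nayar et al., 2006) has a simple per-pixel form when the scene is lit by shifted high-frequency patterns in which roughly half the pixels are illuminated: the direct component is approximately the per-pixel maximum minus the minimum over the pattern stack, and the global (indirect) component is roughly twice the minimum. A minimal NumPy sketch, assuming a stack of registered grayscale captures, not the authors' pipeline:

```python
import numpy as np

def separate_direct_global(images):
    """Per-pixel direct/global separation from captures taken under shifted
    high-frequency illumination patterns (~50% of pixels lit per pattern).

    images: array of shape (num_patterns, height, width), linear intensities.
    Returns (direct, global_) images of shape (height, width).
    """
    stack = np.asarray(images, dtype=np.float64)
    l_max = stack.max(axis=0)     # pixel directly lit in at least one pattern
    l_min = stack.min(axis=0)     # pixel receives only indirect light
    direct = l_max - l_min
    global_ = 2.0 * l_min         # assumes a lit fraction of ~1/2 per pattern
    return direct, global_

# Example with synthetic data: 16 patterns over a small test "scene"
captures = np.random.rand(16, 4, 4)
direct, indirect = separate_direct_global(captures)
```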

  4. Learning models of activities involving interacting objects

    DEFF Research Database (Denmark)

    Manfredotti, Cristina; Pedersen, Kim Steenstrup; Hamilton, Howard J.;

    2013-01-01

    We propose the LEMAIO multi-layer framework, which makes use of hierarchical abstraction to learn models for activities involving multiple interacting objects from time sequences of data concerning the individual objects. Experiments in the sea navigation domain yielded learned models that were...... then successfully applied to activity recognition, activity simulation and multi-target tracking. Our method compares favourably with respect to previously reported results using Hidden Markov Models and Relational Particle Filtering....

  5. Object-oriented Modular Model Library for Distillation

    Institute of Scientific and Technical Information of China (English)

    CHEN Chang; DING Jianwan; CHEN Liping

    2013-01-01

    For modeling and simulation of distillation processes, there are many special-purpose simulators along with their model libraries, such as Aspen Plus and HYSYS. However, the models in these tools lack flexibility and are not open to the end user. Models developed in one tool cannot be conveniently used in others because of the barriers among these simulators. In order to solve those problems, a flexible and extensible distillation system model library is constructed in this study, based on Modelica and the Modelica-supported platform MWorks, using object-oriented technology and a level-progressive modeling strategy. It supports the reuse of knowledge at different granularities: physical phenomenon, unit model and system model. It is also an interface-friendly, accurate, fast, PC-based and easily reusable simulation tool, which enables the end user to customize and extend the framework to add new functionality or adapt the simulation behavior as required. It also allows new models to be composed programmatically or graphically to form more complex models by invoking the existing components. A conventional air distillation column model is built and calculated using the library, and the results agree well with those simulated in Aspen Plus.

  6. Probabilistic object and viewpoint models for active object recognition

    CSIR Research Space (South Africa)

    Govender, N

    2013-09-01

    Full Text Available For mobile robots to perform certain tasks in human environments, fast and accurate object verification and recognition is essential. Bayesian approaches to active object recognition have proved effective in a number of cases, allowing information...

  7. Extending Model Checking to Object Process Validation

    NARCIS (Netherlands)

    Rein, van H.

    2002-01-01

    Object-oriented techniques allow the gathering and modelling of system requirements in terms of an application area. The expression of data and process models at that level is a great asset in communication with non-technical people in that area, but it does not necessarily lead to consistent models

  8. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  9. A Jet Model of HH Objects

    Science.gov (United States)

    Tenorio-Tagle, Guillermo; Rozyczka, Michal; Cantó, Jorge

    It is shown by means of two-dimensional hydrodynamical simulations that Cantó's model of HH objects does not lead to an entirely stationary flow. It produces both a stationary cylindrical shock (at the interaction region) and a fast-moving region emitting an HH spectrum. A coherent picture of these phenomena is presented here. The models account for "optical jet" structures embedded in a much larger (>0.1 pc) hydrodynamical jet, at the tip of which an HH object should be found.

  10. Java-based Object-oriented Interactive Graphics Toolkit

    Institute of Scientific and Technical Information of China (English)

    刘庆芳; 华庆一; 李光俊; 芦宏亮; 蔚娣

    2009-01-01

    In order to effectively create, display and manage the direct-manipulation graphic objects that represent application data during the development of interactive graphics applications, a Java-based object-oriented interactive graphics toolkit called JOOIGT is designed. The toolkit uses object-oriented techniques and the Java foundation classes to provide a variety of graphic objects and a graphical interface framework, so that developers only need to define objects without considering the details of drawing, which simplifies the development process. User interfaces built with JOOIGT can run not only in local window applications but also in Web applications or embedded applications.

  11. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    Science.gov (United States)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
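
    As a toy illustration of the kind of connection the tutorial draws between Bayes's theorem and familiar calculations (not code from the talk itself), the sketch below computes a posterior for a Poisson rate on a simple grid, assuming made-up count data and a flat prior:

```python
import numpy as np

# Observed photon counts in equal-length time bins (made-up data)
counts = np.array([3, 5, 4, 6, 2])

# Grid of candidate rates and a flat prior over the grid
rates = np.linspace(0.1, 15.0, 500)
log_prior = np.zeros_like(rates)

# Poisson log-likelihood summed over bins (dropping the constant log(k!) terms)
log_like = counts.sum() * np.log(rates) - len(counts) * rates

# Bayes's theorem on the grid, normalized so the posterior sums to one
log_post = log_prior + log_like
post = np.exp(log_post - log_post.max())
post /= post.sum()

mean_rate = np.sum(rates * post)     # posterior mean of the rate
print(f"posterior mean rate: {mean_rate:.2f} counts per bin")
```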

  12. Counterfactual Graphical Models for Mediation Analysis via Path-Specific Effects

    CERN Document Server

    Shpitser, Ilya

    2012-01-01

    Potential outcome counterfactuals represent variation in the outcome of interest after a hypothetical treatment or intervention is performed. Causal graphical models are a concise, intuitive way of representing causal assumptions, including independence constraints among such counterfactuals. Much of modern causal inference is concerned with expressing cause-effect relationships of interest in counterfactual form, showing how the resulting counterfactuals can be identified (that is, expressed in terms of available data, using domain-specific causal assumptions), and subsequently estimated using statistical methods. In this paper we will use causal graphical models to analyze the identification problem of the so-called path-specific effects, that is, effects of treatment on outcome along certain specified causal paths. Such effects arise in mediation analysis settings where it is important to distinguish direct and indirect effects of treatment. We review existing results on path-specific effects in the fu...

  13. Finding Non-overlapping Clusters for Generalized Inference Over Graphical Models

    CERN Document Server

    Vats, Divyanshu

    2011-01-01

    Graphical models compactly capture stochastic dependencies amongst a collection of random variables using a graph. Inference over graphical models corresponds to finding marginal probability distributions given joint probability distributions. Several inference algorithms rely on iterative message passing between nodes. Although these algorithms can be generalized so that the message passing occurs between clusters of nodes, there are limited frameworks for finding such clusters. Moreover, current frameworks rely on finding clusters that are overlapping. This increases the computational complexity of finding clusters, since the edges over a graph with overlapping clusters must be chosen carefully to avoid inconsistencies in the marginal distribution computations. In this paper, we propose a framework for finding clusters in a graph for generalized inference so that the clusters are non-overlapping. Given an undirected graph, we first derive a linear time algorithm for constructing a block-tree, a tree-s...

  14. A probabilistic graphical model approach in 30 m land cover mapping with multiple data sources

    OpenAIRE

    Wang, Jie; Ji, Luyan; Huang, Xiaomeng; Fu, Haohuan; Xu, Shiming; Li, Congcong

    2016-01-01

    There is a trend to acquire high accuracy land-cover maps using multi-source classification methods, most of which are based on data fusion, especially pixel- or feature-level fusions. A probabilistic graphical model (PGM) approach is proposed in this research for 30 m resolution land-cover mapping with multi-temporal Landsat and MODerate Resolution Imaging Spectroradiometer (MODIS) data. Independent classifiers were applied to two single-date Landsat 8 scenes and the MODIS time-series data, ...

  15. Graphical Models for Recovering Probabilistic and Causal Queries from Missing Data

    Science.gov (United States)

    2014-11-01

    ...as well as causal queries of the form P(y|do(x)). We show that causal queries may be recoverable even when the factors in their identifying estimands are not... (Karthika Mohan and Judea Pearl, Cognitive Systems Laboratory)

  16. MODELING FUZZY GEOGRAPHIC OBJECTS WITHIN FUZZY FIELDS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    To improve the current GIS functions in describing geographic objects with fuzziness, this paper begins with a discussion of the distance measure of spatial objects based on the theory of sets and an introduction of dilation and erosion operators. Under the assumption that changes of attributes in a geographic region are gradual, the analytic expressions for the fuzzy objects of points, lines and areas, and the description of their formal structures, are presented. The analytic model of geographic objects by means of fuzzy fields is developed. We have shown that the 9-intersection model proposed by Egenhofer and Franzosa (1991) is a special case of the model presented in the paper.

  17. Learned graphical models for probabilistic planning provide a new class of movement primitives.

    Science.gov (United States)

    Rückert, Elmar A; Neumann, Gerhard; Toussaint, Marc; Maass, Wolfgang

    2012-01-01

    Biological movement generation combines three interesting aspects: its modular organization in movement primitives (MPs), its characteristics of stochastic optimality under perturbations, and its efficiency in terms of learning. A common approach to motor skill learning is to endow the primitives with dynamical systems. Here, the parameters of the primitive indirectly define the shape of a reference trajectory. We propose an alternative MP representation based on probabilistic inference in learned graphical models with new and interesting properties that comply with salient features of biological movement control. Instead of endowing the primitives with dynamical systems, we propose to endow MPs with an intrinsic probabilistic planning system, integrating the power of stochastic optimal control (SOC) methods within an MP. The parameterization of the primitive is a graphical model that represents the dynamics and intrinsic cost function such that inference in this graphical model yields the control policy. We parameterize the intrinsic cost function using task-relevant features, such as the importance of passing through certain via-points. The system dynamics as well as intrinsic cost function parameters are learned in a reinforcement learning (RL) setting. We evaluate our approach on a complex 4-link balancing task. Our experiments show that our movement representation facilitates learning significantly and leads to better generalization to new task settings without re-learning.

  18. Sculpting proteins interactively: continual energy minimization embedded in a graphical modeling system.

    Science.gov (United States)

    Surles, M C; Richardson, J S; Richardson, D C; Brooks, F P

    1994-02-01

    We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and
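
    A drastically simplified sketch of the constrained-minimization idea described above (rigid constraints handled by an augmented Lagrange-multiplier method while soft energy terms are minimized), applied here to a toy two-atom system with one fixed bond length; the potential, parameters and numerical gradient are illustrative placeholders, not Sculpt's force field or solver:

```python
import numpy as np

def energy(x):
    """Toy 'soft' potential pulling both atoms toward the origin."""
    return 0.5 * np.sum(x ** 2)

def constraint(x, bond_length=1.5):
    """Rigid constraint: distance between atom 0 and atom 1 equals bond_length."""
    return np.linalg.norm(x[0] - x[1]) - bond_length

def augmented_lagrangian_step(x, lam, mu, lr=0.01, inner_iters=200):
    """Minimize E(x) + lam*c(x) + (mu/2)*c(x)^2 by simple gradient descent."""
    eps = 1e-6
    for _ in range(inner_iters):
        grad = np.zeros_like(x)
        for idx in np.ndindex(x.shape):      # numerical gradient (toy-sized problem)
            xp = x.copy(); xp[idx] += eps
            xm = x.copy(); xm[idx] -= eps
            fp = energy(xp) + lam * constraint(xp) + 0.5 * mu * constraint(xp) ** 2
            fm = energy(xm) + lam * constraint(xm) + 0.5 * mu * constraint(xm) ** 2
            grad[idx] = (fp - fm) / (2 * eps)
        x = x - lr * grad
    return x

x = np.array([[0.0, 0.0, 0.0], [2.5, 0.0, 0.0]])   # initial atom positions
lam, mu = 0.0, 10.0
for _ in range(20):                                 # outer multiplier updates
    x = augmented_lagrangian_step(x, lam, mu)
    lam += mu * constraint(x)                       # standard multiplier update

print("final bond length:", np.linalg.norm(x[0] - x[1]))
```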

  19. Building Mathematical Models Of Solid Objects

    Science.gov (United States)

    Randall, Donald P.; Jones, Kennie H.; Von Ofenheim, William H.; Gates, Raymond L.; Matthews, Christine G.

    1989-01-01

    Solid Modeling Program (SMP) version 2.0 provides capability to model complex solid objects mathematically through aggregation of geometric primitives (parts). System provides designer with basic set of primitive parts and capability to define new primitives. Six primitives included in present version: boxes, cones, spheres, paraboloids, tori, and trusses. Written in VAX/VMS FORTRAN 77.

  20. The PC graphics handbook

    CERN Document Server

    Sanchez, Julio

    2003-01-01

    Part I - Graphics Fundamentals. PC GRAPHICS OVERVIEW: History and Evolution; Short History of PC Video; PS/2 Video Systems; SuperVGA; Graphics Coprocessors and Accelerators; Graphics Applications; State-of-the-Art in PC Graphics; 3D Application Programming Interfaces. POLYGONAL MODELING: Vector and Raster Data; Coordinate Systems; Modeling with Polygons. IMAGE TRANSFORMATIONS: Matrix-based Representations; Matrix Arithmetic; 3D Transformations. PROGRAMMING MATRIX TRANSFORMATIONS: Numeric Data in Matrix Form; Array Processing. PROJECTIONS AND RENDERING: Perspective; The Rendering Pipeline. LIGHTING AND SHADING: Lightin

  1. The Effectiveness of an Interactive 3-Dimensional Computer Graphics Model for Medical Education

    Science.gov (United States)

    Konishi, Takeshi; Tamura, Yoko; Moriguchi, Hiroki

    2012-01-01

    Background Medical students often have difficulty achieving a conceptual understanding of 3-dimensional (3D) anatomy, such as bone alignment, muscles, and complex movements, from 2-dimensional (2D) images. To this end, animated and interactive 3-dimensional computer graphics (3DCG) can provide better visual information to users. In medical fields, research on the advantages of 3DCG in medical education is relatively new. Objective To determine the educational effectiveness of interactive 3DCG. Methods We divided 100 participants (27 men, mean (SD) age 17.9 (0.6) years, and 73 women, mean (SD) age 18.1 (1.1) years) from the Health Sciences University of Mongolia (HSUM) into 3DCG (n = 50) and textbook-only (control) (n = 50) groups. The control group used a textbook and 2D images, while the 3DCG group was trained to use the interactive 3DCG shoulder model in addition to a textbook. We conducted a questionnaire survey via an encrypted satellite network between HSUM and Tokushima University. The questionnaire was scored on a 5-point Likert scale from strongly disagree (score 1) to strongly agree (score 5). Results Interactive 3DCG was effective in undergraduate medical education. Specifically, there was a significant difference in mean (SD) scores between the 3DCG and control groups in their response to questionnaire items regarding content (4.26 (0.69) vs 3.85 (0.68), P = .001) and teaching methods (4.33 (0.65) vs 3.74 (0.79), P < .001), but no significant difference in the Web category. Participants also provided meaningful comments on the advantages of interactive 3DCG. Conclusions Interactive 3DCG materials have positive effects on medical education when properly integrated into conventional education. In particular, our results suggest that interactive 3DCG is more efficient than textbooks alone in medical education and can motivate students to understand complex anatomical structures. PMID:23611759

  2. Object tracking using active appearance models

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2001-01-01

    This paper demonstrates that (near) real-time object tracking can be accomplished by the deformable template model; the Active Appearance Model (AAM) using only low-cost consumer electronics such as a PC and a web-camera. Successful object tracking of perspective, rotational and translational...... transformations was carried out using a training set of five images. The tracker was automatically initialised by a described multi-scale initialisation method and achieved a performance in the range of 7-10 frames per second....

  3. DIFFUSION BACKGROUND MODEL FOR MOVING OBJECTS DETECTION

    Directory of Open Access Journals (Sweden)

    B. V. Vishnyakov

    2015-05-01

    Full Text Available In this paper, we propose a new approach for moving object detection in video surveillance systems. It is based on construction of regression diffusion maps for the image sequence. This approach is completely different from the state-of-the-art approaches. We show that the motion analysis method, based on diffusion maps, allows objects that move with different speeds, or even stop for a short while, to be uniformly detected. We show that the proposed model is comparable to the most popular modern background models. We also show several ways of speeding up the diffusion maps algorithm itself.

  4. A Module for Graphical Display of Model Results with the CBP Toolbox

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-04-21

    This report describes work performed by the Savannah River National Laboratory (SRNL) in fiscal year 2014 to add enhanced graphical capabilities to display model results in the Cementitious Barriers Project (CBP) Toolbox. Because Version 2.0 of the CBP Toolbox has just been released, the graphing enhancements described in this report have not yet been integrated into a new version of the Toolbox. Instead they have been tested using a standalone GoldSim model and, while they are substantially complete, may undergo further refinement before full implementation. Nevertheless, this report is issued to document the FY14 development efforts which will provide a basis for further development of the CBP Toolbox.

  5. Resurfacing Graphics

    Directory of Open Access Journals (Sweden)

    Prof. Patty K. Wongpakdee

    2013-06-01

    Full Text Available “Resurfacing Graphics” deals with the subject of unconventional design, with the purpose of engaging the viewer to experience graphics beyond paper’s passive surface. Unconventional designs serve to reinvigorate people whose senses are dulled by the typical printed graphics that bombard them each day. Today’s cutting-edge designers, illustrators and artists utilize graphics in a unique manner that allows for tactile interaction. Such works serve as valuable teaching models and encourage students to do the following: (1) investigate the trans-disciplines of art and technology; (2) appreciate that this approach can have a positive effect on the environment; (3) examine and research other approaches to design communications; and (4) utilize new mediums to stretch the boundaries of artistic endeavor. This paper examines how visual communicators are “Resurfacing Graphics” by using atypical surfaces and materials such as textile, wood, ceramics and even water. Such non-traditional transmissions of visual language serve to demonstrate students’ overreliance on paper as an outdated medium. With this exposure, students can become forward-thinking, eco-friendly, creative leaders by expanding their creative breadth and continuing the perpetual exploration for new ways to make their mark.

  6. Resurfacing Graphics

    Directory of Open Access Journals (Sweden)

    Prof. Patty K. Wongpakdee

    2013-06-01

    Full Text Available “Resurfacing Graphics” deals with the subject of unconventional design, with the purpose of engaging the viewer to experience graphics beyond paper’s passive surface. Unconventional designs serve to reinvigorate people whose senses are dulled by the typical printed graphics that bombard them each day. Today’s cutting-edge designers, illustrators and artists utilize graphics in a unique manner that allows for tactile interaction. Such works serve as valuable teaching models and encourage students to do the following: (1) investigate the trans-disciplines of art and technology; (2) appreciate that this approach can have a positive effect on the environment; (3) examine and research other approaches to design communications; and (4) utilize new mediums to stretch the boundaries of artistic endeavor. This paper examines how visual communicators are “Resurfacing Graphics” by using atypical surfaces and materials such as textile, wood, ceramics and even water. Such non-traditional transmissions of visual language serve to demonstrate students’ overreliance on paper as an outdated medium. With this exposure, students can become forward-thinking, eco-friendly, creative leaders by expanding their creative breadth and continuing the perpetual exploration for new ways to make their mark.

  7. The internal/external issue what is an outer object? Another person as object and as separate other in object relations models.

    Science.gov (United States)

    Zachrisson, Anders

    2013-01-01

    The question of what we mean by the term outer object has its roots in the epistemological foundation of psychoanalysis. From the very beginning, Freud's view was Kantian, and psychoanalysis has kept that stance, as it seems. The author reviews the internal/external issue in Freud's thinking and in the central object relations theories (Klein, Winnicott, and Bion). On this background he proposes a simple model to differentiate the concept of object along one central dimension: internal object, external object, and actual person. The main arguments are: (1) there is no direct, unmediated perception of the actual person--the experience of the other is always affected by the perceiver's subjectivity; (2) in intense transference reactions and projections, the perception of the person is dominated by the qualities of an inner object--and the other person "becomes" an external object for the perceiver; (3) when this distortion is less dominating, the other person to a higher degree remains a separate other--a person in his or her own right. Clinical material illustrates these phenomena, and a graphical picture of the model is presented. Finally with the model as background, the author comments on a selection of phenomena and concepts such as unobjectionable transference, "the third position," mourning and loneliness. The way that the internal colours and distorts the external is of course a central preoccupation of psychoanalysis generally. (Spillius et al., 2011, p. 326)

  8. A graphical user interface for numerical modeling of acclimation responses of vegetation to climate change

    Science.gov (United States)

    Le, Phong V. V.; Kumar, Praveen; Drewry, Darren T.; Quijano, Juan C.

    2012-12-01

    Ecophysiological models that vertically resolve vegetation canopy states are becoming a powerful tool for studying the exchange of mass, energy, and momentum between the land surface and the atmosphere. A mechanistic multilayer canopy-soil-root system model (MLCan) developed by Drewry et al. (2010a) has been used to capture the emergent vegetation responses to elevated atmospheric CO2 for both C3 and C4 plants under various climate conditions. However, processing input data and setting up such a model can be time-consuming and error-prone. In this paper, a graphical user interface that has been developed for MLCan is presented. The design of this interface aims to provide visualization capabilities and interactive support for processing input meteorological forcing data and vegetation parameter values to facilitate the use of this model. In addition, the interface also provides graphical tools for analyzing the forcing data and simulated numerical results. The model and its interface are both written in the MATLAB programming language. Finally, an application of this model package for capturing the ecohydrological responses of three bioenergy crops (maize, miscanthus, and switchgrass) to local environmental drivers at two different sites in the Midwestern United States is presented.

  9. Object Oriented Modelling and Dynamical Simulation

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1998-01-01

    This report, with an appendix, describes the work done in a master's project at DTU. The goal of the project was to develop a concept for simulation of dynamical systems based on object-oriented methods. The result was a library of C++ classes, for use both when building component-based models and when...

  10. Object Oriented Modelling and Dynamical Simulation

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1998-01-01

    This report, with an appendix, describes the work done in a master's project at DTU. The goal of the project was to develop a concept for simulation of dynamical systems based on object-oriented methods. The result was a library of C++ classes, for use both when building component-based models and when conducting simulation experiments.

  11. A Development Method for Multiagent Simulators Using a Graphical Model Editor

    Science.gov (United States)

    Murakami, Masatoshi; Maruo, Tomoaki; Matsumoto, Keinosuke; Mori, Naoki

    A multiagent simulator (MAS) has attracted attention in recent years as an approach to analyzing social phenomena and complex systems. In addition, many frameworks for developing MAS have been proposed. These frameworks reduce the amount of development work, but the models required to build simulators must still be written from scratch, which becomes a burden for developers. Such models also tend to be specialized to a particular framework and lack reusability. To solve these problems, this paper proposes a graphical model editor that can build models diagrammatically, and a simulator development method using the editor. By saving models in a general-purpose form, they become applicable to various frameworks. Numerical experiments show that the proposed method is effective for MAS development.

  12. A MODEL FOR AERODYNAMICAL DATA STRUCTURE; GRAPHICAL INTERFACE AND USER’S FACILITIES

    Directory of Open Access Journals (Sweden)

    Nicolae APOSTOLESCU

    2010-03-01

    Full Text Available This model defines the structure and applicability of complex aerodynamic data (up to four dimensions), excluding flap influence, which is considered separately, in steady-state calculations for different configurations and optional cases (ground effect, asymmetrical propulsion with one engine out) as well as in dynamic simulations. The user is offered many facilities for data entry, correction and graphical viewing. The point values of the dimensional parameters for each coefficient are automatically checked for strictly increasing or decreasing ordering.

  13. [Systematization and hygienic standardization of environmental factors on the basis of common graphic models].

    Science.gov (United States)

    Galkin, A A

    2012-01-01

    On the basis of graphic models of the human response to environmental factors, two main types of complex quantitative influence were revealed, as well as the interrelation between deterministic effects at the level of the individual and stochastic effects at the level of the population. It is suggested that two main kinds of factors be distinguished: essential factors and accidental factors. Essential factors are common to the environment; accidental factors are foreign to it. The two kinds differ in their approaches to hygienic standardization: accidental factors need a dot-like approach, whereas a two-level range approach is suitable for the essential factors.

  14. Enhanced Approximated SURF Model For Object Recognition

    Directory of Open Access Journals (Sweden)

    S. Sangeetha

    2014-02-01

    Full Text Available Computer vision applications like camera calibration, 3D reconstruction, object recognition and image registration are becoming widely popular nowadays. In this paper an enhanced model for speeded-up robust features (SURF) is proposed, by which the object recognition process becomes three times faster than the common SURF model. The main idea is to use efficient data structures for both the detector and the descriptor. The detection of interest regions is considerably sped up by using an integral image for scale space computation. The descriptor, which is based on orientation histograms, is accelerated by the use of an integral orientation histogram. We present an analysis of the computational costs, comparing both parts of our approach to the conventional method. Extensive experiments show a speed-up by a factor of eight while the matching and repeatability performance is decreased only slightly.
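
    The key data structure mentioned above, the integral image, lets any axis-aligned box sum be evaluated in constant time, which is what makes the box-filter approximations in SURF-style detectors fast. A minimal NumPy sketch (a generic illustration, not the paper's code):

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns, padded with a zero row/column."""
    ii = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

def box_sum(ii, top, left, height, width):
    """Sum of img[top:top+height, left:left+width] using only 4 lookups."""
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

img = np.arange(25, dtype=float).reshape(5, 5)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 2) == img[1:4, 1:3].sum()
```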

  15. Modeling business objects with XML schema

    CERN Document Server

    Daum, Berthold

    2003-01-01

    XML Schema is the new language standard from the W3C and the new foundation for defining data in Web-based systems. There is a wealth of information available about Schemas but very little understanding of how to use this highly formal specification for creating documents. Grasping the power of Schemas means going back to the basics of documents themselves, and the semantic rules, or grammars, that define them. Written for schema designers, system architects, programmers, and document authors, Modeling Business Objects with XML Schema guides you through understanding Schemas from the basic concepts, type systems, type derivation, inheritance, and namespace handling, through advanced concepts in schema design. The book reviews basic XML syntax and the Schema recommendation in detail; builds a knowledge base model step by step (about jazz music) that is used throughout the book; and discusses Schema design in large environments, best-practice design patterns, and Schema's relation to object-oriented concepts.

  16. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    Science.gov (United States)

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs.
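
    As a schematic of the parameter-fitting loop the authors accelerate (written in plain Python rather than CUDA, with a made-up two-parameter channel model), the sketch below runs a small genetic algorithm against a synthetic voltage-clamp trace. The model, data and settings are illustrative only, not the authors' kinetic scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_current(params, t):
    """Hypothetical activating current: I(t) = g * (1 - exp(-t / tau))."""
    g, tau = params
    return g * (1.0 - np.exp(-t / tau))

# Synthetic "voltage-clamp" target trace (made-up ground truth g=2.0, tau=8.0)
t = np.linspace(0.0, 50.0, 200)
target = simulate_current((2.0, 8.0), t) + 0.05 * rng.standard_normal(t.size)

def fitness(params):
    return -np.mean((simulate_current(params, t) - target) ** 2)

pop = rng.uniform([0.1, 1.0], [5.0, 20.0], size=(64, 2))      # (g, tau) candidates
for generation in range(100):
    scores = np.array([fitness(p) for p in pop])               # evaluate population
    parents = pop[np.argsort(scores)[-16:]]                    # keep the best quarter
    children = parents[rng.integers(0, 16, size=48)]           # resample parents
    children += rng.normal(0.0, 0.1, size=children.shape)      # mutate
    children = np.clip(children, [0.05, 0.5], [10.0, 50.0])    # keep parameters physical
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated (g, tau):", best)
```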

  17. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks.

    Science.gov (United States)

    Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei

    2016-02-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al. (2015). Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently available implementation of Ren et al.'s procedure without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical models and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM".
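
    FastGGM itself is distributed as an R package; as a rough Python illustration of the general task it addresses (estimating the conditional-dependence graph of a Gaussian graphical model from data, here via a penalized estimator rather than the inference procedure of Ren et al.), one might do something like the following on synthetic data:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

# Synthetic data standing in for, e.g., gene expression across samples
rng = np.random.default_rng(1)
n_samples, n_genes = 200, 15
X = rng.standard_normal((n_samples, n_genes))
X[:, 1] += 0.8 * X[:, 0]               # induce one conditional dependency

model = GraphicalLassoCV().fit(X)       # cross-validated sparsity level
precision = model.precision_            # nonzero off-diagonals ~ graph edges

edges = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
         if abs(precision[i, j]) > 1e-6]
print("estimated edges:", edges)
```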

  18. Dynamic object-oriented geospatial modeling

    Directory of Open Access Journals (Sweden)

    Tomáš Richta

    2010-02-01

    Full Text Available Published literature about moving objects (MO) simplifies the problem to the representation and storage of moving points, moving lines, or moving regions. The main insufficiency of this approach is the lack of modeling of MO inner structure and dynamics – the autonomy of the moving agent. This paper describes the basics of an object-oriented geospatial methodology for modeling complex systems consisting of agents that move within a spatial environment. The main idea is that during an agent's movement, different kinds of connections with other moving or stationary objects are established or disposed of, based on the satisfaction or non-fulfilment of some spatial constraint. The methodology is constructed with regard to the following two main conditions: (1) the inner behavior of agents should be representable by any formalism, e.g. a Petri net, a finite state machine, etc., and (2) the spatial characteristics of the environment should be supplied by any information system that is able to store a defined set of spatial types and support a defined set of spatial operations. Finally, the methodology is demonstrated on a simple simulation model of a tram transportation system.

  19. Modified graphical autocatalytic set model of combustion process in circulating fluidized bed boiler

    Science.gov (United States)

    Yusof, Nurul Syazwani; Bakar, Sumarni Abu; Ismail, Razidah

    2014-07-01

    A Circulating Fluidized Bed Boiler (CFB) is a device for generating steam by burning fossil fuels in a furnace operating under a special hydrodynamic condition. An autocatalytic set has previously provided a graphical model of the chemical reactions that occur during the combustion process in a CFB. Eight important chemical substances, known as species, were represented as nodes, and catalytic relationships between nodes were represented by the edges of the graph. In this paper, the model is extended and modified by considering other relevant chemical reactions that also exist during the process. The catalytic relationships among the species in the model are discussed. The result reveals that the modified model is able to give a fuller explanation of the relationships among the species during the process at initial time t.
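
    A toy sketch of the graph representation described above (species as nodes, catalytic relationships as directed edges), using a generic graph library; the species names and edges listed here are placeholders rather than the paper's actual reaction set:

```python
import networkx as nx

G = nx.DiGraph()
# Hypothetical combustion species standing in for the eight in the model
species = ["volatiles", "char", "O2", "CO", "CO2", "H2O", "H2", "N2"]
G.add_nodes_from(species)

# An edge u -> v means "u catalyses (supports) the production of v"
G.add_edges_from([
    ("volatiles", "CO"), ("O2", "CO"), ("CO", "CO2"),
    ("O2", "CO2"), ("char", "CO"), ("H2", "H2O"),
])

# Species with no incoming edges must be supplied from outside the set
print("non-catalysed inputs:", [s for s in species if G.in_degree(s) == 0])
```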

  20. Utero-fetal unit and pregnant woman modeling using a computer graphics approach for dosimetry studies.

    Science.gov (United States)

    Anquez, Jérémie; Boubekeur, Tamy; Bibin, Lazar; Angelini, Elsa; Bloch, Isabelle

    2009-01-01

    Potential sanitary effects related to electromagnetic fields exposure raise public concerns, especially for fetuses during pregnancy. Human fetus exposure can only be assessed through simulated dosimetry studies, performed on anthropomorphic models of pregnant women. In this paper, we propose a new methodology to generate a set of detailed utero-fetal unit (UFU) 3D models during the first and third trimesters of pregnancy, based on segmented 3D ultrasound and MRI data. UFU models are built using recent geometry processing methods derived from mesh-based computer graphics techniques and embedded in a synthetic woman body. Nine pregnant woman models have been generated using this approach and validated by obstetricians, for anatomical accuracy and representativeness.

  1. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    Directory of Open Access Journals (Sweden)

    Stålring Jonna C

    2011-07-01

    Full Text Available Abstract Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the

  2. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to applying the methodology to an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales. The sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the amount of computing resources required for the calibration of an NWP model. Three free model parameters, mainly affecting turbulence parameterization schemes, were originally selected with respect to their influence on the variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
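
    A bare-bones sketch of the meta-model idea (not the authors' implementation): sample the free parameters, score the model against observations, fit a quadratic response surface, and take its minimizer as the calibrated setting. The scoring function, parameter ranges and names below are placeholders for the actual NWP runs:

```python
import numpy as np

def forecast_error(params):
    """Placeholder for running the NWP model and scoring it against observations."""
    a, b = params
    return (a - 0.3) ** 2 + 0.5 * (b - 1.2) ** 2 + 0.2 * a * b

# Design points in a 2-parameter space (e.g. turbulence scheme constants)
samples = np.random.uniform([-1.0, 0.0], [1.0, 2.0], size=(40, 2))
errors = np.array([forecast_error(p) for p in samples])

# Fit a quadratic meta-model: e ~ c0 + c1*a + c2*b + c3*a^2 + c4*b^2 + c5*a*b
a, b = samples[:, 0], samples[:, 1]
design = np.column_stack([np.ones_like(a), a, b, a**2, b**2, a * b])
coeffs, *_ = np.linalg.lstsq(design, errors, rcond=None)

# Minimize the fitted surface on a grid instead of re-running the full model
grid_a, grid_b = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(0, 2, 201))
surface = (coeffs[0] + coeffs[1] * grid_a + coeffs[2] * grid_b
           + coeffs[3] * grid_a**2 + coeffs[4] * grid_b**2 + coeffs[5] * grid_a * grid_b)
idx = np.unravel_index(np.argmin(surface), surface.shape)
print("calibrated parameters:", grid_a[idx], grid_b[idx])
```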

  3. Learning object models from few examples

    Science.gov (United States)

    Misra, Ishan; Wang, Yuxiong; Hebert, Martial

    2016-05-01

    Current computer vision systems rely primarily on fixed models learned in a supervised fashion, i.e., with extensive manually labelled data. This is appropriate in scenarios in which the information about all the possible visual queries can be anticipated in advance, but it does not scale to scenarios in which new objects need to be added during the operation of the system, as in dynamic interaction with UGVs. For example, the user might have found a new type of object of interest, e.g., a particular vehicle, which needs to be added to the system right away. The supervised approach is not practical to acquire extensive data and to annotate it. In this paper, we describe techniques for rapidly updating or creating models using sparsely labelled data. The techniques address scenarios in which only a few annotated training samples are available and need to be used to generate models suitable for recognition. These approaches are crucial for on-the-fly insertion of models by users and on-line learning.

  4. Moving object detection using keypoints reference model

    Directory of Open Access Journals (Sweden)

    Wan Zaki Wan Mimi Diyana

    2011-01-01

    Full Text Available This article presents a new method for background subtraction (BGS) and object detection for real-time video applications, using a combination of frame differencing and a scale-invariant feature detector. The method combines the benefits of background modelling and an invariant feature detector to improve accuracy in various environments. The proposed method consists of three main modules, namely the modelling, matching and subtraction modules. A comparison study of the proposed method with a popular Gaussian mixture model showed that the improvement in correct classification can reach up to 98%, with a reduction of false negative and true positive rates. Besides that, the proposed method has shown great potential to overcome the drawbacks of traditional BGS in handling challenges like shadow effects and lighting fluctuations.
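
    A stripped-down sketch in the spirit of the approach described (frame differencing combined with a keypoint-based reference model), using OpenCV's ORB detector as a stand-in for the scale-invariant detector; the module structure, function names and thresholds are illustrative, not the paper's:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def build_reference(background_gray):
    """Modelling module: keypoints + descriptors of the empty scene."""
    return orb.detectAndCompute(background_gray, None)

def detect_moving(frame_gray, prev_gray, ref_desc, diff_thresh=25):
    """Subtraction module: frame difference masked by reference matching."""
    diff = cv2.absdiff(frame_gray, prev_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    kp, desc = orb.detectAndCompute(frame_gray, mask)
    if desc is None or ref_desc is None:
        return kp                                 # nothing to compare against
    matches = bf.match(desc, ref_desc)
    matched_ids = {m.queryIdx for m in matches}
    # Keypoints in changed regions that do NOT match the background model
    return [k for i, k in enumerate(kp) if i not in matched_ids]
```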

  5. R graphics

    CERN Document Server

    Murrell, Paul

    2005-01-01

    R is revolutionizing the world of statistical computing. Powerful, flexible, and best of all free, R is now the program of choice for tens of thousands of statisticians. Destined to become an instant classic, R Graphics presents the first complete, authoritative exposition on the R graphical system. Paul Murrell, widely known as the leading expert on R graphics, has developed an in-depth resource that takes nothing for granted and helps both neophyte and seasoned users master the intricacies of R graphics. After an introductory overview of R graphics facilities, the presentation first focuses

  6. Cancer genomics object model: an object model for multiple functional genomics data for cancer research.

    Science.gov (United States)

    Park, Yu Rang; Lee, Hye Won; Cho, Sung Bum; Kim, Ju Han

    2007-01-01

    The development of functional genomics, including transcriptomics, proteomics and metabolomics, allows us to monitor a large number of key cellular pathways simultaneously. Several technology-specific data models have been introduced for the representation of functional genomics experimental data, including the MicroArray Gene Expression Object Model (MAGE-OM), the Proteomics Experiment Data Repository (PEDRo), and the Tissue MicroArray Object Model (TMA-OM). Despite the increasing number of cancer studies using multiple functional genomics technologies, there is still no integrated data model for multiple functional genomics experimental and clinical data. We propose an object-oriented data model for cancer genomics research, the Cancer Genomics Object Model (CaGe-OM). We reference four data models: the Functional Genomic Object Model, MAGE-OM, TMA-OM and PEDRo. The clinical and histopathological information models are created by analyzing the cancer management workflow and referencing the College of American Pathologists Cancer Protocols and the National Cancer Institute Common Data Elements. The CaGe-OM provides a comprehensive data model for integrated storage and analysis of clinical and multiple functional genomics data.

  7. Bilingual Object Naming: A Connectionist Model

    Science.gov (United States)

    Fang, Shin-Yi; Zinszer, Benjamin D.; Malt, Barbara C.; Li, Ping

    2016-01-01

    Patterns of object naming often differ between languages, but bilingual speakers develop convergent naming patterns in their two languages that are distinct from those of monolingual speakers of each language. This convergence appears to reflect interactions between lexical representations for the two languages. In this study, we developed a self-organizing connectionist model to simulate semantic convergence in the bilingual lexicon and investigate the mechanisms underlying this semantic convergence. We examined the similarity of patterns in the simulated data to empirical data from past research, and we identified how semantic convergence was manifested in the simulated bilingual lexical knowledge. Furthermore, we created impaired models in which components of the network were removed so as to examine the importance of the relevant components on bilingual object naming. Our results demonstrate that connections between two languages’ lexicons can be established through the simultaneous activations of related words in the two languages. These connections between languages allow the outputs of their lexicons to become more similar, that is, to converge. Our model provides a basis for future computational studies of how various input variables may affect bilingual naming patterns. PMID:27242575

  8. Mathematical structures for computer graphics

    CERN Document Server

    Janke, Steven J

    2014-01-01

    A comprehensive exploration of the mathematics behind the modeling and rendering of computer graphics scenes Mathematical Structures for Computer Graphics presents an accessible and intuitive approach to the mathematical ideas and techniques necessary for two- and three-dimensional computer graphics. Focusing on the significant mathematical results, the book establishes key algorithms used to build complex graphics scenes. Written for readers with various levels of mathematical background, the book develops a solid foundation for graphics techniques and fills in relevant grap

  9. MAGE (M-file/Mif Automatic GEnerator): A graphical interface tool for automatic generation of Object Oriented Micromagnetic Framework configuration files and Matlab scripts for results analysis

    Science.gov (United States)

    Chęciński, Jakub; Frankowski, Marek

    2016-10-01

    We present a tool for the fully automated generation of both simulation configuration files (Mif) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows for fast, error-proof and easy creation of Mifs, without the programming skills usually required for manual Mif writing. With MAGE we also provide OOMMF extensions that complement it with magnetoresistance and spin-transfer-torque calculations, as well as selection of local magnetization data for output. Our software allows for the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronized excitations. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI for the automated creation of Matlab scripts suitable for analyzing these data with Fourier and wavelet transforms as well as user-defined operations.
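
    MAGE itself is a GUI, but its core idea, generating one configuration file per point of a parameter sweep from a template, can be sketched in a few lines. The Python snippet below is an illustrative sketch only: the template placeholder, the file naming and the .mif extension are assumptions, and the content does not reproduce actual OOMMF Mif syntax or MAGE's output.

        import itertools
        from pathlib import Path

        # Hypothetical template; real OOMMF Mif files have their own (Tcl-based) syntax.
        TEMPLATE = """# sweep point: field={field_mT} mT, current={current_mA} mA
        # ... problem definition would go here ...
        """

        def generate_sweep(fields_mT, currents_mA, out_dir="sweep"):
            """Write one configuration file per (field, current) combination."""
            out = Path(out_dir)
            out.mkdir(exist_ok=True)
            for field, current in itertools.product(fields_mT, currents_mA):
                text = TEMPLATE.format(field_mT=field, current_mA=current)
                (out / f"run_H{field}_I{current}.mif").write_text(text)

        generate_sweep(fields_mT=[10, 20, 30], currents_mA=[1, 2])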

  10. COGNITIVE GRAPHICS AND SEMANTIC MODELING FOR GEOSPATIAL SOLUTIONS IN ENERGY SECTOR

    Directory of Open Access Journals (Sweden)

    L. V. Massel

    2015-01-01

    Full Text Available The author team proposes the integration of mathematical and semantic modeling and visual analytics techniques, including the use of geographic information technologies, to solve geospatial problems. We consider the following types of semantic modeling: ontological, cognitive, probabilistic and event simulation. It is shown that graphic semantic models have properties of cognitive graphics. Traditional GIS and geo-tools providing opportunities for 3D geovisualization are compared. The authors' 3D geovisualization tool, named Geocomponent, and its application to solving geospatial energy problems are described. We consider the integration of semantic modeling and 3D geovisualization as providing situational awareness for the researcher and, as a result, expanding the opportunities of visual analytics and their use in solving geospatial management problems in the energy sector.

  11. Quantum Chemistry for Solvated Molecules on Graphical Processing Units (GPUs)using Polarizable Continuum Models

    CERN Document Server

    Liu, Fang; Kulik, Heather J; Martínez, Todd J

    2015-01-01

    The conductor-like polarization model (C-PCM) with switching/Gaussian smooth discretization is a widely used implicit solvation model in chemical simulations. However, its application in quantum mechanical calculations of large-scale biomolecular systems can be limited by computational expense of both the gas phase electronic structure and the solvation interaction. We have previously used graphical processing units (GPUs) to accelerate the first of these steps. Here, we extend the use of GPUs to accelerate electronic structure calculations including C-PCM solvation. Implementation on the GPU leads to significant acceleration of the generation of the required integrals for C-PCM. We further propose two strategies to improve the solution of the required linear equations: a dynamic convergence threshold and a randomized block-Jacobi preconditioner. These strategies are not specific to GPUs and are expected to be beneficial for both CPU and GPU implementations. We benchmark the performance of the new implementat...
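
    As a rough illustration of the second strategy, the sketch below assembles a block-Jacobi preconditioner in NumPy, with a random permutation of the unknowns deciding block membership, and applies it inside SciPy's conjugate-gradient solver on a toy symmetric positive-definite system standing in for the C-PCM equations. It is one plausible reading of "randomized block-Jacobi", not the authors' GPU implementation.

        import numpy as np
        from scipy.sparse.linalg import cg, LinearOperator

        rng = np.random.default_rng(0)

        # Toy SPD system standing in for the C-PCM linear equations.
        n = 200
        A = rng.standard_normal((n, n))
        A = A @ A.T + n * np.eye(n)
        b = rng.standard_normal(n)

        # Randomized block-Jacobi preconditioner: shuffle the unknowns, split them
        # into blocks, and pre-invert each diagonal block.
        block_size = 20
        perm = rng.permutation(n)
        blocks = [perm[i:i + block_size] for i in range(0, n, block_size)]
        inv_blocks = [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in blocks]

        def apply_preconditioner(r):
            z = np.empty_like(r)
            for idx, inv in zip(blocks, inv_blocks):
                z[idx] = inv @ r[idx]
            return z

        M = LinearOperator((n, n), matvec=apply_preconditioner)
        x, info = cg(A, b, M=M)
        print("converged" if info == 0 else f"cg returned {info}",
              "residual:", np.linalg.norm(A @ x - b))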

  12. Fertility intentions and outcomes: Implementing the Theory of Planned Behavior with graphical models.

    Science.gov (United States)

    Mencarini, Letizia; Vignoli, Daniele; Gottard, Anna

    2015-03-01

    This paper studies fertility intentions and their outcomes, analyzing the complete path leading to fertility behavior according to the social psychological model of the Theory of Planned Behavior (TPB). We move beyond existing research by using graphical models to obtain a precise understanding, and a formal description, of the developmental fertility decision-making process. Our findings yield new results for the Italian case which are empirically robust and theoretically coherent, adding important insights into the effectiveness of the TPB for fertility research. In line with the TPB, all of the intentions' primary antecedents are found to be determinants of the level of fertility intentions, but do not affect fertility outcomes, being pre-filtered by fertility intentions. Nevertheless, in contrast with the TPB, background factors are not fully mediated by the intentions' primary antecedents, directly influencing fertility intentions and even fertility behaviors.

  13. Partial Optimality by Pruning for MAP-Inference with General Graphical Models.

    Science.gov (United States)

    Swoboda, Paul; Shekhovtsov, Alexander; Kappes, Jorg Hendrik; Schnorr, Christoph; Savchynskyy, Bogdan

    2016-07-01

    We consider the energy minimization problem for undirected graphical models, also known as the MAP-inference problem for Markov random fields, which is NP-hard in general. We propose a novel polynomial time algorithm to obtain a part of its optimal non-relaxed integral solution. Our algorithm is initialized with variables taking integral values in the solution of a convex relaxation of the MAP-inference problem and iteratively prunes those that do not satisfy our criterion for partial optimality. We show that our pruning strategy is in a certain sense theoretically optimal. Empirically, our method also outperforms previous approaches in terms of the number of persistently labelled variables. The method is very general, as it is applicable to models with arbitrary factors of an arbitrary order and can employ any solver for the considered relaxed problem. Our method's runtime is determined by the runtime of the convex relaxation solver for the MAP-inference problem.

  14. ASAMgpu V1.0 - a moist fully compressible atmospheric model using graphics processing units (GPUs)

    Science.gov (United States)

    Horn, S.

    2012-03-01

    In this work the three-dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence, OpenGL and GLSL are used, so the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication, allowing the usage of more than one GPU through domain decomposition. Time integration is done with an explicit three-step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. The results for four test cases are shown in this paper: a rising dry heat bubble, a cold-bubble-induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.

  15. A Knowledge Modeling Method for Computer Graphics Design & Production Based on Ontology

    Directory of Open Access Journals (Sweden)

    Chen Tong

    2017-01-01

    Full Text Available As one of the most critical stages of the CG (Computer Graphics) industry, CG design & production needs the support of professional knowledge and practical experience from multiple disciplines. With its strengths in knowledge sharing, integration and reuse, knowledge modeling can greatly increase efficiency, reduce cost and avoid repeated errors in CG design & production. However, knowledge modeling of CG design & production differs greatly from that of other fields. On the one hand, it is similar to physical product design, which involves a great deal of tacit knowledge such as modeling skills, reasoning knowledge and so on. On the other hand, like film, CG design & production needs a lot of unstructured descriptive information. The heterogeneity between physical products and film makes knowledge modeling more complicated. Thus a systematic knowledge modeling method based on Ontology is proposed in this paper to aid CG design & production. CG animation knowledge is captured and organized from three viewpoints: requirements, design and production. The knowledge is categorized into static and dynamic knowledge, and Ontology is adopted to construct a hierarchical model that organizes it, so as to offer a uniform semantic foundation for communication among designers from different fields. Based on the animation script, a CG design task model is proposed to drive the organization and management of the different knowledge involved in CG design & production. Finally, we apply this method to the knowledge modeling of naked-eye animation design and production to illustrate its effectiveness.

  16. Graphics development of DCOR: Deterministic combat model of Oak Ridge. [Deterministic Combat model of Oak Ridge (DCOR)

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, G. (Georgia Inst. of Tech., Atlanta, GA (United States)); Azmy, Y.Y. (Oak Ridge National Lab., TN (United States))

    1992-10-01

    DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
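
    The first of these distribution problems has a simple greedy form: hand out the units one at a time, each time to the gridpoint with the largest remaining (unfilled) share. The following Python sketch is an illustrative reconstruction of that step from the description above, not DCOR code.

        import numpy as np

        def place_units(density, n_units):
            """Distribute n_units discrete markers over gridpoints, one at a time,
            always choosing the gridpoint with the largest remaining (unfilled) share.
            `density` is the force density evaluated at the gridpoints."""
            density = np.asarray(density, dtype=float)
            target = density / density.sum() * n_units   # ideal fractional counts
            assigned = np.zeros_like(target, dtype=int)
            for _ in range(n_units):
                i = np.argmax(target - assigned)          # most under-served gridpoint
                assigned[i] += 1
            return assigned

        print(place_units([0.1, 0.5, 0.9, 0.2], n_units=10))  # -> [1 3 5 1]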

  17. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks.

    Directory of Open Access Journals (Sweden)

    Ting Wang

    2016-02-01

    Full Text Available Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently implemented algorithm of Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing a Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM".
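
    FastGGM is distributed as an R package, so the snippet below does not reproduce it; it only illustrates the underlying task, estimating a sparse precision matrix, and hence a conditional-independence graph, from high-dimensional data. It uses scikit-learn's graphical lasso as a stand-in estimator rather than the inference procedure of Ren et al. that FastGGM implements.

        import numpy as np
        from sklearn.covariance import GraphicalLassoCV

        rng = np.random.default_rng(1)

        # Toy data: 3 of 10 "genes" form a small dependent cluster.
        n, p = 200, 10
        X = rng.standard_normal((n, p))
        X[:, 1] = 0.7 * X[:, 0] + 0.3 * rng.standard_normal(n)
        X[:, 2] = 0.7 * X[:, 1] + 0.3 * rng.standard_normal(n)

        model = GraphicalLassoCV().fit(X)
        precision = model.precision_

        # Edges of the estimated conditional-independence graph:
        edges = [(i, j) for i in range(p) for j in range(i + 1, p)
                 if abs(precision[i, j]) > 1e-6]
        print(edges)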

  18. A graphical model approach to systematically missing data in meta-analysis of observational studies.

    Science.gov (United States)

    Kovačić, Jelena; Varnai, Veda Marija

    2016-10-30

    When studies in meta-analysis include different sets of confounders, simple analyses can cause a bias (omitting confounders that are missing in certain studies) or precision loss (omitting studies with incomplete confounders, i.e. a complete-case meta-analysis). To overcome these types of issues, a previous study proposed modelling the high correlation between partially and fully adjusted regression coefficient estimates in a bivariate meta-analysis. When multiple differently adjusted regression coefficient estimates are available, we propose exploiting such correlations in a graphical model. Compared with a previously suggested bivariate meta-analysis method, such a graphical model approach is likely to reduce the number of parameters in complex missing data settings by omitting the direct relationships between some of the estimates. We propose a structure-learning rule whose justification relies on the missingness pattern being monotone. This rule was tested using epidemiological data from a multi-centre survey. In the analysis of risk factors for early retirement, the method showed a smaller difference from a complete data odds ratio and greater precision than a commonly used complete-case meta-analysis. Three real-world applications with monotone missing patterns are provided, namely, the association between (1) the fibrinogen level and coronary heart disease, (2) the intima media thickness and vascular risk and (3) allergic asthma and depressive episodes. The proposed method allows for the inclusion of published summary data, which makes it particularly suitable for applications involving both microdata and summary data. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    Science.gov (United States)

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable

  20. Graphical representation of life paths to better convey results of decision models to patients.

    Science.gov (United States)

    Rubrichi, Stefania; Rognoni, Carla; Sacchi, Lucia; Parimbelli, Enea; Napolitano, Carlo; Mazzanti, Andrea; Quaglini, Silvana

    2015-04-01

    The inclusion of patients' perspectives in clinical practice has become an important matter for health professionals, in view of the increasing attention to patient-centered care. In this regard, this report illustrates a method for developing a visual aid that supports the physician in the process of informing patients about a critical decisional problem. In particular, we focused on interpretation of the results of decision trees embedding Markov models implemented with the commercial tool TreeAge Pro. Starting from patient-level simulations and exploiting some advanced functionalities of TreeAge Pro, we combined results to produce a novel graphical output that represents the distributions of outcomes over the lifetime for the different decision options, thus becoming a more informative decision support in a context of shared decision making. The training example used to illustrate the method is a decision tree for thromboembolism risk prevention in patients with nonvalvular atrial fibrillation.

  1. Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models

    Directory of Open Access Journals (Sweden)

    Jaka Kravanja

    2016-10-01

    Full Text Available Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors.

  2. uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications

    Science.gov (United States)

    Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.

    2015-01-01

    In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs called hosts, including: Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a unique piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987

  3. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    Science.gov (United States)

    Putnam, William

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude, and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  4. A Graphical User Interface for Parameterizing Biochemical Models of Photosynthesis and Chlorophyll Fluorescence

    Science.gov (United States)

    Kornfeld, A.; Van der Tol, C.; Berry, J. A.

    2015-12-01

    Recent advances in optical remote sensing of photosynthesis offer great promise for estimating gross primary productivity (GPP) at leaf, canopy and even global scale. These methods -including solar-induced chlorophyll fluorescence (SIF) emission, fluorescence spectra, and hyperspectral features such as the red edge and the photochemical reflectance index (PRI) - can be used to greatly enhance the predictive power of global circulation models (GCMs) by providing better constraints on GPP. The way to use measured optical data to parameterize existing models such as SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) is not trivial, however. We have therefore extended a biochemical model to include fluorescence and other parameters in a coupled treatment. To help parameterize the model, we then use nonlinear curve-fitting routines to determine the parameter set that enables model results to best fit leaf-level gas exchange and optical data measurements. To make the tool more accessible to all practitioners, we have further designed a graphical user interface (GUI) based front-end to allow researchers to analyze data with a minimum of effort while, at the same time, allowing them to change parameters interactively to visualize how variation in model parameters affect predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. Here we discuss the tool and its effectiveness, using recently-gathered leaf-level data.
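
    The GUI wraps exactly this kind of nonlinear fitting. As a simplified, assumption-laden sketch of the non-graphical core, the code below fits a toy rectangular-hyperbola light-response curve to hypothetical leaf-level gas-exchange data with SciPy; the model form, parameter names and data values are placeholders, not the coupled biochemical/SCOPE model used by the authors.

        import numpy as np
        from scipy.optimize import curve_fit

        def light_response(par, a_max, quantum_yield, r_dark):
            """Toy photosynthesis light-response model (rectangular hyperbola)."""
            return a_max * quantum_yield * par / (quantum_yield * par + a_max) - r_dark

        # Hypothetical leaf-level measurements: PAR vs. net assimilation.
        par = np.array([0, 50, 100, 200, 400, 800, 1200, 1600], dtype=float)
        a_net = np.array([-1.0, 2.1, 4.5, 7.8, 11.0, 13.2, 14.0, 14.3])

        popt, pcov = curve_fit(light_response, par, a_net, p0=[15.0, 0.05, 1.0])
        a_max, quantum_yield, r_dark = popt
        print(f"A_max={a_max:.2f}, phi={quantum_yield:.3f}, R_d={r_dark:.2f}")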

  5. NATURAL graphics

    Science.gov (United States)

    Jones, R. H.

    1984-01-01

    The hardware and software developments in computer graphics are discussed. Major topics include: system capabilities, hardware design, system compatibility, and software interface with the data base management system.

  6. Lunar-Forming Giant Impact Model Utilizing Modern Graphics Processing Units

    Indian Academy of Sciences (India)

    J. C. Eiland; T. C. Salzillo; B. H. Hokr; J. L. Highland; W. D. Mayfield; B. M. Wyatt

    2014-12-01

    Recent giant impact models focus on producing a circumplanetary disk of the proper composition around the Earth and defer to earlier works for the accretion of this disk into the Moon. The discontinuity between creating the circumplanetary disk and accretion of the Moon is unnatural and lacks simplicity. In addition, current giant impact theories are being questioned due to their inability to find conditions that will produce a system with both the proper angular momentum and a resultant Moon that is isotopically similar to the Earth. Here we return to first principles and produce a continuous model that can be used to rapidly search the vast impact parameter space to identify plausible initial conditions. This is accomplished by focusing on the three major components of planetary collisions: constant gravitational attraction, short range repulsion and energy transfer. The structure of this model makes it easily parallelizable and well-suited to harness the power of modern Graphics Processing Units (GPUs). The model makes clear the physically relevant processes, and allows a physical picture to naturally develop. We conclude by demonstrating how the model readily produces stable Earth–Moon systems from a single, continuous simulation. The resultant systems possess many desired characteristics such as an iron-deficient, heterogeneously-mixed Moon and accurate axial tilt of the Earth.
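
    As a hedged illustration of the force law described, long-range gravitational attraction plus a short-range repulsion, here is a serial NumPy sketch of one pairwise force evaluation. The repulsion form and all constants are arbitrary stand-ins, and the actual model evaluates these interactions in parallel on GPUs.

        import numpy as np

        G = 6.674e-11  # gravitational constant (SI units)

        def pairwise_forces(pos, mass, r_core, k_rep):
            """Gravity between all particle pairs plus a short-range repulsion that
            switches on when two particles come closer than r_core (toy form)."""
            n = len(mass)
            forces = np.zeros_like(pos)
            for i in range(n):
                for j in range(i + 1, n):
                    dr = pos[j] - pos[i]
                    dist = np.linalg.norm(dr)
                    f = G * mass[i] * mass[j] / dist**2          # attraction
                    if dist < r_core:
                        f -= k_rep * (r_core - dist)             # soft repulsion
                    f_vec = f * dr / dist
                    forces[i] += f_vec                           # Newton's third law
                    forces[j] -= f_vec
            return forces

        pos = np.array([[0.0, 0.0, 0.0], [1.0e6, 0.0, 0.0], [0.0, 2.0e6, 0.0]])
        mass = np.array([1e22, 1e22, 5e21])
        print(pairwise_forces(pos, mass, r_core=5.0e5, k_rep=1e12))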

  7. Developing a multiscale, multi-resolution agent-based brain tumor model by graphics processing units

    Directory of Open Access Journals (Sweden)

    Zhang Le

    2011-12-01

    Full Text Available Multiscale agent-based modeling (MABM) has been widely used to simulate Glioblastoma Multiforme (GBM) and its progression. At the intracellular level, the MABM approach employs a system of ordinary differential equations to describe quantitatively specific intracellular molecular pathways that determine phenotypic switches among cells (e.g., from migration to proliferation and vice versa). At the intercellular level, MABM describes cell-cell interactions by a discrete module. At the tissue level, partial differential equations are employed to model the diffusion of chemoattractants, which are the input factors of the intracellular molecular pathway. Moreover, multiscale analysis makes it possible to explore the molecules that play important roles in determining the cellular phenotypic switches that in turn drive the whole GBM expansion. However, owing to limited computational resources, MABM is currently a theoretical biological model that uses relatively coarse grids to simulate a few cancer cells in a small slice of brain cancer tissue. In order to improve this theoretical model to simulate and predict actual GBM cancer progression in real time, a graphics processing unit (GPU)-based parallel computing algorithm was developed and combined with the multi-resolution design to speed up the MABM. The simulated results demonstrated that the GPU-based, multi-resolution and multiscale approach can accelerate the previous MABM around 30-fold with relatively fine grids in a large extracellular matrix. Therefore, the new model has great potential for simulating and predicting real-time GBM progression, if real experimental data are incorporated.
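
    The tissue-level component mentioned above, diffusion of chemoattractants, is the part that maps most directly onto data-parallel hardware. The sketch below is a plain CPU-only NumPy explicit finite-difference step for a 2D diffusion equation, included only to make the stencil concrete; grid size, coefficients and the periodic boundary treatment are arbitrary choices, and the actual model solves this within a multiscale loop on GPUs.

        import numpy as np

        def diffuse(c, D, dx, dt, steps):
            """Explicit 5-point stencil for dc/dt = D * laplacian(c) with periodic
            boundaries via np.roll. Stable while D*dt/dx**2 <= 0.25."""
            c = c.copy()
            for _ in range(steps):
                lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
                       np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
                c += dt * D * lap
            return c

        c0 = np.zeros((64, 64))
        c0[32, 32] = 1.0                       # point source of chemoattractant
        c1 = diffuse(c0, D=1e-9, dx=1e-5, dt=2e-2, steps=100)
        print(c1.max(), c1.sum())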

  8. DINAMO: a coupled sequence alignment editor/molecular graphics tool for interactive homology modeling of proteins.

    Science.gov (United States)

    Hansen, M; Bentz, J; Baucom, A; Gregoret, L

    1998-01-01

    Gaining functional information about a novel protein is a universal problem in biomedical research. With the explosive growth of the protein sequence and structural databases, it is becoming increasingly common for researchers to attempt to build a three-dimensional model of their protein of interest in order to gain information about its structure and interactions with other molecules. The two most reliable methods for predicting the structure of a protein are homology modeling, in which the novel sequence is modeled on the known three-dimensional structure of a related protein, and fold recognition (threading), where the sequence is scored against a library of fold models, and the highest scoring model is selected. The sequence alignment to a known structure can be ambiguous, and human intervention is often required to optimize the model. We describe an interactive model building and assessment tool in which a sequence alignment editor is dynamically coupled to a molecular graphics display. By means of a set of assessment tools, the user may optimize his or her alignment to satisfy the known heuristics of protein structure. Adjustments to the sequence alignment made by the user are reflected in the displayed model by color and other visual cues. For instance, residues are colored by hydrophobicity in both the three-dimensional model and in the sequence alignment. This aids the user in identifying undesirable buried polar residues. Several different evaluation metrics may be selected including residue conservation, residue properties, and visualization of predicted secondary structure. These characteristics may be mapped to the model both singly and in combination. DINAMO is a Java-based tool that may be run either over the web or installed locally. Its modular architecture also allows Java-literate users to add plug-ins of their own design.

  9. A Curriculum Model: Engineering Design Graphics Course Updates Based on Industrial and Academic Institution Requirements

    Science.gov (United States)

    Meznarich, R. A.; Shava, R. C.; Lightner, S. L.

    2009-01-01

    Engineering design graphics courses taught in colleges or universities should provide and equip students preparing for employment with the basic occupational graphics skill competences required by engineering and technology disciplines. Academic institutions should introduce and include topics that cover the newer and more efficient graphics…

  11. Inferring Caravaggio's studio lighting and praxis in The calling of St. Matthew by computer graphics modeling

    Science.gov (United States)

    Stork, David G.; Nagy, Gabor

    2010-02-01

    We explored the working methods of the Italian Baroque master Caravaggio through computer graphics reconstruction of his studio, with special focus on his use of lighting and illumination in The calling of St. Matthew. Although he surely took artistic liberties while constructing this and other works and did not strive to provide a "photographic" rendering of the tableau before him, there are nevertheless numerous visual clues to the likely studio conditions and working methods within the painting: the falloff of brightness along the rear wall, the relative brightness of the faces of figures, and the variation in sharpness of cast shadows (i.e., umbrae and penumbrae). We explored two studio lighting hypotheses: that the primary illumination was local (and hence artificial) and that it was distant solar. We find that the visual evidence can be consistent with local (artificial) illumination if Caravaggio painted his figures separately, adjusting the brightness on each to compensate for the falloff in illumination. Alternatively, the evidence is consistent with solar illumination only if the rear wall had particular reflectance properties, as described by a bi-directional reflectance distribution function, BRDF. (Ours is the first research applying computer graphics to the understanding of artists' praxis that models subtle reflectance properties of surfaces through BRDFs, a technique that may find use in studies of other artists.) A somewhat puzzling visual feature, unnoted in the scholarly literature, is the upward-slanting cast shadow in the upper-right corner of the painting. We found this shadow is naturally consistent with a local illuminant passing through a small window perpendicular to the viewer's line of sight, but could also be consistent with solar illumination if the shadow was due to a slanted, overhanging section of a roof outside the artist's studio. Our results place likely conditions upon any hypotheses concerning Caravaggio's working methods and

  12. Graphic Storytelling

    Science.gov (United States)

    Thompson, John

    2009-01-01

    Graphic storytelling is a medium that allows students to make and share stories, while developing their art communication skills. American comics today are more varied in genre, approach, and audience than ever before. When considering the impact of Japanese manga on the youth, graphic storytelling emerges as a powerful player in pop culture. In…

  14. A computer graphical user interface for survival mixture modelling of recurrent infections.

    Science.gov (United States)

    Lee, Andy H; Zhao, Yun; Yau, Kelvin K W; Ng, S K

    2009-03-01

    Recurrent infections data are commonly encountered in medical research, where the recurrent events are characterised by an acute phase followed by a stable phase after the index episode. Two-component survival mixture models, in both proportional hazards and accelerated failure time settings, are presented as a flexible method of analysing such data. To account for the inherent dependency of the recurrent observations, random effects are incorporated within the conditional hazard function, in the manner of generalised linear mixed models. Assuming a Weibull or log-logistic baseline hazard in both mixture components of the survival mixture model, an EM algorithm is developed for the residual maximum quasi-likelihood estimation of fixed effect and variance component parameters. The methodology is implemented as a graphical user interface coded using Microsoft Visual C++. An application modelling recurrent urinary tract infections in elderly women is illustrated, where significant individual variations are evident at both the acute and stable phases. The survival mixture methodology developed enables practitioners to identify pertinent risk factors affecting the recurrent times and to draw valid conclusions inferred from these correlated and heterogeneous survival data.

  15. A structured and object oriented approach to training system modeling

    OpenAIRE

    Malysheva Elena Yuryevna; Bobrovsky Sergey Michailovich

    2015-01-01

    Structured Analysis and Object-Oriented Analysis are widely adopted for system modelling. The article describes examples of modelling a university training system, used as illustrations of both structured modelling and object-oriented modelling.

  16. Exploratory graphical models of functional and structural connectivity patterns for Alzheimer's Disease diagnosis

    Directory of Open Access Journals (Sweden)

    Andres eOrtiz

    2015-11-01

    Full Text Available Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in the brain activation patterns, along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables to factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as Sparse Gaussian Graphical Models, allow revealing conditional independence between regions by estimating the covariance between two variables given the rest as constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment) and AD subjects. Sparse computation fits perfectly here, as brain regions usually only interact with a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Grey Matter) densities of different regions reveals different structural covariation patterns between the different groups. Thus, the different connectivity patterns for Controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to leverage the classification accuracy. The results obtained in this work show the usefulness

  17. Exploratory graphical models of functional and structural connectivity patterns for Alzheimer's Disease diagnosis

    Science.gov (United States)

    Ortiz, Andrés; Munilla, Jorge; Álvarez-Illán, Ignacio; Górriz, Juan M.; Ramírez, Javier

    2015-01-01

    Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in the brain activation patterns along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables to factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as Sparse Gaussian Graphical Models allow revealing conditional independence between regions by estimating the covariance between two variables given the rest as constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment), and AD subjects. Sparse computation fits perfectly here as brain regions usually only interact with a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Gray Matter) densities of different regions reveals different structural covariation patterns between the different groups. Thus, the different connectivity patterns for controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to leverage the classification accuracy. The results obtained in this work show the

  18. Pairwise graphical models for structural health monitoring with dense sensor arrays

    Science.gov (United States)

    Mohammadi Ghazi, Reza; Chen, Justin G.; Büyüköztürk, Oral

    2017-09-01

    Through advances in sensor technology and development of camera-based measurement techniques, it has become affordable to obtain high spatial resolution data from structures. Although measured datasets become more informative by increasing the number of sensors, the spatial dependencies between sensor data are increased at the same time. Therefore, appropriate data analysis techniques are needed to handle the inference problem in presence of these dependencies. In this paper, we propose a novel approach that uses graphical models (GM) for considering the spatial dependencies between sensor measurements in dense sensor networks or arrays to improve damage localization accuracy in structural health monitoring (SHM) application. Because there are always unobserved damaged states in this application, the available information is insufficient for learning the GMs. To overcome this challenge, we propose an approximated model that uses the mutual information between sensor measurements to learn the GMs. The study is backed by experimental validation of the method on two test structures. The first is a three-story two-bay steel model structure that is instrumented by MEMS accelerometers. The second experimental setup consists of a plate structure and a video camera to measure the displacement field of the plate. Our results show that considering the spatial dependencies by the proposed algorithm can significantly improve damage localization accuracy.
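
    The key approximation, learning the graph from mutual information between sensor measurements, can be sketched compactly. The code below gives an illustrative histogram-based estimate of pairwise mutual information for a toy sensor array and keeps the strongest pairs as graph edges; the binning choice and the threshold are arbitrary and are not taken from the paper.

        import numpy as np

        def mutual_information(x, y, bins=16):
            """Histogram-based mutual information estimate (in nats) between two signals."""
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        # Toy sensor array: 6 channels sharing a common source, with increasing independent noise.
        rng = np.random.default_rng(0)
        base = rng.standard_normal(5000)
        signals = np.stack([base + 0.3 * i * rng.standard_normal(5000) for i in range(6)])

        n = len(signals)
        mi = np.array([[mutual_information(signals[i], signals[j]) for j in range(n)]
                       for i in range(n)])
        edges = [(i, j) for i in range(n) for j in range(i + 1, n) if mi[i, j] > 0.5]
        print(edges)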

  19. Parallel flow accumulation algorithms for graphical processing units with application to RUSLE model

    Science.gov (United States)

    Sten, Johan; Lilja, Harri; Hyväluoma, Jari; Westerholm, Jan; Aspnäs, Mats

    2016-04-01

    Digital elevation models (DEMs) are widely used in the modeling of surface hydrology, which typically includes the determination of flow directions and flow accumulation. The use of high-resolution DEMs increases the accuracy of flow accumulation computation, but as a drawback, the computational time may become excessively long if large areas are analyzed. In this paper we investigate the use of graphical processing units (GPUs) for efficient flow accumulation calculations. We present two new parallel flow accumulation algorithms based on dependency transfer and topological sorting and compare them to previously published flow transfer and indegree-based algorithms. We benchmark the GPU implementations against industry standards, ArcGIS and SAGA. With the flow-transfer D8 flow routing model and binary input data, a speed up of 19 is achieved compared to ArcGIS and 15 compared to SAGA. We show that on GPUs the topological sort-based flow accumulation algorithm leads on average to a speedup by a factor of 7 over the flow-transfer algorithm. Thus a total speed up of the order of 100 is achieved. We test the algorithms by applying them to the Revised Universal Soil Loss Equation (RUSLE) erosion model. For this purpose we present parallel versions of the slope, LS factor and RUSLE algorithms and show that the RUSLE erosion results for an area of 12 km x 24 km containing 72 million cells can be calculated in less than a second. Since flow accumulation is needed in many hydrological models, the developed algorithms may find use in many other applications than RUSLE modeling. The algorithm based on topological sorting is particularly promising for dynamic hydrological models where flow accumulations are repeatedly computed over an unchanged DEM.
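
    To make the topological-sorting idea concrete, here is a small serial Python sketch: each cell drains to its steepest of the eight neighbours (D8), and accumulated area is propagated in an order in which every cell is processed only after all cells draining into it (Kahn's algorithm). This is a CPU toy that illustrates the dependency structure the GPU algorithms exploit, not the authors' parallel implementation.

        import numpy as np
        from collections import deque

        def d8_flow_accumulation(dem):
            """Flow accumulation on a DEM: every cell sends its accumulated area to the
            steepest-descent neighbour (D8); cells are processed in topological order."""
            rows, cols = dem.shape
            nbrs = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
            receiver = -np.ones((rows, cols, 2), dtype=int)    # downstream cell of each cell
            indegree = np.zeros((rows, cols), dtype=int)

            for r in range(rows):
                for c in range(cols):
                    best, drop = None, 0.0
                    for dr, dc in nbrs:
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            d = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                            if d > drop:
                                best, drop = (rr, cc), d
                    if best is not None:
                        receiver[r, c] = best
                        indegree[best] += 1

            acc = np.ones((rows, cols))                        # each cell contributes itself
            queue = deque(zip(*np.where(indegree == 0)))       # cells nothing drains into
            while queue:
                r, c = queue.popleft()
                rr, cc = receiver[r, c]
                if rr >= 0:                                    # has a downstream receiver
                    acc[rr, cc] += acc[r, c]
                    indegree[rr, cc] -= 1
                    if indegree[rr, cc] == 0:
                        queue.append((rr, cc))
            return acc

        dem = np.array([[5., 4., 3.],
                        [4., 3., 2.],
                        [3., 2., 1.]])
        print(d8_flow_accumulation(dem))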

  20. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    Science.gov (United States)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.

  1. Architectural Theory and Graphical Criteria for Modelling Certain Late Gothic Projects by Hernan Ruiz "the Elder"

    Directory of Open Access Journals (Sweden)

    Antonio Luis Ampliato Briones

    2014-10-01

    Full Text Available This paper primarily reflects on the need to create graphical codes for producing images intended to communicate architecture. Each step of the drawing needs to be a deliberate process in which the proposed code highlights the relationship between architectural theory and graphic action. Our aim is not to draw the result of the architectural process but the design structure of the actual process; to draw as we design; to draw as we build. This analysis of the work of the Late Gothic architect Hernan Ruiz the Elder, from Cordoba, addresses two aspects: the historical and architectural investigation, and the graphical project for communication purposes.

  2. GRAPHICAL MODELLING OF THE OBJECTS – A BASIC ELEMENT IN TEACHING TECHNICAL DRAWING

    Directory of Open Access Journals (Sweden)

    CLINCIU Ramona

    2015-06-01

    Full Text Available The paper presents applications developed using the AutoCAD and 3D Studio MAX programs. The purpose of the applications is to develop the spatial abilities of students; they are frequently used in teaching technical drawing, both for understanding the representation of the orthogonal projections of parts and for constructing their axonometric projections.

  3. Database Graphic User Interface correspondence with Ellis Information Seeking behavior Model

    Directory of Open Access Journals (Sweden)

    Muhammad Azami

    2010-03-01

    Full Text Available A graphic user interface serves as a bridge between users and databases; its primary purpose is to assist users by establishing interaction with computer systems. Database user interface designers have seldom focused on the impact of users' information seeking behavior on database user interface structures. It is therefore crucial to incorporate information seeking behavior within database software design and to analyze its impact on the upgrade and optimization of the user interface environment. The present study intends to determine the degree of correspondence between database interfaces and the information seeking behavioral components of Ellis' model. The components studied were starting, chaining, browsing, differentiating, monitoring and extracting. The investigators employed a direct observation method, using a checklist, to see how well the database interfaces support these components. Results indicated that the information seeking behavior components outlined by Ellis' model are not fully considered in database user interface design. Some components, such as starting, chaining and differentiating, were supported to some extent by some of the database user interfaces studied, whereas browsing, monitoring and extracting have not been incorporated within the user interface structures of these databases. On the whole, the degree of correspondence of database user interfaces with Ellis' information seeking components is about average. Incorporating these elements in the design and evaluation of the user interface environment could therefore substantially improve the database interface environment and, consequently, the process of search and retrieval.

  4. Geometric database maintenance using CCTV cameras and overlay graphics

    Science.gov (United States)

    Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin

    1988-01-01

    An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.

  5. GRAPHIC ADVERTISING, SPECIALIZED COMMUNICATIONS MODEL THROUGH SYMBOLS, WORDS, IMAGES

    Directory of Open Access Journals (Sweden)

    ADRONACHI Maria

    2011-06-01

    Full Text Available The aim of the paper is to identify the components of graphic advertising (symbol, text, colour), to illustrate how they cooperate to create the advertising message, and to analyze the correlation between product, advertising and consumer.

  6. Inference in Graphical Gaussian Models with Edge and Vertex Symmetries with the gRc Package for R

    DEFF Research Database (Denmark)

    Højsgaard, Søren; Lauritzen, Steffen L

    2007-01-01

    In this paper we present the R package gRc for statistical inference in graphical Gaussian models in which symmetry restrictions have been imposed on the concentration or partial correlation matrix. The models are represented by coloured graphs where parameters associated with edges or vertices of the same colour are restricted to being identical. We describe algorithms for maximum likelihood estimation and discuss model selection issues. The paper illustrates the practical use of the gRc package.

  7. Combining features in a graphical model to predict protein binding sites.

    Science.gov (United States)

    Wierschin, Torsten; Wang, Keyu; Welter, Marlon; Waack, Stephan; Stanke, Mario

    2015-05-01

    Large efforts have been made in classifying residues as binding sites in proteins using machine learning methods. The prediction task can be translated into the computational challenge of assigning each residue the label binding site or non-binding site. Observational data comes from various possibly highly correlated sources. It includes the structure of the protein but not the structure of the complex. The model class of conditional random fields (CRFs) has previously successfully been used for protein binding site prediction. Here, a new CRF-approach is presented that models the dependencies of residues using a general graphical structure defined as a neighborhood graph and thus our model makes fewer independence assumptions on the labels than sequential labeling approaches. A novel node feature "change in free energy" is introduced into the model, which is then denoted by ΔF-CRF. Parameters are trained with an online large-margin algorithm. Using the standard feature class relative accessible surface area alone, the general graph-structure CRF already achieves higher prediction accuracy than the linear chain CRF of Li et al. ΔF-CRF performs significantly better on a large range of false positive rates than the support-vector-machine-based program PresCont of Zellner et al. on a homodimer set containing 128 chains. ΔF-CRF has a broader scope than PresCont since it is not constrained to protein subgroups and requires no multiple sequence alignment. The improvement is attributed to the advantageous combination of the novel node feature with the standard feature and to the adopted parameter training method.

  8. Graphic Review

    DEFF Research Database (Denmark)

    Breiting, Søren

    2002-01-01

    Introduction to the 'graphic review' as a method for carrying understanding from one teaching session to the next in teacher education and primary school.

  9. Graphics gems

    CERN Document Server

    Glassner, Andrew S

    1993-01-01

    ""The GRAPHICS GEMS Series"" was started in 1990 by Andrew Glassner. The vision and purpose of the Series was - and still is - to provide tips, techniques, and algorithms for graphics programmers. All of the gems are written by programmers who work in the field and are motivated by a common desire to share interesting ideas and tools with their colleagues. Each volume provides a new set of innovative solutions to a variety of programming problems.

  10. Determining species expansion and extinction possibilities using probabilistic and graphical models

    Directory of Open Access Journals (Sweden)

    Chaturvedi Rajesh

    2015-03-01

    Full Text Available The survival of plant species is governed by a number of functions. The contribution of each function to species survival, and the impact when a species behaves contrary to it, vary from function to function, so the probability of extinction differs across these scenarios and has to be calculated separately. Secondly, species follow different patterns of dispersal and localisation at different stages of the occupancy state of a site; therefore, scenarios in which competition for resources combines with climatic shifts, leading to deterioration and loss of biodiversity and ultimately to extinction, need to be studied. Furthermore, the most probable deviations of species from climax community states need to be calculated before species become extinct due to sudden environmental disruption. Globally, various types of anthropogenic disturbances threaten the diversity of biological systems, and their impact needs to be analysed to identify extinction patterns with respect to these activities. In this study, all of the above analyses are approached through probabilistic and graphical models.

  11. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    Science.gov (United States)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to run computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of using GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve the three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used to implement the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimizing the CFD code with respect to the use of different types of memory are considered, and the speedup of the GPU solution relative to the solution on a central processing unit (CPU) is measured. Performance measurements show that the numerical schemes developed achieve a 20-50x speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
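
    The record only summarizes the GPU solver; no code is given. Purely as a structural illustration (not the authors' 3D Euler/Navier-Stokes CUDA code; function and variable names are invented), here is a minimal NumPy sketch of a first-order upwind finite-volume update for 1D linear advection, the same per-cell flux-difference pattern that such solvers parallelize on a GPU.

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One first-order upwind finite-volume step for u_t + a u_x = 0 (a > 0),
    with periodic boundaries. Each cell is updated from the flux difference
    across its faces -- the per-cell work a GPU kernel would perform in parallel."""
    flux = a * u                          # upwind face flux (taken from the left cell)
    return u - dt / dx * (flux - np.roll(flux, 1))

# Advect a smooth bump once around a periodic domain.
n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / a                         # CFL-limited time step
u = np.exp(-200 * (x - 0.3) ** 2)
for _ in range(int(1.0 / (a * dt))):
    u = upwind_step(u, a, dx, dt)
```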

  12. Extremely large-scale simulation of a Kardar-Parisi-Zhang model using graphics cards.

    Science.gov (United States)

    Kelling, Jeffrey; Ódor, Géza

    2011-12-01

    The octahedron model introduced recently has been implemented on graphics cards, which permits extremely large-scale simulations via binary lattice gases and bit-coded algorithms. We confirm scaling behavior belonging to the two-dimensional Kardar-Parisi-Zhang universality class and find a surface growth exponent β = 0.2415(15) on 2^17 × 2^17 systems, ruling out β = 1/4 suggested by field theory. The maximum speedup with respect to a single CPU is 240. The steady state has been analyzed by finite-size scaling and a roughness exponent α = 0.393(4) is found. Correction-to-scaling exponents are computed and the power-spectrum density of the steady state is determined. We calculate the universal scaling functions and cumulants and show that the limit distribution can be obtained for the sizes considered. We provide numerical fits for the small- and large-tail behavior of the steady-state scaling function of the interface width.
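
    The growth exponent β quoted above comes from the scaling of the interface width in the growth regime, W(t) ∝ t^β. A minimal sketch of that kind of fit, run here on synthetic data rather than on the paper's simulation output, is:

```python
import numpy as np

# Synthetic width-versus-time data obeying W(t) ~ t^beta with a little noise.
rng = np.random.default_rng(0)
t = np.logspace(1, 5, 40)
beta_true = 0.24
W = t ** beta_true * np.exp(rng.normal(0.0, 0.02, t.size))

# In the growth regime, log W = beta * log t + const, so a straight-line fit
# in log-log coordinates estimates the growth exponent.
beta_est, _ = np.polyfit(np.log(t), np.log(W), 1)
print(f"estimated growth exponent beta ~= {beta_est:.3f}")
```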

  13. Latent Variable Graphical Model Selection using Harmonic Analysis: Applications to the Human Connectome Project (HCP).

    Science.gov (United States)

    Kim, Won Hwa; Kim, Hyunwoo J; Adluru, Nagesh; Singh, Vikas

    2016-06-01

    A major goal of imaging studies such as the (ongoing) Human Connectome Project (HCP) is to characterize the structural network map of the human brain and identify its associations with covariates such as genotype, risk factors, and so on that correspond to an individual. But the set of image-derived measures and the set of covariates are both large, so we must first estimate a 'parsimonious' set of relations between the measurements. For instance, a Gaussian graphical model will show conditional independences between the random variables, which can then be used to set up specific downstream analyses. But most such data involve a large list of 'latent' variables that remain unobserved, yet affect the 'observed' variables substantially. Accounting for such latent variables is not directly addressed by standard precision matrix estimation, and is tackled via highly specialized optimization methods. This paper offers a unique harmonic analysis view of this problem. By casting the estimation of the precision matrix in terms of a composition of low-frequency latent variables and high-frequency sparse terms, we show how the problem can be formulated using a new wavelet-type expansion in non-Euclidean spaces. Our formulation poses the estimation problem in the frequency space and shows how it can be solved by a simple sub-gradient scheme. We provide a set of scientific results on ~500 scans from the recently released HCP data where our algorithm recovers highly interpretable and sparse conditional dependencies between brain connectivity pathways and well-known covariates.
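
    For a concrete starting point, a standard sparse precision-matrix estimate (the kind of baseline that the latent-variable, harmonic-analysis formulation above improves on) can be obtained with the graphical lasso. The sketch below uses scikit-learn on synthetic data and does not implement the wavelet-type expansion described in the record.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Synthetic data standing in for image-derived measures and covariates.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
X[:, 1] += 0.8 * X[:, 0]              # induce one conditional dependence

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_          # sparse precision (inverse covariance) matrix

# Nonzero off-diagonal entries indicate conditional dependencies between variables;
# latent-variable methods additionally account for a low-rank term before sparsifying.
edges = np.argwhere(np.abs(np.triu(precision, k=1)) > 1e-3)
print(edges)
```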

  14. Modelling object typicality in description logics

    CSIR Research Space (South Africa)

    Britz, K

    2009-12-01

    Full Text Available A DL knowledge base consists of a TBox, which contains terminological axioms, and an ABox, which contains assertions, i.e. facts about specific named objects and relationships between objects in the domain. Depending on the expressive power of the DL, a knowledge... An interpretation I satisfies C ⊑ D, written I ⊨ C ⊑ D, iff C^I ⊆ D^I. C ⊑ D is valid, written ⊨ C ⊑ D, iff it is satisfied by all interpretations. RBox statements include role inclusions of the form R ⊑ S, and assertions used to define role properties...

  15. Graphics Gems III IBM version

    CERN Document Server

    Kirk, David

    1994-01-01

    This sequel to Graphics Gems (Academic Press, 1990), and Graphics Gems II (Academic Press, 1991) is a practical collection of computer graphics programming tools and techniques. Graphics Gems III contains a larger percentage of gems related to modeling and rendering, particularly lighting and shading. This new edition also covers image processing, numerical and programming techniques, modeling and transformations, 2D and 3D geometry and algorithms,ray tracing and radiosity, rendering, and more clever new tools and tricks for graphics programming. Volume III also includes a

  16. Trajectory recognition as the basis for object individuation: A functional model of object file instantiation and object token encoding

    Directory of Open Access Journals (Sweden)

    Chris eFields

    2011-03-01

    Full Text Available The perception of persisting visual objects is mediated by transient intermediate representations, object files, that are instantiated in response to some, but not all, visual trajectories. The standard object file concept does not, however, provide a mechanism sufficient to account for all experimental data on visual object persistence, object tracking, and the ability to perceive spatially-disconnected stimuli as continuously-existing objects. Based on relevant anatomical, functional, and developmental data, a functional model is constructed that bases visual object individuation on the recognition of temporal sequences of apparent center-of-mass positions that are specifically identified as trajectories by dedicated trajectory recognition networks downstream of the medial-temporal motion detection area. This model is shown to account for a wide range of data, and to generate a variety of testable predictions. Individual differences in the recognition, abstraction and encoding of trajectory information are expected to generate distinct object persistence judgments and object recognition abilities. Dominance of trajectory information over feature information in stored object tokens during early infancy, in particular, is expected to disrupt the ability to re-identify human and other individuals across perceptual episodes, and lead to developmental outcomes with characteristics of autism spectrum disorders.

  17. Downsizer - A Graphical User Interface-Based Application for Browsing, Acquiring, and Formatting Time-Series Data for Hydrologic Modeling

    Science.gov (United States)

    Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.

    2009-01-01

    The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.

  18. A Prototype Educational Model for Hepatobiliary Interventions: Unveiling the Role of Graphic Designers in Medical 3D Printing.

    Science.gov (United States)

    Javan, Ramin; Zeman, Merissa N

    2017-08-14

    In the context of medical three-dimensional (3D) printing, in addition to 3D reconstruction from cross-sectional imaging, graphic design plays a role in developing and/or enhancing 3D-printed models. A custom prototype modular 3D model of the liver was graphically designed depicting segmental anatomy of the parenchyma containing color-coded hepatic vasculature and biliary tree. Subsequently, 3D printing was performed using transparent resin for the surface of the liver and polyamide material to develop hollow internal structures that allow for passage of catheters and wires. A number of concepts were incorporated into the model. A representative mass with surrounding feeding arterial supply was embedded to demonstrate tumor embolization. A straight narrow hollow tract connecting the mass to the surface of the liver, displaying the path of a biopsy device's needle, and the concept of needle "throw" length was designed. A connection between the middle hepatic and right portal veins was created to demonstrate transjugular intrahepatic portosystemic shunt (TIPS) placement. A hollow amorphous structure representing an abscess was created to allow the demonstration of drainage catheter placement with the formation of pigtail tip. Percutaneous biliary drain and cholecystostomy tube placement were also represented. The skills of graphic designers may be utilized in creating highly customized 3D-printed models. A model was developed for the demonstration and simulation of multiple hepatobiliary interventions, for training purposes, patient counseling and consenting, and as a prototype for future development of a functioning interventional phantom.

  19. Graphical Potential Games

    OpenAIRE

    Ortiz, Luis E.

    2015-01-01

    Potential games, originally introduced in the early 1990's by Lloyd Shapley, the 2012 Nobel Laureate in Economics, and his colleague Dov Monderer, are a very important class of models in game theory. They have special properties such as the existence of Nash equilibria in pure strategies. This note introduces graphical versions of potential games. Special cases of graphical potential games have already found applicability in many areas of science and engineering beyond economics, including ar...

  20. Image-Based Multiresolution Implicit Object Modeling

    Directory of Open Access Journals (Sweden)

    Sarti Augusto

    2002-01-01

    Full Text Available We discuss two image-based 3D modeling methods based on a multiresolution evolution of a volumetric function's level set. In the former method, the role of the level set implosion is to fuse ("sew" and "stitch") together several partial reconstructions (depth maps) into a closed model. In the latter, the level set's implosion is steered directly by the texture mismatch between views. Both solutions share the characteristic of operating in an adaptive multiresolution fashion, in order to boost computational efficiency and robustness.

  1. Graphic Ecologies

    Directory of Open Access Journals (Sweden)

    Brook Weld Muller

    2014-12-01

    Full Text Available This essay describes strategic approaches to graphic representation associated with critical environmental engagement and that build from the idea of works of architecture as stitches in the ecological fabric of the city. It focuses on the building up of partial or fragmented graphics in order to describe inclusive, open-ended possibilities for making architecture that marry rich experience and responsive performance. An aphoristic approach to crafting drawings involves complex layering, conscious absence and the embracing of tension. A self-critical attitude toward the generation of imagery characterized by the notion of ‘loose precision’ may lead to more transformative and environmentally responsive architectures.

  2. Modelling Framework of a Neural Object Recognition

    Directory of Open Access Journals (Sweden)

    Aswathy K S

    2016-02-01

    Full Text Available In many industrial, medical and scientific image processing applications, various feature and pattern recognition techniques are used to match specific features in an image with a known template. Despite the capabilities of these techniques, some applications require simultaneous analysis of multiple, complex, and irregular features within an image, as in semiconductor wafer inspection. In wafer inspection the defects discovered are often complex and irregular and demand more human-like inspection techniques to recognize the irregularities. By incorporating neural network techniques, such image processing systems can be trained on large numbers of images until the system eventually learns to recognize irregularities. The aim of this project is to develop a framework for a machine-learning system that can classify objects of different categories. The framework utilizes MATLAB toolboxes such as the Computer Vision Toolbox and the Neural Network Toolbox.

  3. Perception in statistical graphics

    Science.gov (United States)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  4. Prediction of Local Quality of Protein Structure Models Considering Spatial Neighbors in Graphical Models

    Science.gov (United States)

    Shin, Woong-Hee; Kang, Xuejiao; Zhang, Jian; Kihara, Daisuke

    2017-01-01

    Protein tertiary structure prediction methods have matured in recent years. However, some proteins defy accurate prediction due to factors such as inadequate template structures. While existing model quality assessment methods predict global model quality relatively well, there is substantial room for improvement in local quality assessment, i.e. assessment of the error at each residue position in a model. Local quality is a very important information for practical applications of structure models such as interpreting/designing site-directed mutagenesis of proteins. We have developed a novel local quality assessment method for protein tertiary structure models. The method, named Graph-based Model Quality assessment method (GMQ), explicitly considers the predicted quality of spatially neighboring residues using a graph representation of a query protein structure model. GMQ uses conditional random field as its core of the algorithm, and performs a binary prediction of the quality of each residue in a model, indicating if a residue position is likely to be within an error cutoff or not. The accuracy of GMQ was improved by considering larger graphs to include quality information of more surrounding residues. Moreover, we found that using different edge weights in graphs reflecting different secondary structures further improves the accuracy. GMQ showed competitive performance on a benchmark for quality assessment of structure models from the Critical Assessment of Techniques for Protein Structure Prediction (CASP). PMID:28074879

  5. HEP graphics and visualization

    CERN Document Server

    Drevermann, Hans; CERN. Geneva

    1992-01-01

    The lectures will give an overview of the use of graphics in high-energy physics, i.e. for detector design, event representation and interactive analysis in 2D and 3D. An introduction to graphics packages (GKS, PHIGS, etc.) will be given, including discussion of the basic concepts of graphics programming. Emphasis is put on new ideas about graphical representation of events. Non-linear visualisation techniques, to improve the ease of understanding, will be described in detail. Physiological aspects, which play a role when using colours and when drawing mathematical objects like points and lines, are discussed. An analysis will be made of the power of graphics to represent very complex data in 2 and 3 dimensions, and the advantages of different representations will be compared.New techniques based on graphics are emerging today, such as multimedia or real-life pictures. Some are used in other domains of scientific research, as will be described and an overview of possible applications in our field will be give...

  6. Nonlinear modeling of an aerospace object dynamics

    Science.gov (United States)

    Davydov, I. E.; Davydov, E. I.

    2017-01-01

    Scientific results obtained by modeling the motion of complicated technical systems of aerospace equipment, with nonlinearities taken into account, are presented. A computerized panel has been developed that allows the mutual influence of the system's motion and the stabilization device to be measured with consideration of its real characteristics. An analysis of the motion stability of the system as a whole has been carried out, and time histories of the system's motion taking the nonlinearities into account are presented.

  7. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui)

    OpenAIRE

    Magezi, David A.

    2015-01-01

    Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).

  8. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui).

    Science.gov (United States)

    Magezi, David A

    2015-01-01

    Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
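
    LMMgui itself wraps the lme4 package in R. For readers working in Python, an analogous within-participant random-intercept model can be fitted with statsmodels; the column names and simulated data below are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated within-participant data: each subject contributes trials in two conditions.
rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(20), 40)
condition = np.tile(np.repeat(["A", "B"], 20), 20)
rt = (500 + (condition == "B") * 30
      + rng.normal(0, 20, subjects.size)
      + rng.normal(0, 40, 20)[subjects])        # per-subject random intercepts
data = pd.DataFrame({"subject": subjects, "condition": condition, "rt": rt})

# Random intercept per participant, fixed effect of condition
# (roughly lme4's  rt ~ condition + (1 | subject)).
fit = smf.mixedlm("rt ~ condition", data, groups=data["subject"]).fit()
print(fit.summary())
```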

  9. A graphical simulation model of the entire DNA process associated with the analysis of short tandem repeat loci

    OpenAIRE

    Gill, Peter; Curran, James; Elliot, Keith

    2005-01-01

    The use of expert systems to interpret short tandem repeat DNA profiles in forensic, medical and ancient DNA applications is becoming increasingly prevalent as high-throughput analytical systems generate large amounts of data that are time-consuming to process. With special reference to low copy number (LCN) applications, we use a graphical model to simulate stochastic variation associated with the entire DNA process starting with extraction of sample, followed by the processing associated wi...
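
    The record describes a graphical model that propagates stochastic variation through the whole DNA analysis chain. As a much-simplified illustration of that idea (not the authors' model; all efficiencies and the detection threshold are made up), one can follow the template molecules of a single allele through extraction, aliquot sampling and PCR with binomial draws and estimate a dropout probability.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_allele(n_start=20, p_extract=0.6, frac_aliquot=0.2,
                    p_pcr=0.8, cycles=28, detection_threshold=2e7):
    """Toy forward simulation of one allele through a low-copy-number DNA process."""
    n = rng.binomial(n_start, p_extract)      # molecules surviving extraction
    n = rng.binomial(n, frac_aliquot)         # molecules pipetted into the PCR tube
    for _ in range(cycles):                   # each cycle copies a random subset
        n += rng.binomial(n, p_pcr)
    return n >= detection_threshold           # allele detected (no dropout)?

dropout_rate = 1.0 - np.mean([simulate_allele() for _ in range(2000)])
print(f"simulated allele dropout probability ~= {dropout_rate:.2f}")
```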

  10. Probabilistic Graphical Models for the Analysis and Synthesis of Musical Audio

    Science.gov (United States)

    2010-11-01

    To evaluate this transcription technique, two pieces of music from the MIDI-Aligned Piano Sounds (MAPS) database were analyzed. The surviving fragments of this record also cover variational inference as background material and report transcription accuracy versus the binarization threshold τ for two recordings analyzed using two sets of piano samples.

  11. THE CAPABILITIES USING OF THREE-DIMENSIONAL MODELING SYSTEM AUTOCAD IN TEACHING TO PERFORM GRAPHICS TASKS

    Directory of Open Access Journals (Sweden)

    A. V. Krasnyuk

    2008-03-01

    Full Text Available The three-dimensional design capabilities of the AutoCAD system for performing graphic tasks are presented in the article. On the basis of the studies conducted, the features of applying the computer-aided design system are noted, and methods that considerably reduce the number of errors made in preparing drawings are offered.

  12. CASTOR detector Model, objectives and simulated performance

    CERN Document Server

    Angelis, Aris L S; Bartke, Jerzy; Bogolyubsky, M Yu; Chileev, K; Erine, S; Gladysz-Dziadus, E; Kharlov, Yu V; Kurepin, A B; Lobanov, M O; Maevskaya, A I; Mavromanolakis, G; Nicolis, N G; Panagiotou, A D; Sadovsky, S A; Wlodarczyk, Z

    2001-01-01

    We present a phenomenological model describing the formation and evolution of a Centauro fireball in the baryon-rich region in nucleus-nucleus interactions in the upper atmosphere and at the LHC. The small particle multiplicity and imbalance of electromagnetic and hadronic content characterizing a Centauro event and also the strongly penetrating particles (assumed to be strangelets) frequently accompanying them can be naturally explained. We describe the CASTOR calorimeter, a subdetector of the ALICE experiment dedicated to the search for Centauro in the very forward, baryon-rich region of central Pb+Pb collisions at the LHC. The basic characteristics and simulated performance of the calorimeter are presented. (22 refs).

  13. CASTOR detector. Model, objectives and simulated performance

    Energy Technology Data Exchange (ETDEWEB)

    Angelis, A. L. S.; Mavromanolakis, G.; Panagiotou, A. D. [University of Athens, Nuclear and Particle Physics Division, Athens (Greece); Aslanoglou, X.; Nicolis, N. [Ioannina Univ., Ioannina (Greece). Dept. of Physics; Bartke, J.; Gladysz-Dziadus, E. [Institute of Nuclear Physics, Cracow (Poland); Lobanov, M.; Erine, S.; Kharlov, Y.V.; Bogolyubsky, M.Y. [Institute for High Energy Physics, Protvino (Russian Federation); Kurepin, A.B.; Chileev, K. [Institute for Nuclear Research, Moscow (Russian Federation); Wlodarczyk, Z. [Pedagogical University, Institute of Physics, Kielce (Poland)

    2001-10-01

    A phenomenological model is presented describing the formation and evolution of a Centauro fireball in the baryon-rich region in nucleus-nucleus interactions in the upper atmosphere and at the LHC. The small particle multiplicity and imbalance of electromagnetic and hadronic content characterizing a Centauro event, and also the strongly penetrating particles (assumed to be strangelets) frequently accompanying them, can be naturally explained. The CASTOR calorimeter, a subdetector of the ALICE experiment dedicated to the search for Centauros in the very forward, baryon-rich region of central Pb+Pb collisions at the LHC, is described. The basic characteristics and simulated performance of the calorimeter are presented.

  14. Graphic notation

    DEFF Research Database (Denmark)

    Bergstrøm-Nielsen, Carl

    2010-01-01

    Graphic notation is taught to music therapy students at Aalborg University in both simple and elaborate forms. This is a method of depicting music visually, and notations may serve as memory aids, as aids for analysis and reflection, and for communication purposes such as supervision or within...

  15. A Telemetry Parameter Configuration Tool Based on Graphic Modeling

    Institute of Scientific and Technical Information of China (English)

    罗毓芳; 李强; 韩洪波

    2012-01-01

    A graphic modeling tool is designed for the configuration of telemetry parameters of multiple satellites. Physical objects are represented as independent modules; graphic elements and a GUI (Graphical User Interface) are used to establish their logical model, and telemetry parameter configuration information and graphic models can be bound to it. A specification for describing telemetry parameter configuration information is drawn up based on the characteristics of the model, the logical model is described with this specification, and the resulting model is stored in XML (eXtensible Markup Language) file format. The tool can be reused across different satellites. Engineering applications show that the tool visualizes the configuration process and makes it more concise; at the same time, it reduces repetitive workload, lowers the risk of mistakes, and increases the efficiency and quality of the configuration process by batch-processing multi-satellite telemetry parameter configuration.

  16. Object interaction competence model v. 2.0

    DEFF Research Database (Denmark)

    Bennedsen, Jens; Schulte, C.

    2013-01-01

    Teaching and learning object oriented programming has to take into account the specific object oriented characteristics of program execution, namely the interaction of objects during runtime. Prior to the research reported in this article, we have developed a competence model for object interaction...

  17. A unified computational model of the development of object unity, object permanence, and occluded object trajectory perception.

    Science.gov (United States)

    Franz, A; Triesch, J

    2010-12-01

    The perception of the unity of objects, their permanence when out of sight, and the ability to perceive continuous object trajectories even during occlusion belong to the first and most important capacities that infants have to acquire. Despite much research a unified model of the development of these abilities is still missing. Here we make an attempt to provide such a unified model. We present a recurrent artificial neural network that learns to predict the motion of stimuli occluding each other and that develops representations of occluded object parts. It represents completely occluded, moving objects for several time steps and successfully predicts their reappearance after occlusion. This framework allows us to account for a broad range of experimental data. Specifically, the model explains how the perception of object unity develops, the role of the width of the occluders, and it also accounts for differences between data for moving and stationary stimuli. We demonstrate that these abilities can be acquired by learning to predict the sensory input. The model makes specific predictions and provides a unifying framework that has the potential to be extended to other visual event categories.

  18. Untangling the complex inter-relationships between horse managers' perceptions of effectiveness of biosecurity practices using Bayesian graphical modelling.

    Science.gov (United States)

    Schemann, Kathrin; Lewis, Fraser I; Firestone, Simon M; Ward, Michael P; Toribio, Jenny-Ann L M L; Taylor, Melanie R; Dhand, Navneet K

    2013-05-15

    On-farm biosecurity practices have been promoted in many animal industries to protect animal populations from infections. Current approaches based on regression modelling techniques for assessing biosecurity perceptions and practices are limited for analysis of the interrelationships between multivariate data. A suitable approach, which does not require background knowledge of relationships, is provided by Bayesian network modelling. Here we apply such an approach to explore the complex interrelationships between the variables representing horse managers' perceptions of effectiveness of on-farm biosecurity practices. The dataset was derived from interviews conducted with 200 horse managers in Australia after the 2007 equine influenza outbreak. Using established computationally intensive techniques, an optimal graphical statistical model was identified whose structure was objectively determined, directly from the observed data. This methodology is directly analogous to multivariate regression (i.e. multiple response variables). First, an optimal model structure was identified using an exact (exhaustive) search algorithm, followed by pruning the selected model for over-fitting by the parametric bootstrapping approach. Perceptions about effectiveness of movement restrictions and access control were linked but were generally segregated from the perceptions about effectiveness of personal and equipment hygiene. Horse managers believing in the effectiveness of complying with movement restrictions in stopping equine influenza spread onto their premises were also more likely to believe in the effectiveness of reducing their own contact with other horses and curtailing professional visits. Similarly, the variables representing the effectiveness of disinfecting vehicles, using a disinfectant footbath, changing into clean clothes on arrival at the premises and washing hands before contact with managed horses were clustered together. In contrast, horse managers believing in

  19. Multi-objective vs. single-objective calibration of a hydrologic model using single- and multi-objective screening

    Science.gov (United States)

    Mai, Juliane; Cuntz, Matthias; Shafii, Mahyar; Zink, Matthias; Schäfer, David; Thober, Stephan; Samaniego, Luis; Tolson, Bryan

    2016-04-01

    Hydrologic models are traditionally calibrated against observed streamflow. Recent studies have shown however, that only a few global model parameters are constrained using this kind of integral signal. They can be identified using prior screening techniques. Since different objectives might constrain different parameters, it is advisable to use multiple information to calibrate those models. One common approach is to combine these multiple objectives (MO) into one single objective (SO) function and allow the use of a SO optimization algorithm. Another strategy is to consider the different objectives separately and apply a MO Pareto optimization algorithm. In this study, two major research questions will be addressed: 1) How do multi-objective calibrations compare with corresponding single-objective calibrations? 2) How much do calibration results deteriorate when the number of calibrated parameters is reduced by a prior screening technique? The hydrologic model employed in this study is a distributed hydrologic model (mHM) with 52 model parameters, i.e. transfer coefficients. The model uses grid cells as a primary hydrologic unit, and accounts for processes like snow accumulation and melting, soil moisture dynamics, infiltration, surface runoff, evapotranspiration, subsurface storage and discharge generation. The model is applied in three distinct catchments over Europe. The SO calibrations are performed using the Dynamically Dimensioned Search (DDS) algorithm with a fixed budget while the MO calibrations are achieved using the Pareto Dynamically Dimensioned Search (PA-DDS) algorithm allowing for the same budget. The two objectives used here are the Nash Sutcliffe Efficiency (NSE) of the simulated streamflow and the NSE of the logarithmic transformation. It is shown that the SO DDS results are located close to the edges of the Pareto fronts of the PA-DDS. The MO calibrations are hence preferable due to their supply of multiple equivalent solutions from which the
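
    The two objectives used in this study are the Nash-Sutcliffe efficiency (NSE) of streamflow and of log-transformed streamflow. The sketch below shows how such objectives can be computed and, for the single-objective variant, merged with a weight; the arrays and the 50/50 weight are illustrative, and the study's optimizers (DDS and PA-DDS) are not reproduced here.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean benchmark."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def objectives(sim, obs, eps=1e-6):
    """Return the two calibration objectives: NSE of flows and NSE of log flows."""
    return nse(sim, obs), nse(np.log(sim + eps), np.log(obs + eps))

# Single-objective calibration collapses the pair into one weighted value,
# whereas Pareto-based calibration keeps both objectives separate.
def combined(sim, obs, w=0.5):
    o1, o2 = objectives(sim, obs)
    return w * o1 + (1.0 - w) * o2

obs = np.array([1.2, 3.4, 8.0, 5.5, 2.1, 1.0])
sim = np.array([1.0, 3.0, 7.2, 6.0, 2.5, 1.1])
print(objectives(sim, obs), combined(sim, obs))
```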

  20. Vocational Teaching Cube System of Engineering Graphics

    Institute of Scientific and Technical Information of China (English)

    Yang Daofu; Liu Shenli

    2003-01-01

    Based on long-term research on vocational teaching cube theory in graphics education and an analysis of the intellectual structure involved in reading engineering drawings, a three-dimensional graphics intellect model made up of 100 cubes is constructed and tested in higher vocational graphics education. This system serves as good guidance for graphics teaching.

  1. Understanding the 2.5th dimension: modelling the graphic language of products

    OpenAIRE

    Mulder-Nijkamp, Maaike; Eggink, Wouter; Kovacevic, Ahmed; Ion, William; McMahon, Christopher

    2011-01-01

    Recognizing a product of a specific brand without seeing the logo is difficult. But for companies it is important to distinguish themselves from competitors with a consistent portfolio, which will be easily recognized by their target consumers. The recognition of brands and their associated brand values can take place in different ways. In this paper a framework is discussed to analyze a brand at different levels of graphical dimensions. The proposed framework distinguishes the difference bet...

  2. Modeling 4D Human-Object Interactions for Joint Event Segmentation, Recognition, and Object Localization.

    Science.gov (United States)

    Wei, Ping; Zhao, Yibiao; Zheng, Nanning; Zhu, Song-Chun

    2016-06-01

    In this paper, we present a 4D human-object interaction (4DHOI) model for solving three vision tasks jointly: i) event segmentation from a video sequence, ii) event recognition and parsing, and iii) contextual object localization. The 4DHOI model represents the geometric, temporal, and semantic relations in daily events involving human-object interactions. In 3D space, the interactions of human poses and contextual objects are modeled by semantic co-occurrence and geometric compatibility. On the time axis, the interactions are represented as a sequence of atomic event transitions with coherent objects. The 4DHOI model is a hierarchical spatial-temporal graph representation which can be used for inferring scene functionality and object affordance. The graph structures and parameters are learned using an ordered expectation maximization algorithm which mines the spatial-temporal structures of events from RGB-D video samples. Given an input RGB-D video, the inference is performed by a dynamic programming beam search algorithm which simultaneously carries out event segmentation, recognition, and object localization. We collected and released a large multiview RGB-D event dataset which contains 3,815 video sequences and 383,036 RGB-D frames captured by three RGB-D cameras. The experimental results on three challenging datasets demonstrate the strength of the proposed method.

  3. 3D Object Recognition Based on Linear Lie Algebra Model

    Institute of Scientific and Technical Information of China (English)

    LI Fang-xing; WU Ping-dong; SUN Hua-fei; PENG Lin-yu

    2009-01-01

    A surface model called the fibre bundle model and a 3D object model based on the linear Lie algebra model are proposed. An algorithm for 3D object recognition using the linear Lie algebra models is then presented. It is a convenient recognition method for objects that are symmetric about some axis. Using the presented algorithm, the representation matrices of the fibre or the base curve can be obtained from only a finite number of points of the linear Lie algebra model. Finally, some recognition results on real objects are given.

  4. Viewpoints: a framework for object oriented database modelling and distribution

    Directory of Open Access Journals (Sweden)

    Fouzia Benchikha

    2006-01-01

    Full Text Available The viewpoint concept has received widespread attention recently. Its integration into a data model improves the flexibility of the conventional object-oriented data model and allows one to improve the modelling power of objects. The viewpoint paradigm can be used as a means of providing multiple descriptions of an object and as a means of mastering the complexity of current database systems enabling them to be developed in a distributed manner. The contribution of this paper is twofold: to define an object data model integrating viewpoints in databases and to present a federated database system integrating multiple sources following a local-as-extended-view approach.

  5. A General Polygon-based Deformable Model for Object Recognition

    DEFF Research Database (Denmark)

    Jensen, Rune Fisker; Carstensen, Jens Michael

    1999-01-01

    We propose a general scheme for object localization and recognition based on a deformable model. The model combines shape and image properties by warping a arbitrary prototype intensity template according to the deformation in shape. The shape deformations are constrained by a probabilistic...... distribution, which combined with a match of the warped intensity template and the image form the final criteria used for localization and recognition of a given object. The chosen representation gives the model an ability to model an almost arbitrary object. Beside the actual model a full general scheme...

  6. An Object Extraction Model Using Association Rules and Dependence Analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Extracting objects from legacy systems is a basic step in a system's object-orientation to improve the maintainability and understandability of the systems. A new object extraction model using association rules and dependence analysis is proposed. In this model, data are classified by association rules and the corresponding operations are partitioned by dependence analysis.

  7. Object Oriented Toolbox for Modelling and Simulation of Dynamic Systems

    DEFF Research Database (Denmark)

    Thomsen, Per Grove; Poulsen, Mikael Zebbelin; Wagner, Falko Jens;

    1999-01-01

    Design and implementation of a simulation toolbox based on object-oriented modelling techniques; experimental implementation in C++ using the Godess ODE-solution platform.

  8. Conceptual Modeling of Events as Information Objects and Change Agents

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    We model an event as a totality of an information object and a change agent. When an event is modeled as an information object, it is comparable to an entity that exists only at a specific point in time; it has attributes and can be used for querying and the specification of constraints. When an event is modeled as a change agent...

  9. The Application Model of Moving Objects in Cargo Delivery System

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng-li; ZHOU Ming-tian; XU Bo

    2004-01-01

    The development of spatio-temporal database systems is primarily motivated by applications which track and present mobile objects. In this paper, solutions for establishing a moving object database based on a GPS/GIS environment are presented, and a data model for moving objects is given using temporal logic to extend the query language; finally, the application model in a cargo delivery system is shown.

  10. A Deep-Structured Conditional Random Field Model for Object Silhouette Tracking.

    Directory of Open Access Journals (Sweden)

    Mohammad Javad Shafiee

    Full Text Available In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that can change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering.

  11. Multivariate determinants of self-management in Health Care: assessing Health Empowerment Model by comparison between structural equation and graphical models approaches

    Directory of Open Access Journals (Sweden)

    Filippo Trentini

    2015-03-01

    Full Text Available Background. In public health, one debated issue relates to the consequences of improper self-management in health care. Some theoretical models proposed in Health Communication theory highlight how components such as general literacy and specific knowledge of the disease might be very important for effective action in the healthcare system. Methods. This paper investigates the consistency of the Health Empowerment Model by means of both a graphical models approach, which is a “data-driven” method, and a Structural Equation Modeling (SEM) approach, which is instead “theory-driven”, showing the different information patterns that can be revealed in a health care research context. The analyzed dataset provides data on the relationship between the Health Empowerment Model constructs and behavioral and health status in 263 chronic low back pain (cLBP) patients. We used the graphical models approach to evaluate the dependence structure in a “blind” way, thus learning the structure from the data. Results. The estimated dependence structure confirms the links assumed directly by the researchers in the SEM approach, thus validating the hypotheses which generated the Health Empowerment Model constructs. Conclusions. This model comparison helps in avoiding confirmation bias. For Structural Equation Modeling we used the SPSS AMOS 21 software; graphical modeling algorithms were implemented in the R software environment.

  12. Several Strategies on 3D Modeling of Manmade Objects

    Institute of Scientific and Technical Information of China (English)

    SHAO Zhenfeng; LI Deren; CHENG Qimin

    2004-01-01

    Several different strategies of 3D modeling are adopted for different kinds of manmade objects. First, for manmade objects with regular structure, if 2D information is available and elevation information can be obtained conveniently, their 3D modeling can be executed directly. Second, for manmade objects with comparatively complicated structure for which related stereo image pairs can be acquired, we complete their 3D modeling on the basis of a topology-based 3D model by integrating automatic and semi-automatic object extraction. Third, for the most complicated objects, whose geometrical information cannot be obtained completely from stereo image pairs, we turn to a topological 3D model based on CAD.

  13. Continuum neural dynamics models for visual object identification

    Science.gov (United States)

    Singh, Vijay; Tchernookov, Martin; Nemenman, Ilya

    2013-03-01

    Visual object identification has remained one of the most challenging problems even after decades of research. Most of the current models of the visual cortex represent neurons as discrete elements in a largely feedforward network arrangement. They are generally very specific in the objects they can identify. We develop a continuum model of recurrent, nonlinear neural dynamics in the primary visual cortex, incorporating connectivity patterns and other experimentally observed features of the cortex. The model has an interesting correspondence to the Landau-DeGennes theory of a nematic liquid crystal in two dimensions. We use collective spatiotemporal excitations of the model cortex as a signal for segmentation of contiguous objects from the background clutter. The model is capable of suppressing clutter in images and filling in occluded elements of object contours, resulting in high-precision, high-recall identification of large objects from cluttered scenes. This research has been partially supported by the ARO grant No. 60704-NS-II.

  14. Protein Nano-Object Integrator (ProNOI for generating atomic style objects for molecular modeling

    Directory of Open Access Journals (Sweden)

    Smith Nicholas

    2012-12-01

    Full Text Available Abstract Background With the progress of nanotechnology, one frequently has to model biological macromolecules simultaneously with nano-objects. However, the atomic structures of the nano-objects are typically not available, or they are solid-state entities. Because of that, researchers have to investigate such nano systems by generating models of the nano-objects in a manner that allows existing software to carry out the simulations. In addition, it should allow generating composite objects with complex shapes by combining basic geometrical figures and embedding biological macromolecules within the system. Results Here we report the Protein Nano-Object Integrator (ProNOI), which allows for generating atomic-style geometrical objects with user-desired shape and dimensions. An unlimited number of objects can be created and combined with biological macromolecules in a Protein Data Bank (PDB) format file. Once the objects are generated, the users can use sliders to manipulate their shape, dimension and absolute position. In addition, the software offers the option to charge the objects with either a specified surface or volumetric charge density and to model them with user-desired dielectric constants. According to the user preference, the biological macromolecule atoms can be assigned charges and radii according to four different force fields: Amber, Charmm, OPLS and PARSE. The biological macromolecules and the atomic-style objects are exported as a position, charge and radius (PQR) file, or, if a default dielectric constant distribution is not selected, as a position, charge, radius and epsilon (PQRE) file. As an illustration of the capabilities of ProNOI, we created a composite object in the shape of a robot, aptly named the Clemson Robot, whose parts are charged with various volumetric charge densities and which holds the barnase-barstar protein complex in its hand. Conclusions The Protein Nano-Object Integrator (ProNOI) is a convenient tool for

  15. Protein Nano-Object Integrator (ProNOI) for generating atomic style objects for molecular modeling.

    Science.gov (United States)

    Smith, Nicholas; Campbell, Brandon; Li, Lin; Li, Chuan; Alexov, Emil

    2012-12-05

    With the progress of nanotechnology, one frequently has to model biological macromolecules simultaneously with nano-objects. However, the atomic structures of the nano objects are typically not available or they are solid state entities. Because of that, researchers have to investigate such nano systems by generating models of the nano objects in a manner that allows existing software to carry out the simulations. In addition, it should allow generating composite objects with complex shape by combining basic geometrical figures and embedding biological macromolecules within the system. Here we report the Protein Nano-Object Integrator (ProNOI) which allows for generating atomic-style geometrical objects with user desired shape and dimensions. Unlimited number of objects can be created and combined with biological macromolecules in Protein Data Bank (PDB) format file. Once the objects are generated, the users can use sliders to manipulate their shape, dimension and absolute position. In addition, the software offers the option to charge the objects with either specified surface or volumetric charge density and to model them with user-desired dielectric constants. According to the user preference, the biological macromolecule atoms can be assigned charges and radii according to four different force fields: Amber, Charmm, OPLS and PARSE. The biological macromolecules and the atomic-style objects are exported as a position, charge and radius (PQR) file, or if a default dielectric constant distribution is not selected, it is exported as a position, charge, radius and epsilon (PQRE) file. As illustration of the capabilities of the ProNOI, we created a composite object in a shape of a robot, aptly named the Clemson Robot, whose parts are charged with various volumetric charge densities and holds the barnase-barstar protein complex in its hand. The Protein Nano-Object Integrator (ProNOI) is a convenient tool for generating atomic-style nano shapes in conjunction with

  16. A Bayesian Alternative for Multi-objective Ecohydrological Model Specification

    Science.gov (United States)

    Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.

    2015-12-01

    Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of the catchments, which are usually more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov Chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations focused on one objective likelihood (Streamflow/LAI) and multi-objective likelihoods (Streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-leibler divergence (KLD) is used to measure the dis(similarity) between different priors and corresponding posterior distributions to examine the parameter sensitivity. Results show that different prior distributions can strongly influence posterior distributions for parameters, especially when the available data is limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits in different cases based on multi-objective likelihoods vs. single objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to different data types.
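
    The Kullback-Leibler divergence mentioned above compares a parameter's prior with its posterior; a value near zero indicates that the data did little to move the prior, i.e. the parameter is insensitive to that objective. A minimal, discretized sketch of the computation on illustrative histograms is:

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q) for two histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Histograms of one parameter under its prior and under two hypothetical posteriors.
prior           = np.array([10, 10, 10, 10, 10], dtype=float)   # flat prior
posterior_sharp = np.array([ 1,  2, 40,  5,  2], dtype=float)   # data-informed
posterior_flat  = np.array([ 9, 11, 10, 10, 10], dtype=float)   # barely updated

print(kld(posterior_sharp, prior))   # large: parameter is well constrained
print(kld(posterior_flat, prior))    # near zero: parameter is insensitive
```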

  17. An object-oriented modelling framework for the arterial wall.

    Science.gov (United States)

    Balaguera, M I; Briceño, J C; Glazier, J A

    2010-02-01

    An object-oriented modelling framework for the arterial wall is presented. The novelty of the framework is the possibility of generating customizable artery models, taking advantage of imaging technology. To our knowledge, this is the first object-oriented modelling framework for the arterial wall. Existing models do not allow close structural mapping with arterial microstructure as in the object-oriented framework. In the implemented model, passive behaviour of the arterial wall was considered and the tunica adventitia was the objective system. As verification, a model of an arterial segment was generated. In order to simulate its deformation, a matrix structural mechanics simulator was implemented. Two simulations were conducted, one for an axial loading test and the other for a pressure-volume test. Each simulation began with a sensitivity analysis in order to determine the best parameter combination and to compare the results with analogue controls. In both cases, the simulated results closely reproduced qualitatively and quantitatively the analogue control plots.

  18. 3D for Graphic Designers

    CERN Document Server

    Connell, Ellery

    2011-01-01

    Helping graphic designers expand their 2D skills into the 3D space The trend in graphic design is towards 3D, with the demand for motion graphics, animation, photorealism, and interactivity rapidly increasing. And with the meteoric rise of iPads, smartphones, and other interactive devices, the design landscape is changing faster than ever.2D digital artists who need a quick and efficient way to join this brave new world will want 3D for Graphic Designers. Readers get hands-on basic training in working in the 3D space, including product design, industrial design and visualization, modeling, ani

  19. Dynamic Cell Formation based on Multi-objective Optimization Model

    Directory of Open Access Journals (Sweden)

    Guozhu Jia

    2013-08-01

    Full Text Available In this paper, a multi-objective model is proposed to address the dynamic cellular manufacturing (DCM formation problem. This model considers four conflicting objectives: relocation cost, machine utilization, material handling cost and maintenance cost. The model also considers the situation that some machines could be shared by more than one cell at the same period. A genetic algorithm is applied to get the solution of this mathematical model. Three numerical examples are simulated to evaluate the validity of this model.  

  20. Improved parameter estimation for hydrological models using weighted object functions

    NARCIS (Netherlands)

    Stein, A.; Zaadnoordijk, W.J.

    1999-01-01

    This paper discusses the sensitivity of calibration of hydrological model parameters to different objective functions. Several functions are defined with weights depending upon the hydrological background. These are compared with an objective function based upon kriging. Calibration is applied to pi

  1. VFM:Visual Feedback Model for Robust Object Recognition

    Institute of Scientific and Technical Information of China (English)

    王冲; 黄凯奇

    2015-01-01

    Object recognition, which consists of classification and detection, has two important attributes for robustness: 1) closeness: detection windows should be as close to object locations as possible, and 2) adaptiveness: object matching should be adaptive to object variations within an object class. It is difficult to satisfy both attributes using traditional methods which consider classification and detection separately; thus recent studies propose to combine them based on confidence contextualization and foreground modeling. However, these combinations neglect feature saliency and object structure, and biological evidence suggests that the feature saliency and object structure can be important in guiding the recognition from low level to high level. In fact, object recognition originates in the mechanism of “what” and “where” pathways in human visual systems. More importantly, these pathways have feedback to each other and exchange useful information, which may improve closeness and adaptiveness. Inspired by the visual feedback, we propose a robust object recognition framework by designing a computational visual feedback model (VFM) between classification and detection. In the “what” feedback, the feature saliency from classification is exploited to rectify detection windows for better closeness; while in the “where” feedback, object parts from detection are used to match object structure for better adaptiveness. Experimental results show that the “what” and “where” feedback is effective to improve closeness and adaptiveness for object recognition, and encouraging improvements are obtained on the challenging PASCAL VOC 2007 dataset.

  2. Modeling of Location Estimation for Object Tracking in WSN

    Directory of Open Access Journals (Sweden)

    Hung-Chi Chu

    2013-01-01

    Full Text Available Location estimation for object tracking is one of the important topics in the research of wireless sensor networks (WSNs). Recently, many location estimation or positioning schemes in WSNs have been proposed. In this paper, we propose the procedure and modeling of location estimation for object tracking in WSN. The designed model is a simple scheme without complex processing. We use Matlab to conduct the simulation and numerical analyses to find the optimal modeling variables. The analyses consider different variables, including the object moving model, sensing radius, model weighting value α, and power-level increasing ratio k of neighboring sensor nodes. For practical consideration, we also carry out the shadowing model for analysis.
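
    The paper's exact estimation procedure is not reproduced here; as a rough illustration of weighting neighboring nodes by their sensed power level, a generic weighted-centroid estimate might look like the sketch below, where the exponent standing in for the model weighting value α is an assumption.

        import numpy as np

        def estimate_location(anchors, rss, alpha=1.0):
            # Weighted-centroid estimate: each neighboring node's position is weighted
            # by its received signal strength raised to a model weight alpha
            # (a generic stand-in for the paper's weighting scheme).
            anchors = np.asarray(anchors, dtype=float)
            w = np.asarray(rss, dtype=float) ** alpha
            return (w[:, None] * anchors).sum(axis=0) / w.sum()

        # Toy example: three sensor nodes hear the object with different power levels.
        anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
        rss = [3.0, 1.0, 1.0]            # strongest reading near the first node
        print(estimate_location(anchors, rss, alpha=1.5))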

  3. The Game Object Model and expansive learning: Creation ...

    African Journals Online (AJOL)

    The Game Object Model and expansive learning: Creation, instantiation, ... into the design, integration, evaluation and use of video games in learning and teaching. ... individual understanding of the role of games in education and transformed ...

  4. A contact stress model for multifingered grasps of rough objects

    Science.gov (United States)

    Sinha, Pramath Raj; Abel, Jacob M.

    1990-01-01

    The model developed utilizes a contact-stress analysis of an arbitrarily shaped object in a multifingered grasp. The fingers and the object are all treated as elastic bodies, and the region of contact is modeled as a deformable surface patch. The relationship between the friction and normal forces is nonlocal and nonlinear in nature and departs from the Coulomb approximation. The nature of the constraints arising out of conditions for compatibility and static equilibrium motivated the formulation of the model as a nonlinear constrained minimization problem. The model is able to predict the magnitude of the inwardly directed normal forces and both the magnitude and direction of the tangential (friction) forces at each finger-object interface for grasped objects in static equilibrium.
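
    A minimal sketch of the same idea, posed as a constrained minimization with SciPy, is given below. The planar two-finger geometry and Coulomb-style friction bound are simplifications of the paper's nonlocal, nonlinear friction relationship, torque balance is omitted, and all numerical values are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        # Planar toy grasp: two fingers press on opposite sides of an object along x.
        # Unknowns: [n1, t1, n2, t2] = normal and tangential force at each contact.
        normals = np.array([[1.0, 0.0], [-1.0, 0.0]])     # inward contact normals
        tangents = np.array([[0.0, 1.0], [0.0, -1.0]])
        gravity = np.array([0.0, -1.0])                   # external load on the object
        mu = 0.4                                          # illustrative friction bound

        def contact_wrench(x):
            n1, t1, n2, t2 = x
            return (n1 * normals[0] + t1 * tangents[0]
                    + n2 * normals[1] + t2 * tangents[1])

        objective = lambda x: x[0] ** 2 + x[2] ** 2       # keep normal forces small
        constraints = [
            {"type": "eq", "fun": lambda x: contact_wrench(x) + gravity},  # force balance
            {"type": "ineq", "fun": lambda x: mu * x[0] - abs(x[1])},      # friction limit, finger 1
            {"type": "ineq", "fun": lambda x: mu * x[2] - abs(x[3])},      # friction limit, finger 2
            {"type": "ineq", "fun": lambda x: x[[0, 2]]},                  # inward normals only
        ]
        res = minimize(objective, x0=[1.0, 0.0, 1.0, 0.0],
                       constraints=constraints, method="SLSQP")
        print(res.x)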

  5. Research on the Adaptive Object-Model Architecture Style

    Institute of Scientific and Technical Information of China (English)

    YAO Hai-qiong; NI Gui-qiang

    2004-01-01

    The rapidly changing requirements and business rules stimulate software developers to make their applications more dynamic, configurable, and adaptable. An effective way to meet such requirements is to apply an adaptive object-model (AOM). The AOM architecture style is composed of a metamodel, a model engine and tools. Firstly, two small patterns for building up the metamodel are analyzed in detail. Then the model engine for interpreting the metamodel and the tools for end-users to define and configure object models are discussed. Finally, a novel platform, applicationware, is proposed.
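
    As a rough illustration of the metamodel idea, here is a minimal TypeObject/Property sketch; the class and attribute names are invented for the example and do not come from the paper.

        # Minimal sketch of an adaptive object-model metamodel: types and properties
        # are data interpreted at run time, not hard-coded classes.

        class PropertyType:
            def __init__(self, name, value_type):
                self.name, self.value_type = name, value_type

        class EntityType:                     # the metamodel: "classes" defined as data
            def __init__(self, name, property_types):
                self.name = name
                self.property_types = {p.name: p for p in property_types}

        class Entity:                         # the model engine instantiates types at run time
            def __init__(self, entity_type):
                self.entity_type = entity_type
                self.properties = {}

            def set(self, name, value):
                ptype = self.entity_type.property_types[name]   # validated against metamodel
                if not isinstance(value, ptype.value_type):
                    raise TypeError(f"{name} expects {ptype.value_type.__name__}")
                self.properties[name] = value

        # End users can add new "classes" by editing data, not code:
        invoice_type = EntityType("Invoice", [PropertyType("amount", float),
                                              PropertyType("customer", str)])
        inv = Entity(invoice_type)
        inv.set("amount", 99.5)
        inv.set("customer", "ACME")
        print(inv.properties)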

  6. A Method Named Stroke Modeling for Building Graphics Device Interface

    Institute of Scientific and Technical Information of China (English)

    黄东运; 雷欢; 卢杏坚

    2012-01-01

    When a live video surveillance system interacts with users, it needs to draw user controls, regions of interest, activity marks and digital images quickly to avoid a frozen display, which requires the graphics device interface module to draw or paint quickly. We propose a new method named stroke modeling, which takes advantage of the drawing primitives provided by DirectX to transform drawing tasks into stroke objects, enabling the video system at run time to render and display quickly by submitting the pre-generated stroke objects to DirectX, cutting the drawing time dramatically. Following the method, we have developed a graphics device interface module that can be used for quick graphical rendering or drawing.
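
    The paper's implementation relies on DirectX primitives; the platform-neutral sketch below only illustrates the general stroke-object idea of converting drawing tasks once and submitting the pre-generated primitives in the render loop. All names and the stubbed draw call are hypothetical.

        # Drawing tasks are converted once into pre-generated primitive lists
        # ("strokes"); the per-frame render loop only submits them to the graphics
        # API, which is stubbed out here.

        class Stroke:
            def __init__(self, primitives):
                self.primitives = primitives          # e.g. pre-tessellated line segments

        def build_rectangle_stroke(x, y, w, h):
            # Conversion happens outside the render loop, so its cost is paid once.
            corners = [(x, y), (x + w, y), (x + w, y + h), (x, y + h), (x, y)]
            return Stroke(list(zip(corners[:-1], corners[1:])))

        def render(strokes, draw_line):
            for stroke in strokes:                    # per-frame work is submission only
                for p0, p1 in stroke.primitives:
                    draw_line(p0, p1)

        region_of_interest = build_rectangle_stroke(10, 10, 80, 40)
        render([region_of_interest], draw_line=lambda p0, p1: print("line", p0, p1))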

  7. ASAMgpu V1.0 – a moist fully compressible atmospheric model using graphics processing units (GPUs

    Directory of Open Access Journals (Sweden)

    S. Horn

    2012-03-01

    Full Text Available In this work the three-dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence, OpenGL and GLSL are used, so that the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication, allowing the usage of more than one GPU through domain decomposition. Time integration is done with an explicit three-step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. The results for four test cases are shown in this paper: a rising dry heat bubble, a cold-bubble-induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.
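
    For orientation, an explicit three-stage Runge-Kutta step of the kind commonly used in atmospheric dynamics can be sketched as below; this is the Wicker-Skamarock form often cited in this context, and the acoustic time-splitting substeps of ASAMgpu are omitted. The toy right-hand side is an assumption for the example.

        import numpy as np

        def rk3_step(f, y, t, dt):
            # One explicit three-stage Runge-Kutta step (Wicker-Skamarock form);
            # the acoustic small steps used in the full model are not shown.
            y1 = y + dt / 3.0 * f(t, y)
            y2 = y + dt / 2.0 * f(t + dt / 3.0, y1)
            return y + dt * f(t + dt / 2.0, y2)

        # Toy usage: relaxation toward an environmental value, dy/dt = -k * (y - y_env).
        f = lambda t, y: -0.5 * (y - 1.0)
        y, t, dt = np.array([0.0]), 0.0, 0.1
        for _ in range(50):
            y = rk3_step(f, y, t, dt)
            t += dt
        print(y)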

  8. ASAMgpu V1.0 – a moist fully compressible atmospheric model using graphics processing units (GPUs

    Directory of Open Access Journals (Sweden)

    S. Horn

    2011-10-01

    Full Text Available In this work the three-dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence, OpenGL and GLSL are used, so that the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication, allowing the usage of more than one GPU through domain decomposition. Time integration is done with an explicit three-step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. The results for four test cases are shown in this paper: a rising dry heat bubble, a cold-bubble-induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.

  9. A PDP model of the simultaneous perception of multiple objects

    Science.gov (United States)

    Henderson, Cynthia M.; McClelland, James L.

    2011-06-01

    Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.

  10. A Biological Hierarchical Model Based Underwater Moving Object Detection

    Directory of Open Access Journals (Sweden)

    Jie Shen

    2014-01-01

    Full Text Available Underwater moving object detection is the key to many underwater computer vision tasks, such as object recognition, locating, and tracking. Given the remarkable visual sensing abilities of underwater animals, the visual mechanisms of aquatic species are generally regarded as cues for establishing bionic models that are better adapted to underwater environments. However, low accuracy rates and the absence of prior knowledge learning limit their adoption in underwater applications. Aiming to solve the problems caused by inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing pattern of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. Firstly, the image is segmented into several subblocks. The intensity information is extracted to establish a background model which can roughly identify the object and background regions. The texture feature of each pixel in the rough object region is further analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method gives a better performance. Compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results.

  11. Behavioral models as theoretical frames to analyze the business objective

    Directory of Open Access Journals (Sweden)

    Hernán Alonso Bafico

    2015-12-01

    Full Text Available This paper examines Pfeffer’s Models of Behavior and connects each of them with attributes of the definition of the firm’s objective, assumed to be the maximization of the sustainable, long-term value of the residual claims. Each of the five models of behavior (rational, social, moral, retrospective and cognitive) contributes to the decision making and goal setting processes with its particular and complementary elements, from those assuming complete rationality and frictionless markets to the models emphasizing the role of ethical positions and the presence of perceptive and cognitive mechanisms. The analysis highlights the main contributions of critical theories and models of behavior, underlining their focus on non-traditional variables, regarded as critical inputs for goal setting processes and for designing alternative executive incentive schemes. The explicit consideration of those variables does not indicate the need for a new definition of the corporate objective. The maximization of the long-term value of the shareholders’ claims still defines the relevant objective function of the firm, remaining the main yardstick of corporate performance. Behavioral models are recognized as important tools to help managers direct their attention to long-term strategies. In the last part, we comment on the relationship between the objective function and behavioral models from the practitioners’ perspective. Key words: Firm Objectives, Behavioral Models, Value Maximization, Stakeholder Theory.

  12. A model-view-controller approach to object persistence

    Energy Technology Data Exchange (ETDEWEB)

    Heinckiens, P.; Tromp, H.; Hoffman, G. [Univ. of Ghent, Gent (Belgium)]

    1995-12-31

    The model-view-controller framework (MVC) is discussed to some extent, first within the context of a user interface to a business model. After a new, generic definition of object persistence is presented, it is shown that the concept of persistence, according to that definition, can also be fitted within a model-view-controller scheme. That leads to a unified MVC concept, which can easily be generalized further to other related issues, such as interprocess communication and other forms of communication with the outside world. SCOOP (Scalable Object Oriented Persistence), a set of C++ classes which support object persistence, is presented. It is shown how it offers a seamless migration path from more traditional to full-blown object-oriented database systems. The position of SCOOP within the MVC architecture is discussed.

  13. AN OBJECT ORIENTED MODEL SCHEDULING FOR MEDIA-SOC

    Institute of Scientific and Technical Information of China (English)

    Cheng Xingmei; Yao Yingbiao; Zhang Yixiong; Liu Peng; Yao Qingdong

    2009-01-01

    This paper proposes an object-oriented model scheduling for parallel computing in media MultiProcessor System on Chip (MPSoC). Firstly, the Coarse Grain Data Flow Graph (CGDFG) parallel programming model is used in this approach. Secondly, this approach provides a unified abstraction for software objects implemented in processors and hardware objects implemented in ASICs, making it easy to map CGDFG programs onto the MPSoC. This approach cuts down the kernel overhead and reduces the code size effectively. The principle of the object-oriented model, the method of scheduling, and how to map a parallel program through CGDFG to the MPSoC are analyzed. This approach also compares the code size and execution cycles with conventional control-flow scheduling, and presents the respective management overhead for one application in a media SoC.

  14. Superimposing of virtual graphics and real image based on 3D CAD information

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Proposes methods of transforming 3D CAD models into 2D graphics, recognizing 3D objects by features, and superimposing a virtual environment (VE) built in the computer onto real images taken by a CCD camera, and presents computer simulation results.

  15. Modeling the Visual and Linguistic Importance of Objects

    Directory of Open Access Journals (Sweden)

    Moreno Ignazio Coco

    2012-05-01

    Full Text Available Previous work measuring the visual importance of objects has shown that only spatial information, such as object position and size, is predictive of importance, whilst low-level visual information, such as saliency, is not (Spain and Perona 2010, IJCV 91, 59–76). Objects are not important solely on the basis of their appearance. Rather, they are important because of their contextual information (eg, a pen in an office versus in a bathroom), which is needed in tasks requiring cognitive control (eg, visual search; Henderson 2007, PsySci 16, 219–222). Given that most visual objects have a linguistic counterpart, their importance depends also on linguistic information, especially in tasks where language is actively involved—eg, naming. In an eye-tracking naming study, where participants are asked to name 5 objects in a scene, we investigated how visual saliency, contextual features, and linguistic information of the mentioned objects predicted their importance. We measured object importance based on the urn model of Spain and Perona (2010) and estimated the predictive role of visual and linguistic features using different regression frameworks: LARS (Efron et al 2004, Annals of Statistics 32, 407–499) and LME (Baayen et al 2008, JML 59, 390–412). Our results confirmed the role of spatial information in predicting object importance, and in addition, we found effects of saliency. Crucially to our hypothesis, we demonstrated that the lexical frequency of objects and their contextual fit in the scene significantly contributed to object importance.

  16. GSMNet: A Hierarchical Graph Model for Moving Objects in Networks

    Directory of Open Access Journals (Sweden)

    Hengcai Zhang

    2017-03-01

    Full Text Available Existing data models for moving objects in networks are often limited in flexibly controlling the granularity of network representation and the cost of location updates, and do not encompass semantic information such as traffic states, traffic restrictions and social relationships. In this paper, we aim to fill the gap of traditional network-constrained models and propose a hierarchical graph model called the Geo-Social-Moving model for moving objects in Networks (GSMNet) that adopts four graph structures, RouteGraph, SegmentGraph, ObjectGraph and MoveGraph, to represent the underlying networks, trajectories and semantic information in an integrated manner. A set of user-defined data types and corresponding operators is proposed to handle moving objects and answer a new class of queries supporting three kinds of conditions: spatial, temporal and semantic information. Then, we develop a prototype system with the native graph database system Neo4j to implement the proposed GSMNet model. In the experiment, we conduct a performance evaluation using simulated trajectories generated from the BerlinMOD (Berlin Moving Objects Database) benchmark and compare with the mature MOD system Secondo. The results of 17 benchmark queries demonstrate that our proposed GSMNet model has strong potential to reduce time-consuming table join operations and shows remarkable advantages with regard to representing semantic information and controlling the cost of location updates.

  17. Archive Design Based on Planets Inspired Logical Object Model

    DEFF Research Database (Denmark)

    Zierau, Eld; Johansen, Anders

    2008-01-01

    We describe a proposal for a logical data model based on preliminary work in the Planets project. In OAIS terms, the main areas discussed are related to the introduction of a logical data model for representing the past, present and future versions of the digital object associated with the Archival

  18. Archive Design Based on Planets Inspired Logical Object Model

    DEFF Research Database (Denmark)

    Zierau, Eld; Johansen, Anders

    2008-01-01

    We describe a proposal for a logical data model based on preliminary work in the Planets project. In OAIS terms, the main areas discussed are related to the introduction of a logical data model for representing the past, present and future versions of the digital object associated with the Archival St...

  19. Null Objects in Second Language Acquisition: Grammatical vs. Performance Models

    Science.gov (United States)

    Zyzik, Eve C.

    2008-01-01

    Null direct objects provide a favourable testing ground for grammatical and performance models of argument omission. This article examines both types of models in order to determine which gives a more plausible account of the second language data. The data were collected from second language (L2) learners of Spanish by means of four oral…

  20. A methodology to calibrate pedestrian walker models using multiple objectives

    NARCIS (Netherlands)

    Campanella, M.C.; Daamen, W.; Hoogendoorn, S.P.

    2012-01-01

    The application of walker models to simulate real situations requires accuracy in several traffic situations. One strategy to obtain a generic model is to calibrate the parameters in several situations using multiple objective functions in the optimization process. In this paper, we propose a general

  1. Compact objects from gravitational collapse: an analytical toy model

    Energy Technology Data Exchange (ETDEWEB)

    Malafarina, Daniele [Nazarbayev University, Department of Physics, Astana (Kazakhstan)]; Joshi, Pankaj S. [Tata Institute of Fundamental Research, Mumbai (India)]

    2015-12-15

    We develop here a procedure to obtain regular static configurations resulting from dynamical gravitational collapse of a massive matter cloud in general relativity. Under certain general physical assumptions for the collapsing cloud, we find the class of dynamical models that lead to an equilibrium configuration. To illustrate this, we provide a class of perfect fluid collapse models that lead to a static constant-density object in the limit. We suggest that similar models might possibly constitute the basis for the description of formation of compact objects in nature. (orig.)

  2. Image-based modeling of objects and human faces

    Science.gov (United States)

    Zhang, Zhengyou

    2000-12-01

    This paper provides an overview of our project on 3D object and face modeling from images taken by a free-moving camera. We strive to advance the state of the art in 3D computer vision and to develop flexible and robust techniques for ordinary users to gain 3D experience from a set of casually collected 2D images. Applications include product advertisement on the Web, virtual conferencing, and interactive games. We briefly cover the following topics: camera calibration, stereo rectification, image matching, 3D photo editing, object modeling, and face modeling. Demos on the last three topics will be shown during the conference.

  3. Constructing Multidatabase Collections Using Extended ODMG Object Model

    Directory of Open Access Journals (Sweden)

    Adrian Skehill; Mark Roantree

    1999-11-01

    Full Text Available Collections are an important feature in database systems. They provide us with the ability to group objects of interest together, and then to manipulate them in the required fashion. The OASIS project is focused on the construction of a multidatabase prototype which uses the ODMG model and a canonical model. As part of this work we have extended the base model to provide a more powerful collection mechanism, and to permit the construction of a federated collection, a collection of heterogeneous objects taken from distributed data sources.

  4. The Aalborg Model and management by objectives and resources

    DEFF Research Database (Denmark)

    Qvist, Palle; Spliid, Claus Monrad

    2010-01-01

    The reason why the Aalborg Model is successful has never been subject to a scientific study. An educational program in an HEI (Higher Education Institution) can be seen and understood as a system managed by objectives (MBO) within a given resource frame and based on an “agreement” between the student and the study board.... The student must achieve the objectives decided by the study board, and that achievement is then documented with an exam. The study board supports the student with resources which help them to fulfill the objectives. When the resources are divided into human, material and methodological resources... it is observed that the allocation of resources to the students in the Aalborg Model differs from the allocation in a more conventional model often used in HEIs. Students in the Aalborg Model are supported with resources which make a difference. This article focuses on the introduction of project management...

  5. Scale Problems in Geometric-Kinematic Modelling of Geological Objects

    Science.gov (United States)

    Siehl, Agemar; Thomsen, Andreas

    To reveal, to render and to handle complex geological objects and their history of structural development, appropriate geometric models have to be designed. Geological maps, sections, sketches of strain and stress patterns are such well-known analogous two-dimensional models. Normally, the set of observations and measurements supporting them is small in relation to the complexity of the real objects they derive from. Therefore, modelling needs guidance by additional expert knowledge to bridge empty spaces which are not supported by data. Generating digital models of geological objects has some substantial advantages compared to conventional methods, especially if they are supported by an efficient database management system. Consistent 3D models of some complexity can be created, and experiments with time-dependent geological geometries may help to restore coherent sequences of paleogeological states. In order to cope with the problems arising from the combined usage of 3D-geometry models of different scale and resolution within an information system on subsurface geology, geometrical objects need to be annotated with information on the context, within which the geometry model has been established and within which it is valid, and methods supporting storage and retrieval as well as manipulation of geometry at different scales must also take into account and handle such context information to achieve meaningful results. An example is given of a detailed structural study of an open pit lignite mine in the Lower Rhine Basin.

  6. Neighborhood Supported Model Level Fuzzy Aggregation for Moving Object Segmentation.

    Science.gov (United States)

    Chiranjeevi, Pojala; Sengupta, Somnath

    2014-02-01

    We propose a new algorithm for moving object detection in the presence of challenging dynamic background conditions. We use a set of fuzzy aggregated multifeature similarity measures applied on multiple models corresponding to multimodal backgrounds. The algorithm is enriched with a neighborhood-supported model initialization strategy for faster convergence. A model-level fuzzy aggregation measure driven background model maintenance ensures more robustness. Similarity functions are evaluated between the corresponding elements of the current feature vector and the model feature vectors. Concepts from Sugeno and Choquet integrals are incorporated in our algorithm to compute fuzzy similarities from the ordered similarity function values for each model. Model updating and the foreground/background classification decision are based on the set of fuzzy integrals. Our proposed algorithm is shown to outperform other multi-model background subtraction algorithms. The proposed approach completely avoids explicit offline training to initialize the background model and can be initialized even when moving objects are present. The feature space uses a combination of intensity and statistical texture features for better object localization and robustness. Our qualitative and quantitative studies illustrate the mitigation of a variety of challenging situations by our approach.
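
    A minimal sketch of the discrete Choquet integral used to fuse ordered feature similarities is shown below; the three features and the fuzzy measure values are illustrative only, not those used in the paper.

        def choquet_integral(values, measure):
            # Discrete Choquet integral of feature-similarity values with respect to a
            # fuzzy measure; `measure` maps frozensets of feature names to [0, 1].
            items = sorted(values.items(), key=lambda kv: kv[1])   # ascending similarities
            names = [k for k, _ in items]
            total, prev = 0.0, 0.0
            for i, (name, v) in enumerate(items):
                remaining = frozenset(names[i:])                   # features with value >= v
                total += (v - prev) * measure[remaining]
                prev = v
            return total

        # Illustrative fuzzy measure over three features used for background matching.
        measure = {
            frozenset(): 0.0,
            frozenset({"intensity"}): 0.4, frozenset({"texture"}): 0.4, frozenset({"color"}): 0.3,
            frozenset({"intensity", "texture"}): 0.8, frozenset({"intensity", "color"}): 0.6,
            frozenset({"texture", "color"}): 0.6,
            frozenset({"intensity", "texture", "color"}): 1.0,
        }
        similarities = {"intensity": 0.9, "texture": 0.7, "color": 0.5}
        print(choquet_integral(similarities, measure))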

  7. An Empirical Study of Efficiency and Accuracy of Probabilistic Graphical Models

    DEFF Research Database (Denmark)

    Nielsen, Jens Dalgaard; Jaeger, Manfred

    2006-01-01

    In this paper we compare Naïve Bayes (NB) models, general Bayes Net (BN) models and Probabilistic Decision Graph (PDG) models w.r.t. accuracy and efficiency. As the basis for our analysis we use graphs of size vs. likelihood that show the theoretical capabilities of the models. We also measure

  8. Space Object Tracking Method Based on a Snake Model

    Science.gov (United States)

    Zhan-wei, Xu; Xin, Wang

    2016-04-01

    In this paper, aiming at the problem of unstable tracking of low-orbit, variable and bright space objects, an improved GVF (Gradient Vector Flow) Snake algorithm based on an active contour model is proposed to realize real-time search of the real object contour on the CCD image. Combined with a Kalman filter for prediction, a new adaptive tracking method is proposed for space objects. Experiments show that this method can overcome the tracking error caused by a fixed window and improve tracking robustness.
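
    The prediction step mentioned above can be illustrated with a standard constant-velocity Kalman filter; the matrices and measurements below are generic placeholders rather than the paper's tuned values, and the snake-based contour search itself is not shown.

        import numpy as np

        # Constant-velocity Kalman filter used to predict the search window for the
        # contour search on the next frame (all matrices are illustrative).
        dt = 1.0
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        Q = np.eye(4) * 1e-2          # process noise
        R = np.eye(2) * 1.0           # measurement noise (contour centroid jitter)

        def kalman_step(x, P, z):
            # Predict where the object will be, then correct with the contour centroid z.
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(4) - K @ H) @ P_pred
            return x_new, P_new

        x, P = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
        for z in ([1.1, 0.4], [2.0, 1.1], [2.9, 1.6]):   # centroids from the contour search
            x, P = kalman_step(x, P, np.array(z))
        print(x[:2])                                     # estimated object position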

  9. Study and Application of Objective Evaluation Model on Fabric Style

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on 31 fabric property parameters tested by the FAST test system and other test instruments, the principal factors of fabric style are obtained through the principal factor analysis method and a computer program. According to the correlation between each parameter and principal factor, and the selected positive or negative coefficient, the objective evaluation model of fabric style has been established based on the percentage of variance. Wool fabrics are taken as an example to show how to use the objective evaluation model for fabric design.

  10. Inventory Allocation for Online Graphical Display Advertising

    CERN Document Server

    Yang, Jian; Vassilvitskii, Sergei; Tomlin, John; Shanmugasundaram, Jayavel; Anastasakos, Tasos; Kennedy, Oliver

    2010-01-01

    We discuss a multi-objective/goal programming model for the allocation of inventory of graphical advertisements. The model considers two types of campaigns: guaranteed delivery (GD), which are sold months in advance, and non-guaranteed delivery (NGD), which are sold using real-time auctions. We investigate various advertiser and publisher objectives such as (a) revenue from the sale of impressions, clicks and conversions, (b) future revenue from the sale of NGD inventory, and (c) "fairness" of allocation. While the first two objectives are monetary, the third is not. This combination of demand types and objectives leads to potentially many variations of our model, which we delineate and evaluate. Our experimental results, which are based on optimization runs using real data sets, demonstrate the effectiveness and flexibility of the proposed model.

  11. Object-oriented model of railway stations operation

    Directory of Open Access Journals (Sweden)

    D.M. Kozachenko

    2013-08-01

    Full Text Available Purpose. The purpose of this article is to improve the functional model of railway stations, reducing the time needed to formalize the technological processes of their work through the use of standard elements of technology. Methodology. Technological operations, executives and technology objects are considered as the main elements of railway station functioning. Queuing techniques, simulation, finite state machines and object-oriented analysis were used as research methods. Findings. Formal data structures were developed as the result of the research that allow simulating the operation of a railway station with any degree of detail. In accordance with the principles of the object-oriented approach, separate elements of station technology are presented in the developed model jointly with a description of their behavior. The proposed model is implemented as a software package. Originality. The functional model of railway stations was improved through the application of an object-oriented approach to data management. It allows creating libraries of elementary technological processes and reduces the time needed to formalize the technology of station work. Practical value. Using the software package developed on the basis of the proposed model will reduce the time technologists spend to obtain technical and operational assessments of projected and existing rail stations.

  12. A Graphic Overlay Method for Selection of Osteotomy Site in Chronic Radial Head Dislocation: An Evaluation of 3D-printed Bone Models.

    Science.gov (United States)

    Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young

    2017-03-01

    Three-dimensional (3D) computed tomography imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery based on traditional radiographs, but they also had full sets of 3D CT imaging done both before and after their surgery: these 3D CT images form the basis for this study. From the 3D CT images, each patient generated 3 sets of 3D-printed bone models: 2 copies of the preoperative condition, and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. Arcs of rotation of the 3 sets of 3D-printed bone models were then compared. Arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized accordingly to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). Three-dimensional-printed bone models suggest that this approach could improve range of motion of the forearm in actual surgical practice. Level IV-therapeutic study.

  13. Construction and Analysis of Three-dimensional Graphic Model of Single-chain Fv Derived from an Anti-human Placental Acidic Isoferritin Monoclonal Antibody by Computer

    Institute of Scientific and Technical Information of China (English)

    ZHOU Chun; SHEN Guanxin; ZHU Huifen; YANG Jing; ZHANG Yue; FENG Jiannan; SHEN Beifen

    2000-01-01

    A three-dimensional (3D) graphic model of a single-chain Fv (scFv) which was derived from an anti-human placental acidic isoferritin (PAF) monoclonal antibody (Mab) was constructed by a homologous protein-structure-predicting computer algorithm on a Silicon Graphics workstation. The structure, surface static electricity and hydrophobicity of the scFv were investigated. Computer graphic modelling indicated that all regions of the scFv, including the linker and the variable regions of the heavy (VH) and light (VL) chains, were suitable. The VH region and the VL region were involved in composing the "hydrophobic pocket". The linker drifted away from the VH and VL regions. The complementarity determining regions (CDRs) of the VH and VL regions surrounded the "hydrophobic pocket". This study provides a theoretical basis for improving antibody affinity, investigating antibody structure and analyzing the functions of the VH and VL regions in antibody activity.

  14. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    Full Text Available The article shows the relevance of object-oriented programming approaches when modeling pneumatic units (PU). Based on the analysis of the calculation schemes of pneumatic system units, two basic objects, namely a flow cavity and a material point, were highlighted. Basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy with the flows of mass. Cavity-point interaction: force interaction, exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions. The models and interactions of elements are implemented using object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base classes. These classes implement the models of the flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations for the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e. calculation of the method coefficients. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional list. An iterator loop initiates the integration tact of all the objects in the list. Every fourth iteration makes the transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the authors' method features easy enhancement, code reuse, high reliability
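
    A compact sketch of the per-object integration "tact" idea is given below for a single material point under a constant force; the class name, travel limit and stage bookkeeping are illustrative assumptions and far simpler than the actual PU element models.

        # Each object advances one Runge-Kutta stage per "tact"; the driver loop cycles
        # through all objects four times per time step and stops on a shutdown flag.

        class MassPoint:
            def __init__(self, x=0.0, v=0.0, force=1.0, mass=1.0):
                self.x, self.v, self.force, self.mass = x, v, force, mass
                self._k = []                       # accumulated RK4 stage slopes
                self.shutdown = False

            def tact(self, dt):
                # Compute one RK4 stage; on the fourth call combine the stages.
                stage = len(self._k)
                shift = (0.0, 0.5, 0.5, 1.0)[stage]
                v_stage = self.v + shift * dt * (self._k[-1][1] if self._k else 0.0)
                self._k.append((v_stage, self.force / self.mass))   # (dx/dt, dv/dt); force is constant here
                if stage == 3:                                       # all four stages done
                    wx = sum(w * k[0] for w, k in zip((1, 2, 2, 1), self._k)) / 6.0
                    wv = sum(w * k[1] for w, k in zip((1, 2, 2, 1), self._k)) / 6.0
                    self.x += dt * wx
                    self.v += dt * wv
                    self._k.clear()
                    if self.x > 0.1:                                 # travel limit reached
                        self.shutdown = True

        objects, dt = [MassPoint()], 0.01
        while not any(o.shutdown for o in objects):
            for _ in range(4):                     # four tacts = one full integration step
                for obj in objects:
                    obj.tact(dt)
        print(objects[0].x, objects[0].v)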

  15. Modeling and Multi-objective Optimization of Refinery Hydrogen Network

    Institute of Scientific and Technical Information of China (English)

    焦云强; 苏宏业; 廖祖维; 侯卫锋

    2011-01-01

    The demand for hydrogen in oil refineries is increasing under market forces and environmental legislation, so hydrogen network management is becoming increasingly important in refineries. Most studies have focused on the single-objective optimization problem for the hydrogen network, but few account for the multi-objective optimization problem. This paper presents a novel approach for modeling and multi-objective optimization of the hydrogen network in refineries. An improved multi-objective optimization model is proposed based on the concept of a superstructure. The optimization includes minimization of operating cost and minimization of the investment cost of equipment. The proposed methodology for the multi-objective optimization of the hydrogen network takes into account flow rate constraints, pressure constraints, purity constraints, impurity constraints, payback period, etc. The method considers all the feasible connections and subjects this to mixed-integer nonlinear programming (MINLP). A deterministic optimization method is applied to solve this multi-objective optimization problem. Finally, a real case study is introduced to illustrate the applicability of the approach.

  16. Adaptive mixture observation models for multiple object tracking

    Institute of Scientific and Technical Information of China (English)

    CUI Peng; SUN LiFeng; YANG ShiQiang

    2009-01-01

    Multiple object tracking (MOT) poses many difficulties to conventional well-studied single object tracking (SOT) algorithms, such as severe expansion of the configuration space, high complexity of motion conditions, and visual ambiguities among nearby targets, among which the visual ambiguity problem is the central challenge. In this paper, we address this problem by embedding adaptive mixture observation models (AMOM) into a mixture tracker which is implemented in the Particle Filter framework. In AMOM, the extracted multiple features for appearance description are combined according to their discriminative power between ambiguity-prone objects, where the discriminability of features is evaluated by online entropy-based feature selection techniques. The introduction of AMOM can help to surmount the incapability of conventional mixture trackers in handling object occlusions, and meanwhile retain their merits of flexibility and high efficiency. The final experiments show significant improvement in MOT scenarios compared with other methods.

  17. C++, object-oriented programming, and astronomical data models

    Science.gov (United States)

    Farris, A.

    1992-01-01

    Contemporary astronomy is characterized by increasingly complex instruments and observational techniques, higher data collection rates, and large data archives, placing severe stress on software analysis systems. The object-oriented paradigm represents a significant new approach to software design and implementation that holds great promise for dealing with this increased complexity. The basic concepts of this approach will be characterized in contrast to more traditional procedure-oriented approaches. The fundamental features of object-oriented programming will be discussed from a C++ programming language perspective, using examples familiar to astronomers. This discussion will focus on objects, classes and their relevance to the data type system; the principle of information hiding; and the use of inheritance to implement generalization/specialization relationships. Drawing on the object-oriented approach, features of a new database model to support astronomical data analysis will be presented.

  18. Objective Bayesian Comparison of Constrained Analysis of Variance Models.

    Science.gov (United States)

    Consonni, Guido; Paroli, Roberta

    2016-10-04

    In the social sciences we are often interested in comparing models specified by parametric equality or inequality constraints. For instance, when examining three group means [Formula: see text] through an analysis of variance (ANOVA), a model may specify that [Formula: see text], while another one may state that [Formula: see text], and finally a third model may instead suggest that all means are unrestricted. This is a challenging problem, because it involves a combination of nonnested models, as well as nested models having the same dimension. We adopt an objective Bayesian approach, requiring no prior specification from the user, and derive the posterior probability of each model under consideration. Our method is based on the intrinsic prior methodology, suitably modified to accommodate equality and inequality constraints. Focussing on normal ANOVA models, a comparative assessment is carried out through simulation studies. We also present an application to real data collected in a psychological experiment.

  19. Development of the Object-Oriented Dynamic Simulation Models Using Visual C++ Freeware

    Directory of Open Access Journals (Sweden)

    Alexander I. Kozynchenko

    2016-01-01

    Full Text Available The paper mostly focuses on the methodological and programming aspects of developing a versatile desktop framework to provide the available basis for the high-performance simulation of dynamical models of different kinds and for diverse applications. So the paper gives some basic structure for creating a dynamical simulation model in C++ which is built on the Win32 platform with an interactive multiwindow interface and uses the lightweight Visual C++ Express as a free integrated development environment. The resultant simulation framework could be a more acceptable alternative to other solutions developed on the basis of commercial tools like Borland C++ or Visual C++ Professional, not to mention the domain specific languages and more specialized ready-made software such as Matlab, Simulink, and Modelica. This approach seems to be justified in the case of complex research object-oriented dynamical models having nonstandard structure, relationships, algorithms, and solvers, as it allows developing solutions of high flexibility. The essence of the model framework is shown using a case study of simulation of moving charged particles in the electrostatic field. The simulation model possesses the necessary visualization and control features such as an interactive input, real time graphical and text output, start, stop, and rate control.
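
    The framework itself is C++/Win32, but the dynamics of the case study can be illustrated with a few lines of standalone code; the charge, mass, field strength and the explicit Euler stepping below are assumptions made for the sake of the sketch.

        import numpy as np

        # A charged particle stepped through a uniform electrostatic field; the real
        # framework adds interactive control and graphical output around this core.
        q, m = 1.0, 1.0                      # charge and mass (arbitrary units)
        E = np.array([0.0, -1.0])            # uniform field vector
        pos = np.array([0.0, 10.0])
        vel = np.array([1.0, 0.0])
        dt = 0.01

        for step in range(1000):
            acc = q * E / m                  # F = qE
            vel = vel + acc * dt             # explicit Euler; a production model may use
            pos = pos + vel * dt             # a higher-order solver
            if pos[1] <= 0.0:                # stop when the particle reaches the lower plate
                break
        print(step, pos)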

  20. Reconstructing Wireframe Model of Curvilinear Objects from Three Orthographic Views

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    An approach for reconstructing wireframe models of curvilinear objects from three orthographic views is discussed in this paper. The method for generating 3D conic edges from 2D projection conic curves is emphasized especially, which is the pivotal work for reconstructing curvilinear objects from three orthographic views. In order to generate 3D conic edges, a five-point method is firstly utilized to obtain the algebraic representations of all 2D-projection curves in each view, and then all algebraic forms are converted to the corresponding geometric forms analytically. Thus the locus of a 3D conic edge can be derived from the geometric forms of the relevant conic curves in three views. Finally, the wireframe model is created after eliminating all redundant elements generated in previous reconstruction process. The approach extends the range of objects to be reconstructed and imposes no restriction on the axis of the quadric surface.
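
    A five-point conic fit of the kind referred to above can be sketched as a null-space computation on the algebraic conic equation; the sample points below are chosen to lie on a known ellipse purely for illustration.

        import numpy as np

        def conic_through_points(points):
            # Algebraic form a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 of the conic
            # through five points, taken as the null space of the design matrix.
            A = np.array([[x * x, x * y, y * y, x, y, 1.0] for x, y in points])
            _, _, vt = np.linalg.svd(A)
            return vt[-1]                      # coefficients, determined up to scale

        # Five projected points of an elliptical edge in one orthographic view.
        pts = [(2.0, 0.0), (-2.0, 0.0), (0.0, 1.0), (0.0, -1.0), (1.6, 0.6)]
        coeffs = conic_through_points(pts)
        print(coeffs / coeffs[0])              # expect the x^2 + 4*y^2 - 4 = 0 form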

  1. The Aalborg Model and management by objectives and resources

    DEFF Research Database (Denmark)

    Qvist, Palle; Spliid, Claus Monrad

    2010-01-01

    The Aalborg Model has proven to be a successful learning method on at least 3 accounts: high completion rates, the providing of graduates that are highly valued in the labour market and its suitability for young people from homes without an academic background. But the reason why the Aalborg Model is successful has never been subject to a scientific study. An educational program in an HEI (Higher Education Institution) can be seen and understood as a system managed by objectives (MBO) within a given resource frame and based on an “agreement” between the student and the study board.... The student must achieve the objectives decided by the study board and that achievement is then documented with an exam. The study board supports the student with resources which helps them to fulfill the objectives. When the resources are divided into human, material and methodological resources...

  2. A process-oriented data model for fuzzy spatial objects

    NARCIS (Netherlands)

    Cheng, T.

    1999-01-01

    The complexity of the natural environment, its polythetic and dynamic character, requires appropriate new methods to represent it in GISs, if only because in the past there has been a tendency to force reality into sharp and static objects. A more generalized spatio-temporal data model is required t

  3. THEORETICAL MODEL OF VIBRATING OBJECT TRANSMITTING NOISE TOWARDS EXTERNAL SOUND

    Institute of Scientific and Technical Information of China (English)

    姚志远

    2002-01-01

    On the basis of the modal method theory, the coupling relation between the vibration of objects and external sound was analyzed, a theoretical model for solving the vibration and noise was provided, and the corresponding calculation formula was given. The calculation results show that this calculation formula is correct.

  4. Object Oriented Toolbox for Modelling and Simulation of Dynamical Systems

    DEFF Research Database (Denmark)

    Poulsen, Mikael Zebbelin; Wagner, Falko Jens; Thomsen, Per Grove

    1998-01-01

    This paper presents the results of an ongoing project, dealing with design and implementation of a simulation toolbox based on object oriented modelling techniques. The paper describes an experimental implementation of parts of such a toolbox in C++, and discusses the experiences drawn from...

  5. Graphic Methods for Interpreting Longitudinal Dyadic Patterns From Repeated-Measures Actor-Partner Interdependence Models

    DEFF Research Database (Denmark)

    Perry, Nicholas; Baucom, Katherine; Bourne, Stacia

    2017-01-01

    Researchers commonly use repeated-measures actor–partner interdependence models (RM-APIM) to understand how romantic partners change in relation to one another over time. However, traditional interpretations of the results of these models do not fully or correctly capture the dyadic temporal...

  6. Forecasting Multivariate Road Traffic Flows Using Bayesian Dynamic Graphical Models, Splines and Other Traffic Variables

    NARCIS (Netherlands)

    Anacleto, Osvaldo; Queen, Catriona; Albers, Casper J.

    2013-01-01

    Traffic flow data are routinely collected for many networks worldwide. These invariably large data sets can be used as part of a traffic management system, for which good traffic flow forecasting models are crucial. The linear multiregression dynamic model (LMDM) has been shown to be promising for f

  7. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    Science.gov (United States)

    Alexander, K.; Easterbrook, S. M.

    2015-04-01

    We analyze the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams that show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modeling groups. These diagrams offer insights into the similarities and differences in structure between climate models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  8. Production Subsystem Database Design with a Semantic Object Model Approach

    Directory of Open Access Journals (Sweden)

    Oviliani Yenty Yuliana

    2002-01-01

    Full Text Available To compete in the global market, businesses operating in industry must obtain information quickly and accurately so that they can make precise decisions. Traditional cost accounting systems cannot provide sufficient information, so many industries have shifted to Activity-Based Costing (ABC) systems. An ABC system is more complex and needs more data to be stored and processed than a traditional cost accounting system, so it requires information technology and a database. Recent developments in software technology mean that building the application program is no longer the problem; the primary problem is how to design a database that presents information quickly and accurately. For that reason it is necessary to build a model first. This paper discusses database modelling with the semantic object model approach. This model is easier to use and generates a more normalized database design than the commonly used entity-relationship model approach.

  9. Specifying Usage Control Model with Object Constraint Language

    Directory of Open Access Journals (Sweden)

    Min Li

    2013-02-01

    Full Text Available The recent usage control model (UCON) is a foundation for next-generation access control models with the distinguishing properties of decision continuity and attribute mutability. Constraints in UCON are among the most important components involved in the principal motivations of usage analysis and design. The importance of constraints associated with authorizations, obligations, and conditions in UCON has been recognized, but modeling these constraints has not received much attention. In this paper we use a de facto constraint specification language from software engineering to analyze the constraints in the UCON model. We show how to represent constraints with the Object Constraint Language (OCL) and give a formalized specification of the UCON model which is built from basic constraints, such as authorization predicates, obligation actions and condition requirements. Further, we show the flexibility and expressive capability of this specified UCON model with extensive examples.

  10. A flexible object-based software framework for modeling complex systems with interacting natural and societal processes.

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J. H.

    2000-06-15

    The Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations. The DIAS infrastructure makes it feasible to build and manipulate complex simulation scenarios in which many thousands of objects can interact via dozens to hundreds of concurrent dynamic processes. The flexibility and extensibility of the DIAS software infrastructure stem mainly from (1) the abstraction of object behaviors, (2) the encapsulation and formalization of model functionality, and (3) the mutability of domain object contents. DIAS simulation objects are inherently capable of highly flexible and heterogeneous spatial realizations. Geospatial graphical representation of DIAS simulation objects is addressed via the GeoViewer, an object-based GIS toolkit application developed at ANL. DIAS simulation capabilities have been extended by inclusion of societal process models generated by the Framework for Addressing Cooperative Extended Transactions (FACET), another object-based framework developed at Argonne National Laboratory. By using FACET models to implement societal behaviors of individuals and organizations within larger DIAS-based natural systems simulations, it has become possible to conveniently address a broad range of issues involving interaction and feedback among natural and societal processes. Example DIAS application areas discussed in this paper include a dynamic virtual oceanic environment, detailed simulation of clinical, physiological, and logistical aspects of health care delivery, and studies of agricultural sustainability of urban centers under environmental stress in ancient Mesopotamia.

  11. Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.

    Science.gov (United States)

    Wang, Panqu; Gauthier, Isabel; Cottrell, Garrison

    2016-04-01

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and to an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain general ability v as the available computational resources (number of hidden units) in the mapping from input to label and experience as the frequency of individual exemplars in an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that

  12. A graphical simulation model of the entire DNA process associated with the analysis of short tandem repeat loci.

    Science.gov (United States)

    Gill, Peter; Curran, James; Elliot, Keith

    2005-01-01

    The use of expert systems to interpret short tandem repeat DNA profiles in forensic, medical and ancient DNA applications is becoming increasingly prevalent as high-throughput analytical systems generate large amounts of data that are time-consuming to process. With special reference to low copy number (LCN) applications, we use a graphical model to simulate stochastic variation associated with the entire DNA process starting with extraction of sample, followed by the processing associated with the preparation of a PCR reaction mixture and PCR itself. Each part of the process is modelled with input efficiency parameters. Then, the key output parameters that define the characteristics of a DNA profile are derived, namely heterozygote balance (Hb) and the probability of allelic drop-out p(D). The model can be used to estimate the unknown efficiency parameters, such as pi(extraction). 'What-if' scenarios can be used to improve and optimize the entire process, e.g. by increasing the aliquot forwarded to PCR, the improvement expected to a given DNA profile can be reliably predicted. We demonstrate that Hb and drop-out are mainly a function of stochastic effect of pre-PCR molecular selection. Whole genome amplification is unlikely to give any benefit over conventional PCR for LCN.
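
    A toy Monte Carlo version of such a pipeline, with binomial sampling at the extraction, aliquoting and PCR stages, is sketched below; all efficiency parameters, cycle counts and the detection threshold are illustrative assumptions and are not taken from the paper.

        import numpy as np
        rng = np.random.default_rng(1)

        def simulate_profile(n_cells=20, pi_extraction=0.6, pi_aliquot=0.3,
                             pi_pcr_eff=0.8, cycles=28, threshold=2000):
            # Toy stochastic model of extraction -> aliquot -> PCR for one heterozygous
            # locus; each diploid cell contributes one copy of each allele.
            profile = []
            for allele in ("A", "B"):
                extracted = rng.binomial(n_cells, pi_extraction)    # molecules surviving extraction
                in_pcr = rng.binomial(extracted, pi_aliquot)        # molecules forwarded to PCR
                copies = in_pcr
                for _ in range(cycles):                             # stochastic PCR amplification
                    copies += rng.binomial(copies, pi_pcr_eff)
                profile.append(copies)
            a, b = profile
            heterozygote_balance = min(a, b) / max(a, b) if max(a, b) else 0.0
            dropout = a < threshold or b < threshold
            return heterozygote_balance, dropout

        print([simulate_profile() for _ in range(3)])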

  13. Relativistic models of a class of compact objects

    Indian Academy of Sciences (India)

    Rumi Deb; Bikash Chandra Paul; Ramesh Tikekar

    2012-08-01

    A class of general relativistic solutions in isotropic spherical polar coordinates which describe compact stars in hydrostatic equilibrium is discussed. The stellar models obtained here are characterized by four parameters of geometrical significance related to the inhomogeneity of the matter content of the star. The stellar models obtained using the solutions are physically viable for a wide range of values of the parameters. The physical features of the compact objects taken up here are studied numerically for a number of admissible values of the parameters. Observational stellar mass data are used to construct suitable models of the compact stars.

  14. LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data

    National Research Council Canada - National Science Library

    Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A

    2011-01-01

    ...). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data...

  15. LIMO EEG: A Toolbox for Hierarchical LInear MOdeling of ElectroEncephaloGraphic Data

    National Research Council Canada - National Science Library

    Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A

    2011-01-01

    ...). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data...

  16. Probabilistic Graphical Models for the Analysis and Synthesis of Musical Audio

    Science.gov (United States)

    Hoffmann, Matthew Douglas

    Content-based Music Information Retrieval (MIR) systems seek to automatically extract meaningful information from musical audio signals. This thesis applies new and existing generative probabilistic models to several content-based MIR tasks: timbral similarity estimation, semantic annotation and retrieval, and latent source discovery and separation. In order to estimate how similar two songs sound to one another, we employ a Hierarchical Dirichlet Process (HDP) mixture model to discover a shared representation of the distribution of timbres in each song. Comparing songs under this shared representation yields better query-by-example retrieval quality and scalability than previous approaches. To predict what tags are likely to apply to a song (e.g., "rap," "happy," or "driving music"), we develop the Codeword Bernoulli Average (CBA) model, a simple and fast mixture-of-experts model. Despite its simplicity, CBA performs at least as well as state-of-the-art approaches at automatically annotating songs and finding to what songs in a database a given tag most applies. Finally, we address the problem of latent source discovery and separation by developing two Bayesian nonparametric models, the Shift-Invariant HDP and Gamma Process NMF. These models allow us to discover what sounds (e.g. bass drums, guitar chords, etc.) are present in a song or set of songs and to isolate or suppress individual sources. These models' ability to decide how many latent sources are necessary to model the data is particularly valuable in this application, since it is impossible to guess a priori how many sounds will appear in a given song or set of songs. Once they have been fit to data, probabilistic models can also be used to drive the synthesis of new musical audio, both for creative purposes and to qualitatively diagnose what information a model does and does not capture. We also adapt the SIHDP model to create new versions of input audio with arbitrary sample sets, for example, to create
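
    The Codeword Bernoulli Average model lends itself to a compact illustration. The sketch below is not the thesis code: it assumes each song is summarized by codeword counts, fits per-codeword Bernoulli parameters for a single tag with a small EM loop, and predicts the tag probability as the codeword-frequency-weighted average of those parameters; the toy data and all dimensions are invented.

        import numpy as np

        def fit_cba(counts, labels, n_iter=50, eps=1e-9):
            """Fit per-codeword Bernoulli parameters for one tag with a simple EM loop.

            counts : (n_songs, n_codewords) codeword histograms
            labels : (n_songs,) binary vector, 1 if the tag applies to the song
            """
            freqs = counts / (counts.sum(axis=1, keepdims=True) + eps)
            beta = np.full(counts.shape[1], 0.5)
            for _ in range(n_iter):
                # E-step: responsibility of each codeword for the observed label
                like = np.where(labels[:, None] == 1, beta[None, :], 1.0 - beta[None, :])
                resp = freqs * like
                resp /= resp.sum(axis=1, keepdims=True) + eps
                # M-step: re-estimate the per-codeword Bernoulli parameters
                beta = (resp * labels[:, None]).sum(axis=0) / (resp.sum(axis=0) + eps)
                beta = np.clip(beta, eps, 1.0 - eps)
            return beta

        def predict_cba(counts, beta, eps=1e-9):
            """P(tag applies) = codeword-frequency-weighted average of the betas."""
            freqs = counts / (counts.sum(axis=1, keepdims=True) + eps)
            return freqs @ beta

        # Toy data: 6 songs, 4 codewords; the tag co-occurs with codewords 2 and 3.
        rng = np.random.default_rng(1)
        counts = rng.integers(0, 20, size=(6, 4)).astype(float)
        labels = (counts[:, 2] + counts[:, 3] > counts[:, 0] + counts[:, 1]).astype(float)
        beta = fit_cba(counts, labels)
        print("per-codeword tag probabilities:", np.round(beta, 2))
        print("predicted tag scores:", np.round(predict_cba(counts, beta), 2))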

  17. Graphic Enhancement of the Aircraft Penetration Model for Use as an Analytic Tool.

    Science.gov (United States)

    1983-03-01

    NAVAL POSTGRADUATE SCHOOL, Monterey, California. THESIS: GRAPHIC ...increase the understanding by both the designer and the user of the particular process or interaction that is being modelled. It should provide...the Naval Postgraduate School. The elements of the model evolved from a class project which was designed to demonstrate some of the basic techniques

  18. Modeling and Simulation of Grasping of Deformable Objects

    DEFF Research Database (Denmark)

    Fugl, Andreas Rune

    Automated robot solutions have for decades been increasing productivity around the world. They are attractive for being fast, accurate and able to work in dangerous and repetitive environments. In traditional applications the grasped object is kinematically attached to the Tool Center Point....... The purpose of this thesis is to address the modeling and simulation of deformable objects, as applied to robotic grasping and manipulation. The main contributions of this work are: An evaluation of 3D linear elasticity used for robot grasping as implemented by a Finite Difference Method supporting regular...

  19. Modeling Water Shortage Management Using an Object-Oriented Approach

    Science.gov (United States)

    Wang, J.; Senarath, S.; Brion, L.; Niedzialek, J.; Novoa, R.; Obeysekera, J.

    2007-12-01

    As a result of the increasing global population and the resulting urbanization, water shortage issues have received increased attention throughout the world. Water supply has not been able to keep up with increased demand for water, especially during times of drought. The use of an object-oriented (OO) approach coupled with efficient mathematical models is an effective tool in addressing discrepancies between water supply and demand. Object-oriented modeling has been proven powerful and efficient in simulating natural behavior. This research presents a way to model water shortage management using the OO approach. Three groups of conceptual components using the OO approach are designed for the management model. The first group encompasses evaluation of natural behaviors and possible related management options. This evaluation includes assessing any discrepancy that might exist between water demand and supply. The second group is for decision making, which includes the determination of water use cutback amount and duration using established criteria. The third group is for implementation of the management options, which are restrictions of water usage at a local or regional scale. The loop is closed through a feedback mechanism where continuity in the time domain is established. As in many other regions, drought management is very important in south Florida. The Regional Simulation Model (RSM) is a finite volume, fully integrated hydrologic model used by the South Florida Water Management District to evaluate regional response to various planning alternatives including drought management. A trigger module was developed for RSM that encapsulates the OO approach to water shortage management. Rigorous testing of the module was performed using historical south Florida conditions. Keywords: Object-oriented, modeling, water shortage management, trigger module, Regional Simulation Model
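
    Purely as an illustration of the three conceptual groups and the closing feedback loop (class names and all supply/demand numbers below are invented; the actual trigger module inside RSM is far more detailed), an object-oriented sketch might look as follows.

        from dataclasses import dataclass

        @dataclass
        class Evaluation:
            """Group 1: compare simulated supply with demand and report the shortage."""
            def assess(self, supply: float, demand: float) -> float:
                return max(demand - supply, 0.0)

        @dataclass
        class Decision:
            """Group 2: translate a shortage into a cutback fraction using simple criteria."""
            trigger: float = 0.1          # shortage fraction that triggers restrictions

            def cutback(self, shortage: float, demand: float) -> float:
                ratio = shortage / demand if demand else 0.0
                return 0.0 if ratio < self.trigger else min(0.45, round(ratio, 1))

        @dataclass
        class Implementation:
            """Group 3: apply the restriction; the result feeds back into the next step."""
            def apply(self, demand: float, cutback: float) -> float:
                return demand * (1.0 - cutback)

        # Feedback loop over monthly time steps (all numbers are made up).
        evaluation, decision, implementation = Evaluation(), Decision(), Implementation()
        base_demand, demand = 100.0, 100.0
        for month, supply in enumerate([95.0, 80.0, 70.0, 90.0, 105.0], start=1):
            shortage = evaluation.assess(supply, demand)
            cut = decision.cutback(shortage, demand)
            demand = implementation.apply(base_demand, cut)   # restricted demand feeds back
            print(f"month {month}: supply={supply}, cutback={cut:.0%}, next demand={demand:.1f}")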

  20. Functional information technology in geometry-graphic training of engineers

    Directory of Open Access Journals (Sweden)

    Irina D. Stolbova

    2017-01-01

    Full Text Available In the last decade, information technology has fundamentally changed design activity and made significant adjustments to the development of design documentation. Electronic drawings and 3D models have appeared instead of paper drawings and the traditional form of design documentation. Geometric modeling based on 3D technology has replaced graphic design technology. Standards for electronic models have been introduced. Electronic prototypes and 3D printing contribute to the spread of rapid prototyping technologies. In these conditions, the task of finding a new learning technology, corresponding to the level of development of information technologies and meeting the requirements of modern design and manufacturing technologies, comes to the fore. The purpose of this paper is to analyse the capabilities of information technology in the formation of geometric-graphic competences within the basic graphic training of technical university students. Traditionally, the basic graphic training of students in junior university courses consisted of the consecutive study of descriptive geometry, engineering graphics and computer graphics. Today an integrative approach is relevant, and the role of computer graphics changes considerably: it is not only an object of study but also a learning tool, the core of students’ graphic training. Computer graphics is an efficient mechanism for the development of students’ spatial thinking. The role of instrumental training in the wide use of CAD systems increases both in the solution of educational problems and in the implementation of project tasks, which corresponds to the modern requirements of the professional work of the designer-constructor. In this paper, the following methods are used: system analysis, synthesis and simulation. A general geometric-graphic training model for students with an innovation orientation, based on the use of a wide range of computer technology, is developed. The

  1. 基于图像的图形生成系统中的虚拟摄像机模型%An Virtual Camera Models of Image based Computer Graphics

    Institute of Scientific and Technical Information of China (English)

    王建华; 解凯

    2002-01-01

    The paper discusses a virtual general camera model and gives an approach to 3D reconstruction. By means of the model, the paper formulates the transformation of the general model into the simple standard model used in computer vision and graphics.
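
    The abstract is terse, so as a point of reference here is a minimal numpy sketch of the simple standard (pinhole) camera model that a general camera model is typically reduced to in computer vision and graphics; the intrinsic and extrinsic values are illustrative only.

        import numpy as np

        def project(points_world, K, R, t):
            """Pinhole projection of 3-D world points to pixels: x ~ K [R | t] X."""
            X = np.asarray(points_world, dtype=float)   # (N, 3)
            X_cam = X @ R.T + t                         # world -> camera frame
            x = X_cam @ K.T                             # camera -> image plane
            return x[:, :2] / x[:, 2:3]                 # perspective divide

        # Intrinsics (focal lengths fx, fy, principal point cx, cy) -- illustrative values.
        K = np.array([[800.0,   0.0, 320.0],
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])
        R = np.eye(3)                                   # camera aligned with the world axes
        t = np.array([0.0, 0.0, 5.0])                   # world origin 5 units in front

        pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.0]])
        print(project(pts, K, R, t))                    # [[320. 240.] [480. 320.]]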

  2. Graphic Methods for Interpreting Longitudinal Dyadic Patterns From Repeated-Measures Actor-Partner Interdependence Models

    DEFF Research Database (Denmark)

    Perry, Nicholas; Baucom, Katherine; Bourne, Stacia

    2017-01-01

    Researchers commonly use repeated-measures actor–partner interdependence models (RM-APIM) to understand how romantic partners change in relation to one another over time. However, traditional interpretations of the results of these models do not fully or correctly capture the dyadic temporal...... patterns estimated in RM-APIM. Interpretation of results from these models largely focuses on the meaning of single-parameter estimates in isolation from all the others. However, considering individual coefficients separately impedes the understanding of how these associations combine to produce...... to improve the understanding and presentation of dyadic patterns of association described by standard RM-APIMs. The current article briefly reviews the conceptual foundations of RM-APIMs, demonstrates how change-as-outcome RM-APIMs and VFDs can aid interpretation of standard RM-APIMs, and provides a tutorial...
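
    For readers unfamiliar with the model, one common lagged parameterization of an RM-APIM for a dyad is sketched below in LaTeX; notation and the change-as-outcome variant differ across papers, so this is an illustration rather than the authors' exact formulation.

        % Lagged RM-APIM for partners 1 and 2 (illustrative parameterization):
        \begin{align*}
          y_{1,t} &= a_{1}\, y_{1,t-1} + p_{12}\, y_{2,t-1} + e_{1,t},\\
          y_{2,t} &= a_{2}\, y_{2,t-1} + p_{21}\, y_{1,t-1} + e_{2,t}.
        \end{align*}
        % a_1, a_2 are actor (stability) effects and p_12, p_21 are partner effects.
        % The change-as-outcome form models \Delta y_{i,t} = y_{i,t} - y_{i,t-1} instead,
        % which re-expresses the same dynamics with a_i replaced by (a_i - 1).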

  3. A Graphical Simulation of Vapor-Liquid Equilibrium for Use as an Undergraduate Laboratory Experiment and to Demonstrate the Concept of Mathematical Modeling.

    Science.gov (United States)

    Whitman, David L.; Terry, Ronald E.

    1985-01-01

    Demonstrating petroleum engineering concepts in undergraduate laboratories often requires expensive and time-consuming experiments. To eliminate these problems, a graphical simulation technique was developed for junior-level laboratories which illustrate vapor-liquid equilibrium and the use of mathematical modeling. A description of this…

  4. Class Evolution Tree: A Graphical Tool to Support Decisions on the Number of Classes in Exploratory Categorical Latent Variable Modeling for Rehabilitation Research

    Science.gov (United States)

    Kriston, Levente; Melchior, Hanne; Hergert, Anika; Bergelt, Corinna; Watzke, Birgit; Schulz, Holger; von Wolff, Alessa

    2011-01-01

    The aim of our study was to develop a graphical tool that can be used in addition to standard statistical criteria to support decisions on the number of classes in explorative categorical latent variable modeling for rehabilitation research. Data from two rehabilitation research projects were used. In the first study, a latent profile analysis was…

  5. Class Evolution Tree: A Graphical Tool to Support Decisions on the Number of Classes in Exploratory Categorical Latent Variable Modeling for Rehabilitation Research

    Science.gov (United States)

    Kriston, Levente; Melchior, Hanne; Hergert, Anika; Bergelt, Corinna; Watzke, Birgit; Schulz, Holger; von Wolff, Alessa

    2011-01-01

    The aim of our study was to develop a graphical tool that can be used in addition to standard statistical criteria to support decisions on the number of classes in explorative categorical latent variable modeling for rehabilitation research. Data from two rehabilitation research projects were used. In the first study, a latent profile analysis was…

  6. Independencies Induced from a Graphical Markov Model After Marginalization and Conditioning: The R Package ggm

    Directory of Open Access Journals (Sweden)

    Giovanni M. Marchetti

    2006-02-01

    Full Text Available We describe some functions in the R package ggm to derive from a given Markov model, represented by a directed acyclic graph, different types of graphs induced after marginalizing over and conditioning on some of the variables. The package has a few basic functions that find the essential graph, the induced concentration and covariance graphs, and several types of chain graphs implied by the directed acyclic graph (DAG) after grouping and reordering the variables. These functions can be useful to explore the impact of latent variables or of selection effects on a chosen data generating model.

  7. A graphical interface based model for wind turbine drive train dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Manwell, J.F.; McGowan, J.G.; Abdulwahid, U.; Rogers, A. [Univ. of Massachusetts, Amherst, MA (United States); McNiff, B. [McNiff Light Industry, Blue Hill, ME (United States)

    1996-12-31

    This paper presents a summary of a wind turbine drive train dynamics code that has been under development at the University of Massachusetts, under National Renewable Energy Laboratory (NREL) support. The code is intended to be used to assist in the proper design and selection of drive train components. This work summarizes the development of the equations of motion for the model, and discusses the method of solution. In addition, a number of comparisons with analytical solutions and experimental field data are given. The summary includes conclusions and suggestions for future work on the model. 13 refs., 10 figs.

  8. Researches on Object Modeling Technology%OMT技术研究

    Institute of Scientific and Technical Information of China (English)

    白君芬

    2012-01-01

    At present, research on object-oriented development methods has become increasingly mature. Among these methods, the Object Modeling Technique (OMT) performs well in modeling object-oriented software systems. This paper introduces the concepts of OMT and its three models, and describes the OMT modeling and design process, showing that OMT provides a more practical and more efficient foundation for software development in most application domains.

  9. 基于Object-Z的UML对象模型的形式化%The Formalization of Object Model in UML Based on Object-Z

    Institute of Scientific and Technical Information of China (English)

    杨卫东; 蔡希尧

    2000-01-01

    UML is currently the main visual object-oriented modeling language; it is widely used and supported by most CASE tools. Compared with traditional object-oriented methods, UML describes its semantics and syntax more rigorously by using a metamodel and the Object Constraint Language. However, some important concepts in UML are not specified clearly. This paper presents a formal specification for the object model of UML, mainly covering the concepts of class, association, association class, aggregation and inheritance, so that analysis, verification, refinement and consistency checking can be applied to the object model.

  10. Computational model for perception of objects and motions

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist ’What’ and ’Where’ pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as forms, color, and texture, and the latter perceives ’where’, for example, velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and computational mechanism for training the perceptual model. The compu- tational model is a three-layer network. The first layer is the input layer which is used to receive the stimuli from natural environments. The second layer is designed for representing the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on principle of sparse neural representation. To this end, we introduce Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm based on minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpassed. The resultant receptive fields of neurons in the second layer have the characteristics resembling that of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with a high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and high efficiency of the learning algorithm.

  11. Computational model for perception of objects and motions

    Institute of Scientific and Technical Information of China (English)

    YANG WenLu; ZHANG LiQing; MA LiBo

    2008-01-01

    Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as forms, color, and texture, and the latter perceives 'where', for example, velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and computational mechanism for training the perceptual model. The computational model is a three-layer network. The first layer is the input layer which is used to receive the stimuli from natural environments. The second layer is designed for representing the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm based on minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpassed. The resultant receptive fields of neurons in the second layer have the characteristics resembling that of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with a high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and high efficiency of the learning algorithm.
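
    The paper's own learning rule minimizes a Kullback-Leibler-based cost; as a stand-in illustration of the same second-layer idea, the sketch below uses scikit-learn's dictionary learning with a sparsity penalty on grayscale natural-image patches, which tends to yield the kind of localized, oriented, bandpass basis functions described above. Patch size, number of components and the penalty weight are arbitrary choices.

        import numpy as np
        from sklearn.datasets import load_sample_image
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import extract_patches_2d

        # Grayscale natural image and 8x8 patches standing in for environmental stimuli.
        img = load_sample_image("china.jpg").mean(axis=2) / 255.0
        patches = extract_patches_2d(img, (8, 8), max_patches=5000, random_state=0)
        X = patches.reshape(len(patches), -1)
        X -= X.mean(axis=1, keepdims=True)             # remove the DC component per patch

        # Learn a sparse dictionary; its rows play the role of second-layer receptive fields.
        dico = MiniBatchDictionaryLearning(n_components=49, alpha=1.0, random_state=0)
        receptive_fields = dico.fit(X).components_.reshape(49, 8, 8)
        print(receptive_fields.shape)                  # (49, 8, 8) filters, many Gabor-like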

  12. Learning a Tracking and Estimation Integrated Graphical Model for Human Pose Tracking.

    Science.gov (United States)

    Zhao, Lin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-12-01

    We investigate the tracking of 2-D human poses in a video stream to determine the spatial configuration of body parts in each frame, but this is not a trivial task because people may wear different kinds of clothing and may move very quickly and unpredictably. The technology of pose estimation is typically applied, but it ignores the temporal context and cannot provide smooth, reliable tracking results. Therefore, we develop a tracking and estimation integrated model (TEIM) to fully exploit temporal information by integrating pose estimation with visual tracking. However, joint parsing of multiple articulated parts over time is difficult, because a full model with edges capturing all pairwise relationships within and between frames is loopy and intractable. In previous models, approximate inference was usually resorted to, but it cannot promise good results and the computational cost is large. We overcome these problems by exploring the idea of divide and conquer, which decomposes the full model into two much simpler tractable submodels. In addition, a novel two-step iteration strategy is proposed to efficiently conquer the joint parsing problem. Algorithmically, we design TEIM very carefully so that: 1) it enables pose estimation and visual tracking to compensate for each other to achieve desirable tracking results; 2) it is able to deal with the problem of tracking loss; and 3) it only needs past information and is capable of tracking online. Experiments are conducted on two public data sets in the wild with ground truth layout annotations, and the experimental results indicate the effectiveness of the proposed TEIM framework.

  13. California Reservoir Drought Sensitivity and Exhaustion Risk Using Statistical Graphical Models

    OpenAIRE

    Taeb, Armeen; Reager, John T.; Turmon, Michael; Chandrasekaran, Venkat

    2016-01-01

    The ongoing California drought has highlighted the potential vulnerability of state water management infrastructure to multi-year dry intervals. Due to the high complexity of the network, dynamic storage changes across the California reservoir system have been difficult to model using either conventional statistical or physical approaches. Here, we analyze the interactions of monthly volumes in a network of 55 large California reservoirs, over a period of 136 months from 2004 to 2015, and we ...

  14. An Abstract Data Model for the IDEF0 Graphical Analysis Language

    Science.gov (United States)

    1990-01-11

    models to be constructed. One approach to reducing such ambiguity is to replace or augment free-form text with a more syntactically defined data...whatever level was necessary to ensure an unambiguous interpretation of the system requirements. Marca and McGowan have written an excellent book which...Manufacturing (ICAM), which is directed towards increasing manufacturing productivity via computer technology, defined a subset of Ross’ Structured Analysis

  15. Software-aided Service Bundling : Intelligent Methods and Tools for Graphical Service Modeling

    OpenAIRE

    Baida, Z.S.

    2006-01-01

    Services, such as insurances, transport, medical treatments and more, have been the subject of extensive research in business science for decades. When services are offered, bought or consumed online, we refer to them as e-services. This PhD thesis focuses on an ontological foundation for service description and configuration. Such a conceptual modeling approach facilitates complex e-service scenarios, in which a customer can define a bundle of services, possibly supplied by multiple suppliers, bas...

  16. Object-oriented modeling of patients in a medical federation.

    Science.gov (United States)

    Proctor, M D; Creech, G S

    2001-09-01

    This research explores the development of an object-oriented model to support inter-operation of simulations within a federation for the purpose of conducting medical analysis and training over a distributed infrastructure. The medical federation is referred to as the combat trauma patient simulation system and is composed using high level architecture. The infrastructure contains components that were separately developed and are heterogeneous in nature. This includes a general anatomical computer database capable of generating human injuries, referred to as operational requirements-based casualty assessment, an animated mannequin called the human patient simulator, and other components. The research develops an object model that enables bodily injury data to be shared across the simulation, conducts analysis on that data, and considers possible applications of the technique in expanded medical infrastructures.

  17. High level architecture evolved modular federation object model

    Institute of Scientific and Technical Information of China (English)

    Wang Wenguang; Xu Yongping; Chen Xin; Li Qun; Wang Weiping

    2009-01-01

    To improve the agility, dynamics, composability, reusability, and development efficiency restricted by the monolithic federation object model (FOM), a modular FOM is proposed by the high level architecture (HLA) evolved product development group. This paper reviews the state-of-the-art of HLA evolved modular FOM. In particular, related concepts, the overall impact on HLA standards, extension principles, and merging processes are discussed. Also, permitted and restricted combinations and merging rules are provided, and the influence on the HLA interface specification is given. The comparison between modular FOM and base object model (BOM) is performed to illustrate the importance of their combination. The applications of modular FOM are summarized. Finally, the significance for facilitating composable simulation both in academia and practice is presented and future directions are pointed out.

  18. High level architecture evolved modular federation object model

    CERN Document Server

    Wang, Wenguang; Chen, Xin; Li, Qun; Wang, Weiping

    2009-01-01

    To improve the agility, dynamics, composability, reusability, and development efficiency restricted by monolithic Federation Object Model (FOM), a modular FOM was proposed by High Level Architecture (HLA) Evolved product development group. This paper reviews the state-of-the-art of HLA Evolved modular FOM. In particular, related concepts, the overall impact on HLA standards, extension principles, and merging processes are discussed. Also permitted and restricted combinations, and merging rules are provided, and the influence on HLA interface specification is given. The comparison between modular FOM and Base Object Model (BOM) is performed to illustrate the importance of their combination. The applications of modular FOM are summarized. Finally, the significance to facilitate composable simulation both in academia and practice is presented and future directions are pointed out.

  19. Formal Model for Data Dependency Analysis between Controls and Actions of a Graphical User Interface

    Directory of Open Access Journals (Sweden)

    SKVORC, D.

    2012-02-01

    Full Text Available End-user development is an emerging computer science discipline that provides programming paradigms, techniques, and tools suitable for users not trained in software engineering. One of the techniques that allow ordinary computer users to develop their own applications without the need to learn a classic programming language is a GUI-level programming based on programming-by-demonstration. To build wizard-based tools that assist users in application development and to verify the correctness of user programs, a computer-supported method for GUI-level data dependency analysis is necessary. Therefore, formal model for GUI representation is needed. In this paper, we present a finite state machine for modeling the data dependencies between GUI controls and GUI actions. Furthermore, we present an algorithm for automatic construction of finite state machine for arbitrary GUI application. We show that proposed state aggregation scheme successfully manages state explosion in state machine construction algorithm, which makes the model applicable for applications with complex GUIs.

  20. Software system for three-dimensional visualization of micro-object surfaces for scanning probe microscopy using OpenGL graphics library

    Science.gov (United States)

    Vakuliuk, Nickolay V.

    2002-07-01

    A software system for 3D visualization of micro-object surfaces for scanning probe microscopy has been developed. The system is used as the software part of the software/hardware complex of a scanning probe microscope (SPM) and its variants. The system presents the results of microscope scans as 3D views. It has a convenient GUI and a high level of functionality for visualization and for saving result images. The program allows the user to manipulate the surface image in real time by performing scaling, rotation and moving, setting lighting and color values, and setting the level of detail for surface rendering. This viewer works together with another part of the software system that is responsible for controlling the SPM. The program can also be used as an independent viewer of scanning probe microscope files.

  1. A Physical Model of Phaethon, a Near-Sun Object

    Science.gov (United States)

    Boice, Daniel C.; Benkhoff, J.; Huebner, W. F.

    2013-10-01

    Physico-chemical modeling is central to understanding the important physical processes that occur in small solar system bodies. We have developed a computer code, SUISEI, that includes the physico-chemical processes relevant to comets within a global modeling framework to better understand observations and in situ measurements and to provide valuable insights into the intrinsic properties of their nuclei. SUISEI includes a 3D model of gas and heat transport in porous sub-surface layers in the interior of the nucleus. We have successfully used this model in our study of previous comets at normal heliocentric distances [e.g., 46P/Wirtanen, D/1993 F2 (Shoemaker-Levy 9)]. We have adapted SUISEI to model near-Sun objects to reveal significant differences in the chemistry and dynamics of their comae (atmospheres) compared with comets that do not closely approach the Sun. At small heliocentric distances, temperatures are high enough to vaporize surface materials and dust, forming a source of gas. Another important question concerns the energy balance at the body’s surface, namely what fraction of incident energy will be conducted into the interior versus that used for sublimation. This is important for understanding whether the interior stays cold and remains relatively unaltered during each perihelion passage or is significantly devolatilized. This also bears upon the regimes where sublimation and ablation due to ram pressure dominate in the erosion or eventual destruction of sun-grazers. The resulting model will be an important tool for studying sungrazing comets and other near-Sun objects. We will present results on the application of SUISEI to the near-Sun object, Phaethon. Acknowledgements: We appreciate support from the SwRI IR&D and the NSF Planetary Astronomy Programs.

  2. The recognition of graphical patterns invariant to geometrical transformation of the models

    Science.gov (United States)

    Ileană, Ioan; Rotar, Corina; Muntean, Maria; Ceuca, Emilian

    2010-11-01

    When a pattern recognition system is used for image recognition (in robot vision, handwritten character recognition, etc.), the system must have the capacity to identify an object regardless of its size or position in the image. The problem of invariant recognition can be approached in several fundamental ways. One may apply the similarity criterion used in associative recall. The original pattern is replaced by a mathematical transform that assures some invariance (e.g. the magnitude of the two-dimensional Fourier transform is translation invariant, and the magnitude of the Mellin transform is scale invariant). In a different approach the original pattern is represented through a set of features, each of them coded independently of the position or orientation of the pattern. Generally speaking, it is easy to obtain invariance with respect to one transformation group, but difficult to obtain simultaneous invariance to rotation, translation and scale. In this paper we analyse some methods to achieve invariant recognition of images, particularly digit images. A great number of experiments were carried out and the conclusions are presented in the paper.
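
    The translation-invariance property mentioned above is easy to check numerically: for a circular shift of the image, the magnitude of its 2D discrete Fourier transform is unchanged. The pattern below is just a random stand-in for a digit image.

        import numpy as np

        rng = np.random.default_rng(0)
        digit = np.zeros((32, 32))
        digit[8:24, 12:20] = rng.random((16, 8))              # stand-in "digit" pattern

        shifted = np.roll(digit, shift=(5, -3), axis=(0, 1))  # circular translation

        same = np.allclose(np.abs(np.fft.fft2(digit)), np.abs(np.fft.fft2(shifted)))
        print(same)                                           # True: |FFT| ignores the shift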

  3. MODELING OF CONVECTIVE STREAMS IN PNEUMOBASIC OBJECTS (Part 2

    Directory of Open Access Journals (Sweden)

    B. M. Khroustalev

    2015-01-01

    Full Text Available The article presents modeling for the investigation of aerodynamic processes on area sections (including groups of complex constructional works) for different regimes of drop and wind streams and temperature conditions, and in complex constructional works (for different regimes of heating and ventilation). Different programs were developed for the solution of innovative problems in the field of heat and mass exchange in the three-dimensional pressure-velocity-temperature space of objects. The field of use of pneumobasic objects includes the construction and roofing of tennis courts, hockey pitches and swimming pools, as well as exhibition buildings, circus buildings, cafes, aqua parks, studios, mobile objects for medical purposes, hangars, garages, construction sites, service stations, etc. Advantages of such objects are the possibility and simplicity of repeated installation and demolition. Their large-scale implementation is determined by the temperature-moisture conditions under the shells. Analytical and computational studies, and real measurements of the thermodynamic parameters of heat and mass exchange and of the multifactorial processes of air in pneumobasic objects and their shells over a wide range of climatic air parameters (January-December in the Republic of Belarus and in many geographical latitudes of other countries), have shown that the possibilities for optimizing wind loads, heat flow and acoustic effects are practically unlimited (sports, residential, industrial, warehouse, and military-technical units such as tanks and airplanes). In the modeling of convective flows in pneumobasic objects (part 1), processes with higher dynamic parameters of the air flow are considered for the characteristic pneumobasic object; the velocity, temperature and pressure fields were calculated for air entering through the inflow holes at speeds up to 5 m/sec at time instants of 20, 100, 200 and 400 sec. The calculation was performed using the developed mathematical

  4. THE INVESTMENT MODEL OF THE CONSTRUCTION OF PUBLIC OBJECTS

    Directory of Open Access Journals (Sweden)

    Reperger Šandor

    2009-11-01

    Full Text Available One of the possible models of the construction and use of sports objects, especially indoor facilities (sports centres, halls, swimming pools, shooting ranges and others), is the cooperation of the public and private sectors through the investment model of PPP (Public-Private Partnership). PPP construction is a new form of securing civil works, already known in developed countries, in which the job of planning, construction, operation and financing is done by the private sector within the scope of a precisely elaborated cooperation with the state. The state engages the private sector to administer the civil works. Through public advertisements and contests, investors are found who accept the administration of certain public works, by themselves or with the help of project partners, using their own resources (with 60-85% bank loans), and who secure the conditions for providing certain services (by using the objects, halls, etc.) until the expiration of the agreed deadline. The essence of PPP construction is that an investor from the private sector, chosen through a contest, realizes the project using its own means. The object becomes the property of the investor, which secures the regular functioning of the object with exclusive rights. The income from operation belongs to the investor; in return, the costs of operating the object, its upkeep, and the costs of personnel and public utilities are the responsibility of the investor. The public use of the object is realised in that the authorised ministry and the contest partner, in an agreement on the realization and functioning of the object, accurately define the time of maintenance and the duration of the services provided on behalf of the social interest. For the time specified in the agreement, the investor does not charge precisely defined users for general and specific services. As Serbia, with all its

  5. A Multi-objective Procedure for Efficient Regression Modeling

    CERN Document Server

    Sinha, Ankur; Kuosmanen, Timo

    2012-01-01

    Variable selection is recognized as one of the most critical steps in statistical modeling. The problems encountered in engineering and social sciences are commonly characterized by an over-abundance of explanatory variables, non-linearities and unknown interdependencies between the regressors. An added difficulty is that the analysts may have little or no prior knowledge of the relative importance of the variables. To provide a robust method for model selection, this paper introduces a technique called the Multi-objective Genetic Algorithm for Variable Selection (MOGA-VS), which provides the user with an efficient set of regression models for a given data-set. The algorithm treats the regression problem as a two-objective task, where the purpose is to prefer models that have fewer regression coefficients and better goodness of fit. In MOGA-VS, the model selection procedure is implemented in two steps. First, we generate the frontier of all efficient or non-dominated regression m...
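
    MOGA-VS itself is a genetic algorithm; the sketch below illustrates only the underlying two-objective idea by exhaustively enumerating regressor subsets of a small synthetic problem and keeping the non-dominated (model size, residual sum of squares) pairs. All data and sizes are invented.

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 200, 6
        X = rng.normal(size=(n, p))
        y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=n)  # only x0, x2 matter

        def rss(subset):
            """Residual sum of squares of an OLS fit on the chosen columns plus an intercept."""
            A = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = y - A @ beta
            return float(r @ r)

        models = [(len(s), rss(s), s)
                  for k in range(p + 1)
                  for s in itertools.combinations(range(p), k)]

        # Keep the non-dominated models: no other model is at least as small AND strictly better.
        frontier = [m for m in models
                    if not any((o[0] <= m[0] and o[1] < m[1]) or (o[0] < m[0] and o[1] <= m[1])
                               for o in models)]
        for size, err, subset in sorted(frontier):
            print(f"{size} regressors {subset}: RSS = {err:.1f}")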

  6. Object-Oriented MDAO Tool with Aeroservoelastic Model Tuning Capability

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley; Lung, Shun-fat

    2008-01-01

    An object-oriented multi-disciplinary analysis and optimization (MDAO) tool has been developed at the NASA Dryden Flight Research Center to automate the design and analysis process and leverage existing commercial as well as in-house codes to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic and hypersonic aircraft. Once the structural analysis discipline is finalized and integrated completely into the MDAO process, other disciplines such as aerodynamics and flight controls will be integrated as well. Simple and efficient model tuning capabilities based on an optimization formulation are successfully integrated with the MDAO tool. Better synchronization of all phases of experimental testing (ground and flight), analytical model updating, high-fidelity simulation for model validation, and integrated design may reduce uncertainties in the aeroservoelastic model and increase flight safety.

  7. Rendering Falling Leaves on Graphics Hardware

    Directory of Open Access Journals (Sweden)

    Marcos Balsa

    2008-04-01

    Full Text Available There is a growing interest in simulating natural phenomena in computer graphics applications. Animating natural scenes in real time is one of the most challenging problems due to the inherent complexity of their structure, formed by millions of geometric entities, and the interactions that happen within. An example of a natural scenario needed for games or simulation programs is a forest. Forests are difficult to render because of the huge number of geometric entities and the large amount of detail to be represented. Moreover, the interactions between the objects (grass, leaves) and external forces such as wind are complex to model. In this paper we concentrate on the rendering of falling leaves at low cost. We present a technique that exploits graphics hardware in order to render thousands of leaves with different falling paths in real time and with low memory requirements.

  8. Telehealth in Schools Using a Systematic Educational Model Based on Fiction Screenplays, Interactive Documentaries, and Three-Dimensional Computer Graphics.

    Science.gov (United States)

    Miranda, Diogo Julien; Wen, Chao Lung

    2017-07-18

    Preliminary studies suggest the need for a global vision in academic reform, leading to education re-invention. This would include problem-based education using transversal topics, the development of thinking skills, social interaction, and information-processing skills. We aimed to develop a new educational model in health with modular components to be broadcast and applied as a tele-education course. We developed a systematic model based on a "Skills and Goals Matrix" to adapt scientific content to fictional screenplays, three-dimensional (3D) computer graphics of the human body, and interactive documentaries. We selected 13 topics based on youth vulnerabilities in Brazil to be disseminated through a television show with 15 episodes. We developed scientific content for each theme, naturally inserting it into screenplays, together with 3D sequences and interactive documentaries. The modular structure was then adapted to a distance-learning course. The television show was broadcast on national television for two consecutive years to an estimated audience of 30 million homes, and ever since on an Internet Protocol Television (IPTV) channel. It was also reorganized as a tele-education course for 2 years, reaching 1,180 subscriptions from all 27 Brazilian states, resulting in 240 graduates. Positive results indicate the feasibility, acceptability, and effectiveness of a model of modular entertainment audio-visual productions using health and education integrated concepts. This structure also allowed the model to be interconnected with other sources and applied as a tele-education course, educating, informing, and stimulating behavior change. Future work should reinforce this joint structure of telehealth, communication, and education.

  9. LIMO EEG: A Toolbox for Hierarchical LInear MOdeling of ElectroEncephaloGraphic Data

    Directory of Open Access Journals (Sweden)

    Cyril R. Pernet

    2011-01-01

    Full Text Available Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels and averaged across trials. More recently, tools have been developed to investigate single trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, therefore providing a new and complementary tool in the analysis of neural evoked responses.

  10. LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data.

    Science.gov (United States)

    Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A

    2011-01-01

    Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels and averaged across trials. More recently, tools have been developed to investigate single trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, therefore providing a new and complementary tool in the analysis of neural evoked responses.
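
    LIMO EEG itself is a Matlab toolbox; purely to illustrate the first-level idea of a mass-univariate linear model fitted independently at every channel and time point, here is a small numpy sketch on synthetic single-trial data (the dimensions, design and injected effect are made up).

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_channels, n_times = 120, 32, 200

        # Single-trial data Y and a trial-level design matrix X (intercept + condition).
        condition = rng.integers(0, 2, n_trials)
        X = np.column_stack([np.ones(n_trials), condition])        # (trials, regressors)
        Y = rng.normal(size=(n_trials, n_channels, n_times))
        Y[condition == 1, 10, 80:120] += 0.8                        # an "effect" to recover

        # First level: solve Y = X B + E independently for every channel/time point.
        Yf = Y.reshape(n_trials, -1)                                # (trials, channels*times)
        B, *_ = np.linalg.lstsq(X, Yf, rcond=None)                  # (regressors, channels*times)
        betas = B.reshape(X.shape[1], n_channels, n_times)

        print("condition effect at channel 10, sample 100:", round(float(betas[1, 10, 100]), 2))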

  11. Graphical modeling of gene expression in monocytes suggests molecular mechanisms explaining increased atherosclerosis in smokers.

    Directory of Open Access Journals (Sweden)

    Ricardo A Verdugo

    Full Text Available Smoking is a risk factor for atherosclerosis with reported widespread effects on gene expression in circulating blood cells. We hypothesized that a molecular signature mediating the relation between smoking and atherosclerosis may be found in the transcriptome of circulating monocytes. Genome-wide expression profiles and counts of atherosclerotic plaques in carotid arteries were collected in 248 smokers and 688 non-smokers from the general population. Patterns of co-expressed genes were identified by Independent Component Analysis (ICA), and the network structure of the pattern-specific gene modules was inferred by the PC-algorithm. A likelihood-based causality test was implemented to select patterns that fit models containing a path "smoking→gene expression→plaques". Robustness of the causal inference was assessed by bootstrapping. At an FDR ≤0.10, 3,368 genes were associated to smoking or plaques, of which 93% were associated to smoking only. SASH1 showed the strongest association to smoking and PPARG the strongest association to plaques. Twenty-nine gene patterns were identified by ICA. Modules containing SASH1 and PPARG did not show evidence for the "smoking→gene expression→plaques" causality model. Conversely, three modules had good support for causal effects and exhibited a network topology consistent with gene expression mediating the relation between smoking and plaques. The network with the strongest support for causal effects was connected to plaques through SLC39A8, a gene with known association to HDL-cholesterol and cellular uptake of cadmium from tobacco, while smoking was directly connected to GAS6, a gene reported to have anti-inflammatory effects in atherosclerosis and to be up-regulated in the placenta of women smoking during pregnancy. Our analysis of the transcriptome of monocytes recovered genes relevant for association to smoking and atherosclerosis, and connected genes that were previously studied only in separate contexts.

  12. Analysis and simulation of industrial distillation processes using a graphical system design model

    Science.gov (United States)

    Boca, Maria Loredana; Dobra, Remus; Dragos, Pasculescu; Ahmad, Mohammad Ayaz

    2016-12-01

    The separation column used for the experiments can be configured in two ways: as two columns of different diameters, one placed in the extension of the other, or as a single column with a set diameter [1], [2]. The column separates the carbon isotopes based on the cryogenic distillation of pure carbon monoxide, which is fed at a constant flow rate as a gas through the feeding system [1], [2]. Based on numerical control systems used in virtual instrumentation, simulations of the distillation process were performed in order to obtain the isotope 13C at high concentrations. The experimental installation for cryogenic separation can thus be configured, from the point of view of the separation column, either as a cascade of two columns of different diameters placed one in the extension of the other, or as a single column with a set diameter. It is proposed to control this installation and acquire data using a data acquisition tool and professional software that processes information from the isotopic column based on a dedicated logical algorithm. The classical isotopic column will be controlled automatically, and the main parameters will be monitored and properly displayed by a single program. Taking into consideration the very low operating temperature, an efficient thermal insulation vacuum jacket is necessary. Since the "elementary separation ratio" [2] is very close to unity, in order to raise the 13C isotope concentration to a desired level a permanent countercurrent of the liquid and gaseous phases of the carbon monoxide is created by the main elements of the equipment: the boiler at the bottom of the column and the condenser at the top.

  13. WE-E-BRE-05: Ensemble of Graphical Models for Predicting Radiation Pneumontis Risk

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Ybarra, N; Jeyaseelan, K; El Naqa, I [McGill University, Montreal, Quebec (Canada); Faria, S; Kopek, N [Montreal General Hospital, Montreal, Quebec (Canada)

    2014-06-15

    Purpose: We propose a prior knowledge-based approach to construct an interaction graph of biological and dosimetric radiation pneumonitis (RP) covariates for the purpose of developing an RP risk classifier. Methods: We recruited 59 NSCLC patients who received curative radiotherapy with a minimum 6-month follow-up. 16 RP events were observed (CTCAE grade ≥2). Blood serum was collected from every patient before (pre-RT) and during RT (mid-RT). From each sample the concentrations of the following five candidate biomarkers were taken as covariates: alpha-2-macroglobulin (α2M), angiotensin converting enzyme (ACE), transforming growth factor β (TGF-β), interleukin-6 (IL-6), and osteopontin (OPN). Dose-volumetric parameters were also included as covariates. The number of biological and dosimetric covariates was reduced by a variable selection scheme implemented by L1-regularized logistic regression (LASSO). The posterior probability distribution of interaction graphs between the selected variables was estimated from the data under the literature-based prior knowledge to weight more heavily the graphs that contain the expected associations. A graph ensemble was formed by averaging the most probable graphs weighted by their posterior, creating a Bayesian Network (BN)-based RP risk classifier. Results: The LASSO selected the following 7 RP covariates: (1) pre-RT concentration level of α2M, (2) α2M level mid-RT/pre-RT, (3) pre-RT IL6 level, (4) IL6 level mid-RT/pre-RT, (5) ACE mid-RT/pre-RT, (6) PTV volume, and (7) mean lung dose (MLD). The ensemble BN model achieved the maximum sensitivity/specificity of 81%/84% and outperformed univariate dosimetric predictors as shown by larger AUC values (0.78∼0.81) compared with MLD (0.61), V20 (0.65) and V30 (0.70). The ensembles obtained by incorporating the prior knowledge improved classification performance for ensemble sizes of 5∼50. Conclusion: We demonstrated a probabilistic ensemble method to detect robust associations between
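
    The variable selection step can be illustrated with scikit-learn's L1-penalized logistic regression on synthetic stand-in data; the covariate names, penalty strength and simulated effect sizes below are invented and are not the study's data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        names = ["a2M_pre", "a2M_ratio", "IL6_pre", "IL6_ratio", "ACE_ratio",
                 "TGFb_pre", "OPN_pre", "PTV_volume", "MLD", "V20"]
        X = rng.normal(size=(59, len(names)))
        logit = 0.9 * X[:, 2] + 0.7 * X[:, 8] - 1.2        # IL6_pre and MLD drive risk here
        y = rng.random(59) < 1.0 / (1.0 + np.exp(-logit))  # roughly 15 simulated RP events

        Xs = StandardScaler().fit_transform(X)
        lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Xs, y)
        selected = [n for n, w in zip(names, lasso.coef_.ravel()) if abs(w) > 1e-6]
        print("selected covariates:", selected)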

  14. Objective Characterization of Snow Microstructure for Microwave Emission Modeling

    Science.gov (United States)

    Durand, Michael; Kim, Edward J.; Molotch, Noah P.; Margulis, Steven A.; Courville, Zoe; Malzler, Christian

    2012-01-01

    Passive microwave (PM) measurements are sensitive to the presence and quantity of snow, a fact that has long been used to monitor snowcover from space. In order to estimate total snow water equivalent (SWE) within PM footprints (on the order of approx 100 sq km), it is a prerequisite to understand snow microwave emission at the point scale and how microwave radiation integrates spatially; the former is the topic of this paper. Snow microstructure is one of the fundamental controls on the propagation of microwave radiation through snow. Our goal in this study is to evaluate the prospects for reproducing measured brightness temperatures with the Microwave Emission Model of Layered Snowpacks when it is forced with objective measurements of snow specific surface area (S). This eliminates the need to treat the grain size as a free-fit parameter.

  15. Advertising Model of Residential Real Estate Object in Lithuania

    Directory of Open Access Journals (Sweden)

    Jelena Mazaj

    2012-07-01

    Full Text Available Since the year 2000, during the period of economic growth, the real estate market has been rapidly expanding. During this period, advertising of real estate objects was implemented using one set of similar channels (press advertising, Internet advertising, leaflets with contact information of real estate agents, and others); however, the start of the economic recession has intensified the competition in the market and forced companies to search for new advertising means or to diversify the advertising package. The article presents real estate property as a product, advertising as one of the marketing components, conclusions and suggestions based on the conducted surveys, and a model for advertising residential real estate objects. Article in Lithuanian

  16. Object tracking with double-dictionary appearance model

    Science.gov (United States)

    Lv, Li; Fan, Tanghuai; Sun, Zhen; Wang, Jun; Xu, Lizhong

    2016-08-01

    Dictionary learning has previously been applied to target tracking across images in video sequences. However, most trackers that use dictionary learning neglect to make optimal use of the representation coefficients to locate the target. This increases the possibility of losing the target in the presence of similar objects, or in case occlusion or rotation occurs. We propose an effective object-tracking method based on a double-dictionary appearance model under a particle filter framework. We employ a double dictionary by training template features to represent the target. This representation not only exploits the relationship between the candidate and target but also represents the target more accurately with minimal residual. We also introduce a simple and effective strategy to update the template to reduce the influence of occlusion, rotation, and drift. Experiments on challenging sequences showed that the proposed algorithm performs favorably against the state-of-the-art methods in terms of several comparative metrics.

  17. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    Science.gov (United States)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids and elastic, and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maßl2v]. We do not support the use of the code for military purposes.

  18. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    CERN Document Server

    Schäfer, Christoph M; Maindl, Thomas I; Speith, Roland; Scherrer, Samuel; Kley, Wilhelm

    2016-01-01

    Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. We have implemented the SPH equations to model gas, liquids and elastic, and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. We find an impressive performance gain using NVIDIA consumer devices compared to ou...
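
    As a minimal illustration of the SPH formalism such codes implement (in plain numpy rather than CUDA, and without gravity, elasticity or fragmentation), the sketch below evaluates the standard density summation with a cubic spline kernel by direct O(N^2) summation; the particle number and smoothing length are arbitrary.

        import numpy as np

        def cubic_spline_w(r, h):
            """Standard 3-D cubic spline kernel W(r, h) with support radius 2h."""
            q = r / h
            sigma = 1.0 / (np.pi * h**3)
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
            return sigma * w

        def sph_density(positions, masses, h):
            """rho_i = sum_j m_j W(|r_i - r_j|, h), by direct O(N^2) summation."""
            diff = positions[:, None, :] - positions[None, :, :]
            dist = np.linalg.norm(diff, axis=-1)
            return (masses[None, :] * cubic_spline_w(dist, h)).sum(axis=1)

        rng = np.random.default_rng(0)
        pos = rng.uniform(-1.0, 1.0, size=(500, 3))   # particles filling a 2x2x2 box
        m = np.full(500, 8.0 / 500.0)                 # total mass 8 over volume 8 -> rho ~ 1
        print(sph_density(pos, m, h=0.3).mean())      # close to 1, pulled low by box edges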

  19. Computational Data Modeling for Network-Constrained Moving Objects

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Speicys, L.; Kligys, A.

    2003-01-01

    Advances in wireless communications, positioning technology, and other hardware technologies combine to enable a range of applications that use a mobile user’s geo-spatial data to deliver online, location-enhanced services, often referred to as location-based services. Assuming that the service users are constrained to a transportation network, this paper develops data structures that model road networks, the mobile users, and stationary objects of interest. The proposed framework encompasses two supplementary road network representations, namely a two-dimensional representation and a graph...

  20. An Improved Direction Relation Detection Model for Spatial Objects

    Institute of Scientific and Technical Information of China (English)

    FENG Yucai; YI Baolin

    2004-01-01

    Direction is a common spatial concept that is used in our daily life and is frequently used as a selection condition in spatial queries. As a result, it is important for spatial databases to provide a mechanism for modeling and processing direction queries and reasoning. Based on the direction relation matrix, an inverted direction relation matrix and the concept of direction predominance are proposed to improve the detection of direction relations between objects. The direction predicates of spatial systems are also extended. These techniques improve the accuracy of direction queries and reasoning. Experiments show excellent efficiency and performance for direction queries.
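    For readers unfamiliar with direction relation matrices, the sketch below computes the classical 3x3 matrix between two axis-aligned bounding boxes; the inverted matrix proposed in the paper is obtained by swapping the roles of the reference and target objects. The function name and box encoding are illustrative, not the paper's definitions.

    ```python
    # 3x3 direction relation matrix between axis-aligned bounding boxes (sketch).
    # Rows correspond to the south/middle/north bands of the reference object,
    # columns to its west/middle/east bands; entries flag tiles the target intersects.
    def direction_matrix(ref, tgt):
        """ref, tgt: (xmin, ymin, xmax, ymax) bounding boxes."""
        rx0, ry0, rx1, ry1 = ref
        tx0, ty0, tx1, ty1 = tgt
        inf = float("inf")
        x_bands = [(-inf, rx0), (rx0, rx1), (rx1, inf)]
        y_bands = [(-inf, ry0), (ry0, ry1), (ry1, inf)]
        overlaps = lambda a0, a1, b0, b1: a0 < b1 and b0 < a1
        return [[1 if overlaps(tx0, tx1, x0, x1) and overlaps(ty0, ty1, y0, y1) else 0
                 for (x0, x1) in x_bands] for (y0, y1) in y_bands]

    # The "inverted" matrix of the abstract corresponds to direction_matrix(tgt, ref).
    ```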

  1. 12th International Conference on Computer Graphics Theory and Applications

    CERN Document Server

    2017-01-01

    The International Conference on Computer Graphics Theory and Applications aims at becoming a major point of contact between researchers, engineers and practitioners in Computer Graphics. The conference will be structured along five main tracks, covering different aspects related to Computer Graphics, from Modelling to Rendering, including Animation, Interactive Environments and Social Agents In Computer Graphics.

  2. Interactive computer graphics and its role in control system design of large space structures

    Science.gov (United States)

    Reddy, A. S. S. R.

    1985-01-01

    This paper attempts to show the relevance of interactive computer graphics in the design of control systems that maintain the attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model and proceeding through modeling the dynamics, modal analysis, and the control system design methodology, are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures such as free-free beams and free-free plates are used to demonstrate the complexity of the control system design and the effectiveness of the interactive computer graphics.

  3. A knowledge discovery object model API for Java

    Directory of Open Access Journals (Sweden)

    Jones Steven JM

    2003-10-01

    Background: Biological data resources have become heterogeneous and derive from multiple sources. This introduces challenges in the management and utilization of this data in software development. Although efforts are underway to create a standard format for the transmission and storage of biological data, this objective has yet to be fully realized. Results: This work describes an application programming interface (API) that provides a framework for developing an effective biological knowledge ontology for Java-based software projects. The API provides a robust framework for the data acquisition and management needs of an ontology implementation. In addition, the API contains classes to assist in creating GUIs to represent this data visually. Conclusions: The Knowledge Discovery Object Model (KDOM) API is particularly useful for medium to large applications, or for a number of smaller software projects with common characteristics or objectives. KDOM can be coupled effectively with other biologically relevant APIs and classes. Source code, libraries, documentation and examples are available at http://www.bcgsc.ca/bioinfo/software.

  4. Interactive object modelling based on piecewise planar surface patches.

    Science.gov (United States)

    Prankl, Johann; Zillich, Michael; Vincze, Markus

    2013-06-01

    Detecting elements such as planes in 3D is essential to describe objects for applications such as robotics and augmented reality. While plane estimation is well studied, table-top scenes exhibit a large number of planes and methods often lock onto a dominant plane or do not estimate 3D object structure but only homographies of individual planes. In this paper we introduce MDL to the problem of incrementally detecting multiple planar patches in a scene using tracked interest points in image sequences. Planar patches are reconstructed and stored in a keyframe-based graph structure. In case different motions occur, separate object hypotheses are modelled from currently visible patches and patches seen in previous frames. We evaluate our approach on a standard data set published by the Visual Geometry Group at the University of Oxford [24] and on our own data set containing table-top scenes. Results indicate that our approach significantly improves over the state-of-the-art algorithms.

  5. Interactive object modelling based on piecewise planar surface patches

    Science.gov (United States)

    Prankl, Johann; Zillich, Michael; Vincze, Markus

    2013-01-01

    Detecting elements such as planes in 3D is essential to describe objects for applications such as robotics and augmented reality. While plane estimation is well studied, table-top scenes exhibit a large number of planes and methods often lock onto a dominant plane or do not estimate 3D object structure but only homographies of individual planes. In this paper we introduce MDL to the problem of incrementally detecting multiple planar patches in a scene using tracked interest points in image sequences. Planar patches are reconstructed and stored in a keyframe-based graph structure. In case different motions occur, separate object hypotheses are modelled from currently visible patches and patches seen in previous frames. We evaluate our approach on a standard data set published by the Visual Geometry Group at the University of Oxford [24] and on our own data set containing table-top scenes. Results indicate that our approach significantly improves over the state-of-the-art algorithms. PMID:24511219

  6. Graphic Turbulence Guidance

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Forecast turbulence hazards identified by the Graphical Turbulence Guidance algorithm. The Graphical Turbulence Guidance product depicts mid-level and upper-level...

  7. Repellency Awareness Graphic

    Science.gov (United States)

    Companies can apply to use the voluntary new graphic on product labels of skin-applied insect repellents. This graphic is intended to help consumers easily identify the protection time for mosquitoes and ticks and select appropriately.

  8. Graphical Turbulence Guidance - Composite

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Forecast turbulence hazards identified by the Graphical Turbulence Guidance algorithm. The Graphical Turbulence Guidance product depicts mid-level and upper-level...

  9. Mathematical model of innovative sustainability “green” construction object

    Directory of Open Access Journals (Sweden)

    Slesarev Michail

    2016-01-01

    The paper addresses the sustainability of “green” innovative processes in the interaction between construction activities and the environment. The problem facing today’s construction science is stated as the comprehensive integration and automation of natural and artificial intelligence within systems that ensure the environmental safety of construction, based on the innovative sustainability of “green” technologies in the living environment and on “green” innovative products. The suggested solution should formalize sustainability models and methods for interpreting mathematical optimization problems that correspond to the management of environmentally driven innovative processes, adapted to the construction of “green” objects, “green” construction technologies, and “green” innovative materials and structures.

  10. Models for predicting objective function weights in prostate cancer IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, Ontario M5S 3G8 (Canada); Craig, Tim [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University Avenue, Toronto, Ontario M5T 2M9, Canada and Department of Radiation Oncology, University of Toronto, 148 - 150 College Street, Toronto, Ontario M5S 3S2 (Canada); Sharpe, Michael B. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University Avenue, Toronto, Ontario M5T 2M9 (Canada); Department of Radiation Oncology, University of Toronto, 148 - 150 College Street, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, 124 - 100 College Street, Toronto, Ontario M5G 1P5 (Canada); Chan, Timothy C. Y. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, 124 - 100 College Street, Toronto, Ontario M5G 1P5 (Canada)

    2015-04-15

    Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR
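    As a hedged sketch of the weighted K-nearest-neighbour predictor described above: given training cases with geometry features (e.g. OV, OVSR, OVSB) and their inverse-optimization weights, a new patient's weights are predicted by inverse-distance averaging over the k closest cases. The data layout and weighting scheme here are assumptions, not the authors' exact implementation.

    ```python
    # Weighted K-nearest-neighbour prediction of objective-function weights from
    # patient geometry features (sketch; feature set and weighting are assumptions).
    import numpy as np

    def knn_predict_weights(train_features, train_weights, query, k=5):
        """train_features: (N, d) geometry features (e.g. OV, OVSR, OVSB);
        train_weights: (N, m) inverse-optimized objective weights; query: (d,)."""
        dist = np.linalg.norm(train_features - query, axis=1)
        idx = np.argsort(dist)[:k]
        w = 1.0 / (dist[idx] + 1e-9)                  # inverse-distance weighting
        return (train_weights[idx] * w[:, None]).sum(axis=0) / w.sum()
    ```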

  11. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    Science.gov (United States)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature and dew point, as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network. Objective statistics will give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.
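    The verification statistics referred to above reduce, at their core, to comparing paired forecast and observed values; a minimal sketch of bias, RMSE, and MAE is shown below. The actual analysis is of course stratified by initialization time, forecast hour, and sensor.

    ```python
    # Basic objective verification statistics for paired forecast/observation samples.
    import numpy as np

    def verify(forecast, observed):
        forecast, observed = np.asarray(forecast, float), np.asarray(observed, float)
        error = forecast - observed
        return {"bias": error.mean(),                    # mean error
                "rmse": np.sqrt((error ** 2).mean()),    # root-mean-square error
                "mae": np.abs(error).mean()}             # mean absolute error
    ```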

  12. Application of a mathematical model for the minimization of costs in a micro-company of the graphic sector

    Directory of Open Access Journals (Sweden)

    Paulo Cesar Chagas Rodrigues

    2017-07-01

    Supply chain management, postponement, and demand management are operations of strategic importance for the economic success of organizations, in times of economic crisis or not. The objective of this article is to analyze the influence of a mathematical model focused on the management of raw material stocks in a micro-enterprise with seasonal demand. The research method adopted was of an applied nature, with a quantitative approach and an exploratory and descriptive objective. The technical procedures adopted were a bibliographical survey, documentary analysis, and mathematical modeling. The development of mathematical models for solving inventory management problems may allow managers to observe deviations in trading methods, as well as to support rapid decisions in the face of unforeseen market or economic variability.

  13. Graphics gems II

    CERN Document Server

    Arvo, James

    1991-01-01

    Graphics Gems II is a collection of articles shared by a diverse group of people that reflect ideas and approaches in graphics programming which can benefit other computer graphics programmers.This volume presents techniques for doing well-known graphics operations faster or easier. The book contains chapters devoted to topics on two-dimensional and three-dimensional geometry and algorithms, image processing, frame buffer techniques, and ray tracing techniques. The radiosity approach, matrix techniques, and numerical and programming techniques are likewise discussed.Graphics artists and comput

  14. Objectivity in the non-Markovian spin-boson model

    Science.gov (United States)

    Lampo, Aniello; Tuziemski, Jan; Lewenstein, Maciej; Korbicz, Jarosław K.

    2017-07-01

    Objectivity constitutes one of the main features of the macroscopic classical world. An important aspect of the quantum-to-classical transition issue is to explain how such a property arises from the microscopic quantum theory. Recently, within the framework of open quantum systems, there has been proposed such a mechanism in terms of the so-called spectrum broadcast structures. These are multipartite quantum states of the system of interest and a part of its environment, assumed to be under an observation. This approach requires a departure from the standard open quantum systems methods, as the environment cannot be completely neglected. In the present paper we study the emergence of such a state structure in one of the canonical models of the condensed-matter theory: the spin-boson model, describing the dynamics of a two-level system coupled to an environment made up by a large number of harmonic oscillators. We pay much attention to the behavior of the model in the non-Markovian regime, in order to provide a testbed to analyze how the non-Markovian nature of the evolution affects the surfacing of a spectrum broadcast structure.

  15. Spatial object model[l]ing in fuzzy topological spaces : with applications to land cover change

    NARCIS (Netherlands)

    Tang, Xinming

    2004-01-01

    The central topic of this thesis focuses on the accommodation of fuzzy spatial objects in a GIS. Several issues are discussed theoretically and practically, including the definition of fuzzy spatial objects, the topological relations between them, the modeling of fuzzy spatial objects, the generatio

  16. Spatial Visualization Research and Theories: Their Importance in the Development of an Engineering and Technical Design Graphics Curriculum Model.

    Science.gov (United States)

    Miller, Craig L.; Bertoline, Gary R.

    1991-01-01

    An overview that gives an introduction to the theories, terms, concepts, and prior research conducted on visualization is presented. This information is to be used as a basis for developing spatial research studies that lend support to the theory that the engineering and technical design graphics curriculum is important in the development of…

  17. Multi-Objective Model Checking of Markov Decision Processes

    CERN Document Server

    Etessami, Kousha; Vardi, Moshe Y; Yannakakis, Mihalis

    2008-01-01

    We study and provide efficient algorithms for multi-objective model checking problems for Markov Decision Processes (MDPs). Given an MDP, $M$, and given multiple linear-time ($\\omega$-regular or LTL) properties $\\varphi_i$, and probabilities $r_i \\in [0,1]$, $i=1,...,k$, we ask whether there exists a strategy $\\sigma$ for the controller such that, for all $i$, the probability that a trajectory of $M$ controlled by $\\sigma$ satisfies $\\varphi_i$ is at least $r_i$. We provide an algorithm that decides whether there exists such a strategy and if so produces it, and which runs in time polynomial in the size of the MDP. Such a strategy may require the use of both randomization and memory. We also consider more general multi-objective $\\omega$-regular queries, which we motivate with an application to assume-guarantee compositional reasoning for probabilistic systems. Note that there can be trade-offs between different properties: satisfying property $\\varphi_1$ with high probability may necessitate satisfying $\\var...

  18. Enhanced Graphics for Extended Scale Range

    Science.gov (United States)

    Hanson, Andrew J.; Chi-Wing Fu, Philip

    2012-01-01

    Enhanced Graphics for Extended Scale Range is a computer program for rendering fly-through views of scene models that include visible objects differing in size by large orders of magnitude. An example would be a scene showing a person in a park at night with the moon, stars, and galaxies in the background sky. Prior graphical computer programs exhibit arithmetic and other anomalies when rendering scenes containing objects that differ enormously in scale and distance from the viewer. The present program dynamically repartitions distance scales of objects in a scene during rendering to eliminate almost all such anomalies in a way compatible with implementation in other software and in hardware accelerators. By assigning depth ranges corresponding to rendering precision requirements, either automatically or under program control, this program spaces out object scales to match the precision requirements of the rendering arithmetic. This action includes an intelligent partition of the depth buffer ranges to avoid known anomalies from this source. The program is written in C++, using OpenGL, GLUT, and GLUI standard libraries, and nVidia GEForce Vertex Shader extensions. The program has been shown to work on several computers running UNIX and Windows operating systems.
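    A simplified illustration of the depth-repartitioning idea (not the program's actual algorithm): split the view range into consecutive depth bands whose far/near ratio is bounded by the precision of the depth buffer, then render the bands back to front. The ratio threshold below is an arbitrary placeholder.

    ```python
    # Split [z_near, z_far] into depth bands whose far/near ratio is bounded, so each
    # band can be rendered with adequate depth-buffer precision (back to front).
    def depth_bands(z_near, z_far, max_ratio=1.0e4):
        bands, near = [], z_near
        while near * max_ratio < z_far:
            bands.append((near, near * max_ratio))
            near *= max_ratio
        bands.append((near, z_far))
        return list(reversed(bands))   # render the most distant band first
    ```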

  19. Computational Data Modeling for Network-Constrained Moving Objects

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Speicys, L.; Kligys, A.

    2003-01-01

    Advances in wireless communications, positioning technology, and other hardware technologies combine to enable a range of applications that use a mobile user’s geo-spatial data to deliver online, location-enhanced services, often referred to as location-based services. Assuming that the service users are constrained to a transportation network, this paper develops data structures that model road networks, the mobile users, and stationary objects of interest. The proposed framework encompasses two supplementary road network representations, namely a two-dimensional representation and a graph representation. These capture aspects of the problem domain that are required in order to support the querying that underlies the envisioned location-based services.
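    A minimal sketch of the graph-based representation: intersections are vertices, road segments are edges with a length, and both moving and stationary objects are located by an (edge, offset) pair. The class and field names are illustrative, not the paper's data structures.

    ```python
    # Graph representation of a road network with network-constrained object positions.
    from dataclasses import dataclass, field

    @dataclass
    class RoadNetwork:
        edges: dict = field(default_factory=dict)     # edge_id -> (from_vertex, to_vertex, length)
        objects: dict = field(default_factory=dict)   # object_id -> (edge_id, offset along edge)

        def add_edge(self, edge_id, u, v, length):
            self.edges[edge_id] = (u, v, length)

        def place(self, object_id, edge_id, offset):
            # An object's position is expressed relative to the network, not in (x, y).
            assert 0.0 <= offset <= self.edges[edge_id][2]
            self.objects[object_id] = (edge_id, offset)
    ```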

  20. The Systems Biology Graphical Notation.

    Science.gov (United States)

    Le Novère, Nicolas; Hucka, Michael; Mi, Huaiyu; Moodie, Stuart; Schreiber, Falk; Sorokin, Anatoly; Demir, Emek; Wegner, Katja; Aladjem, Mirit I; Wimalaratne, Sarala M; Bergman, Frank T; Gauges, Ralph; Ghazal, Peter; Kawaji, Hideya; Li, Lu; Matsuoka, Yukiko; Villéger, Alice; Boyd, Sarah E; Calzone, Laurence; Courtot, Melanie; Dogrusoz, Ugur; Freeman, Tom C; Funahashi, Akira; Ghosh, Samik; Jouraku, Akiya; Kim, Sohyoung; Kolpakov, Fedor; Luna, Augustin; Sahle, Sven; Schmidt, Esther; Watterson, Steven; Wu, Guanming; Goryanin, Igor; Kell, Douglas B; Sander, Chris; Sauro, Herbert; Snoep, Jacky L; Kohn, Kurt; Kitano, Hiroaki

    2009-08-01

    Circuit diagrams and Unified Modeling Language diagrams are just two examples of standard visual languages that help accelerate work by promoting regularity, removing ambiguity and enabling software tool support for communication of complex information. Ironically, despite having one of the highest ratios of graphical to textual information, biology still lacks standard graphical notations. The recent deluge of biological knowledge makes addressing this deficit a pressing concern. Toward this goal, we present the Systems Biology Graphical Notation (SBGN), a visual language developed by a community of biochemists, modelers and computer scientists. SBGN consists of three complementary languages: process diagram, entity relationship diagram and activity flow diagram. Together they enable scientists to represent networks of biochemical interactions in a standard, unambiguous way. We believe that SBGN will foster efficient and accurate representation, visualization, storage, exchange and reuse of information on all kinds of biological knowledge, from gene regulation, to metabolism, to cellular signaling.

  1. Relativistic Hydrodynamics on Graphic Cards

    CERN Document Server

    Gerhard, Jochen; Bleicher, Marcus

    2012-01-01

    We show how to accelerate relativistic hydrodynamics simulations using graphics cards (graphics processing units, GPUs). These improvements are of high relevance, e.g., to the field of high-energy nucleus-nucleus collisions at RHIC and LHC, where (ideal and dissipative) relativistic hydrodynamics is used to calculate the evolution of hot and dense QCD matter. The results reported here are based on the Sharp And Smooth Transport Algorithm (SHASTA), which is employed in many hydrodynamical models and hybrid simulation packages, e.g. the Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). We have redesigned the SHASTA using the OpenCL computing framework to work on accelerators like graphics processing units (GPUs) as well as on multi-core processors. With the redesign of the algorithm the hydrodynamic calculations have been accelerated by a factor of 160, allowing for event-by-event calculations and better statistics in hybrid calculations.

  2. DESIGN OF OBJECT-ORIENTED DEBUGGER MODEL BY USING UNIFIED MODELING LANGUAGE

    Directory of Open Access Journals (Sweden)

    Nor Fazlida Mohd Sani

    2013-01-01

    Debugging a computer program is a complex cognitive activity and remains a common concern in programming tasks: the difficulty lies in understanding what an error is and how to resolve it. In object-oriented programming, the added difficulty is understanding the object-oriented concepts together with the programming logic. If the programming logic is incorrect, the code contains a logic error, which can lead to high maintenance costs. A logic error is a bug that causes a program to operate incorrectly without terminating or crashing; it produces unintended output or behavior other than what is expected. The method used to develop the proposed Object-Oriented Debugger model is the Unified Modeling Language (UML), which is well suited to designing a debugger that will be developed in an object-oriented programming environment. The model captures the characteristics of the system using UML notations during design and implementation, and it also improves the readability of the documentation for maintenance purposes. The model was developed using the Unified Approach methodology, which consists of two parts: Object-Oriented Analysis (OOA) and Object-Oriented Design (OOD). The resulting model captures the structure and behavior of the Object-Oriented Debugger through UML diagrams.

  3. An objective evaluation of a segmented foot model.

    Science.gov (United States)

    Okita, Nori; Meyers, Steven A; Challis, John H; Sharkey, Neil A

    2009-07-01

    Segmented foot and ankle models divide the foot into multiple segments in order to obtain more meaningful information about its functional behavior in health and disease. The goal of this research was to objectively evaluate the fidelity of a generalized three-segment foot and ankle model defined using externally mounted markers. An established apparatus that reproduces the kinematics and kinetics of gait in cadaver lower extremities was used to independently examine the validity of the rigid body assumption and the magnitude of soft tissue artifact induced by skin-mounted markers. Stance phase simulations were conducted on ten donated limbs while recording the three-dimensional kinematic trajectories of skin-mounted and then bone-mounted marker constructs. Segment kinematics were compared to underlying bone kinematics to examine the rigid body assumption. Virtual markers were calculated from the bone-mounted marker set and then compared to the skin-mounted markers to examine soft tissue artifact. The shank and hindfoot segments behaved as rigid bodies. The forefoot segment violated the rigid body assumption, as evidenced by significant differences between motions of the first metatarsal and the forefoot segment, and relative motion between the first and fifth metatarsals. Motion vectors of the external skin markers relative to their virtual counterparts were no more than 3 mm in each direction, and 3-7 mm overall. Artifactual marker motion had mild effects on inter-segmental kinematics. Despite these errors, the segmented model appeared to perform reasonably well overall. The data presented here enable more informed interpretations of clinical findings using the segmented model approach.
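    The soft-tissue-artifact measure used in the evaluation reduces to the displacement of each skin-mounted marker from its bone-anchored virtual counterpart; a sketch of that computation is shown below (the array shapes are assumptions).

    ```python
    # Soft-tissue artifact: displacement of skin-mounted markers from their
    # bone-anchored virtual counterparts over the stance phase.
    import numpy as np

    def soft_tissue_artifact(skin_markers, virtual_markers):
        """Both arrays have shape (frames, markers, 3); returns per-marker mean and peak error."""
        diff = np.linalg.norm(skin_markers - virtual_markers, axis=2)   # (frames, markers)
        return diff.mean(axis=0), diff.max(axis=0)
    ```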

  4. Mathematics and Computer Graphic Design Arts

    Institute of Scientific and Technical Information of China (English)

    徐亚非

    2001-01-01

    The relationship between art and mathematics is very close, and computer graphic design is based on digital methodology. The paper reveals the mathematical background behind graphic design through the example of computer-aided cubic modeling and mathematical exchange methodology. Furthermore, striking artistic effects can be obtained if computer graphic designers pay more attention to probability and use random numbers and fractal operations in their design activities. Finally, the author also discusses the bidirectional relationship between art and mathematics.

  5. MODELING OF CONVECTIVE FLOWS IN PNEUMOBASED OBJECTS. Part 1

    Directory of Open Access Journals (Sweden)

    B. M. Khrustalyov

    2014-01-01

    A computer modeling process for three-dimensional forced convection, proceeding from the computation of the thermodynamic parameters of pneumo-based (air-supported) buildings, is presented. A mathematical model and numerical method for computing the temperature and velocity fields and the pressure profile in the object are developed using the SolidWorks package and grid methods in dedicated software. The Navier–Stokes, Clapeyron–Mendeleev, continuity, and thermal-conductivity equations are used to calculate the parameters in a building with four supply and exhaust channels. The differential equations are represented by systems of algebraic equations, the initial-boundary conditions are replaced by difference conditions for mesh functions, and their solutions are obtained by algebraic operations. The article demonstrates that in pneumo-based buildings the convective and heat flows near surfaces have the same structure as in unlimited space, but in single and multiple shells (envelopes) circulation lines appear, whose geometrical sizes depend on the thermophysical characteristics of the gas (air) in the envelopes and on the radiative interaction of the heated envelope surfaces with the sky, the earth surface, and neighboring buildings. Field surveys of pneumo-based buildings of different purposes were carried out in Minsk and in other cities of Belarus and Russia, including measurements of the temperature fields of the external and internal surfaces of the air envelopes, relative humidity, thermal (heat) flows, radiation characteristics, and others. The results of the research are illustrated with diagrams of temperature, velocity, density, and pressure as functions of coordinates and time.

  6. A control model for object virtualization in supply chain management

    NARCIS (Netherlands)

    Verdouw, C.N.; Beulens, A.J.M.; Reijers, H.A.; Vorst, van der J.G.A.J.

    2015-01-01

    Due to the emergence of the Internet of Things, supply chain control can increasingly be based on virtual objects instead of on the direct observation of physical objects. Object virtualization allows the decoupling of control activities from the handling and observing of physical products and resou

  7. Enhanced Visual-Attention Model for Perceptually Improved 3D Object Modeling in Virtual Environments

    Science.gov (United States)

    Chagnon-Forget, Maude; Rouhafzay, Ghazal; Cretu, Ana-Maria; Bouchard, Stéphane

    2016-12-01

    Three-dimensional object modeling and interactive virtual environment applications require accurate, but compact object models that ensure real-time rendering capabilities. In this context, the paper proposes a 3D modeling framework employing visual attention characteristics in order to obtain compact models that are more adapted to human visual capabilities. An enhanced computational visual attention model with additional saliency channels, such as curvature, symmetry, contrast and entropy, is initially employed to detect points of interest over the surface of a 3D object. The impact of the use of these supplementary channels is experimentally evaluated. The regions identified as salient by the visual attention model are preserved in a selectively-simplified model obtained using an adapted version of the QSlim algorithm. The resulting model is characterized by a higher density of points in the salient regions, therefore ensuring a higher perceived quality, while at the same time ensuring a less complex and more compact representation for the object. The quality of the resulting models is compared with the performance of other interest point detectors incorporated in a similar manner in the simplification algorithm. The proposed solution results overall in higher quality models, especially at lower resolutions. As an example of application, the selectively-densified models are included in a continuous multiple level of detail (LOD) modeling framework, in which an original neural-network solution selects the appropriate size and resolution of an object.
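    A hedged sketch of the saliency-guided selection step: per-vertex saliency channels (curvature, symmetry, contrast, entropy) are combined into a single score and the most salient fraction of vertices is protected from decimation. The channel weighting and the interface to the QSlim-style simplifier are placeholders, not the paper's implementation.

    ```python
    # Combine per-vertex saliency channels and mark the most salient vertices as
    # "protected" so a QSlim-style simplifier keeps a higher point density there.
    import numpy as np

    def protected_vertices(channels, channel_weights, keep_fraction=0.2):
        """channels: dict name -> (V,) per-vertex scores (curvature, symmetry, contrast, entropy...)."""
        saliency = sum(channel_weights[name] * np.asarray(values)
                       for name, values in channels.items())
        k = max(1, int(keep_fraction * saliency.size))
        return np.argsort(saliency)[-k:]   # indices of vertices to preserve
    ```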

  8. Objective Evaluation of Sensor Web Modeling and Data System Architectures

    Science.gov (United States)

    Seablom, M. S.; Atlas, R. M.; Ardizzone, J.; Kemp, E. M.; Talabac, S.

    2013-12-01

    We discuss the recent development of an end-to-end simulator designed to quantitatively assess the scientific value of incorporating model- and event-driven "sensor web" capabilities into future NASA Earth Science missions. The intent is to provide an objective analysis tool for performing engineering and scientific trade studies in which new technologies are introduced. In the case study presented here we focus on meteorological applications in which a numerical model is used to intelligently schedule data collection by space-based assets. Sensor web observing systems that enable dynamic targeting by various observing platforms have the potential to significantly improve our ability to monitor, understand, and predict the evolution of rapidly evolving, transient, or variable meteorological events. The use case focuses on landfalling hurricanes and was selected due to the obvious societal impact and the ongoing need to improve warning times. Although hurricane track prediction has improved over the past several decades, further improvement is necessary in the prediction of hurricane intensity. We selected a combination of future observing platforms to apply sensor web measurement techniques: global 3D lidar winds, next-generation scatterometer ocean vector winds, and high resolution cloud motion vectors from GOES-R. Targeting of the assets by a numerical model would allow the spacecraft to change its attitude by performing a roll maneuver to enable off-nadir measurements to be acquired. In this study, synthetic measurements were derived through Observing System Simulation Experiments (OSSEs) and enabled in part through the Doppler Lidar Simulation Model developed by Simpson Weather Associates. We describe the capabilities of the simulator through three different sensor web configurations of the wind lidar: winds obtained from a nominal "survey mode" operation, winds obtained with a reduced duty cycle of the lidar (designed for preserving the life of the instrument

  9. Handling Emergency Management in [an] Object Oriented Modeling Environment

    Science.gov (United States)

    Tokgoz, Berna Eren; Cakir, Volkan; Gheorghe, Adrian V.

    2010-01-01

    It has been understood that protection of a nation from extreme disasters is a challenging task. Impacts of extreme disasters on a nation's critical infrastructures, economy and society could be devastating. A protection plan itself would not be sufficient when a disaster strikes. Hence, there is a need for a holistic approach to establish more resilient infrastructures to withstand extreme disasters. A resilient infrastructure can be defined as a system or facility that is able to withstand damage, but if affected, can be readily and cost-effectively restored. The key issue to establish resilient infrastructures is to incorporate existing protection plans with comprehensive preparedness actions to respond, recover and restore as quickly as possible, and to minimize extreme disaster impacts. Although national organizations will respond to a disaster, extreme disasters need to be handled mostly by local emergency management departments. Since emergency management departments have to deal with complex systems, they have to have a manageable plan and efficient organizational structures to coordinate all these systems. A strong organizational structure is the key in responding fast before and during disasters, and recovering quickly after disasters. In this study, the entire emergency management is viewed as an enterprise and modelled through enterprise management approach. Managing an enterprise or a large complex system is a very challenging task. It is critical for an enterprise to respond to challenges in a timely manner with quick decision making. This study addresses the problem of handling emergency management at regional level in an object oriented modelling environment developed by use of TopEase software. Emergency Operation Plan of the City of Hampton, Virginia, has been incorporated into TopEase for analysis. The methodology used in this study has been supported by a case study on critical infrastructure resiliency in Hampton Roads.

  10. Intelligent Computer Graphics 2012

    CERN Document Server

    Miaoulis, Georgios

    2013-01-01

    In Computer Graphics, the use of intelligent techniques started more recently than in other research areas. However, during these last two decades, the use of intelligent Computer Graphics techniques is growing up year after year and more and more interesting techniques are presented in this area.   The purpose of this volume is to present current work of the Intelligent Computer Graphics community, a community growing up year after year. This volume is a kind of continuation of the previously published Springer volumes “Artificial Intelligence Techniques for Computer Graphics” (2008), “Intelligent Computer Graphics 2009” (2009), “Intelligent Computer Graphics 2010” (2010) and “Intelligent Computer Graphics 2011” (2011).   Usually, this kind of volume contains, every year, selected extended papers from the corresponding 3IA Conference of the year. However, the current volume is made from directly reviewed and selected papers, submitted for publication in the volume “Intelligent Computer Gr...

  11. Mixed scale joint graphical lasso

    NARCIS (Netherlands)

    Pircalabelu, E.; Claeskens, G.; Waldorp, L.J.

    2016-01-01

    We have developed a method for estimating brain networks from fMRI datasets that have not all been measured using the same set of brain regions. Some of the coarse scale regions have been split in smaller subregions. The proposed penalized estimation procedure selects undirected graphical models wit
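    For a single dataset at a single scale, the underlying estimator is the graphical lasso; a sketch using scikit-learn is shown below. The joint, mixed-scale penalties that tie several datasets together, which are the contribution of the paper, are not reproduced here.

    ```python
    # Sparse precision-matrix estimation for one fMRI dataset with the graphical lasso.
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    def estimate_network(time_series, alpha=0.05):
        """time_series: (T, regions). Nonzero off-diagonal precision entries define edges."""
        model = GraphicalLasso(alpha=alpha).fit(time_series)
        precision = model.precision_
        edges = np.abs(precision) > 1e-8
        np.fill_diagonal(edges, False)
        return precision, edges
    ```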

  12. Multi-objective compared to single-objective optimization with application to model validation and uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Schulze-Riegert, R.; Krosche, M.; Stekolschikov, K. [Scandpower Petroleum Technology GmbH, Hamburg (Germany); Fahimuddin, A. [Technische Univ. Braunschweig (Germany)

    2007-09-13

    History Matching in Reservoir Simulation, well location and production optimization etc. is generally a multi-objective optimization problem. The problem statement of history matching for a realistic field case includes many field and well measurements in time and type, e.g. pressure measurements, fluid rates, events such as water and gas break-throughs, etc. Uncertainty parameters modified as part of the history matching process have varying impact on the improvement of the match criteria. Competing match criteria often reduce the likelihood of finding an acceptable history match. It is an engineering challenge in manual history matching processes to identify competing objectives and to implement the changes required in the simulation model. In production optimization or scenario optimization the focus on one key optimization criterion such as NPV limits the identification of alternatives and potential opportunities, since multiple objectives are summarized in a predefined global objective formulation. Previous works primarily focus on a specific optimization method. Few works actually concentrate on the objective formulation and multi-objective optimization schemes have not yet been applied to reservoir simulations. This paper presents a multi-objective optimization approach applicable to reservoir simulation. It addresses the problem of multi-objective criteria in a history matching study and presents analysis techniques identifying competing match criteria. A Pareto-Optimizer is discussed and the implementation of that multi-objective optimization scheme is applied to a case study. Results are compared to a single-objective optimization method. (orig.)
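    Independent of the specific optimizer, any Pareto-based scheme of the kind discussed above needs a non-dominated filter over candidate solutions; a minimal sketch (minimization of all objectives assumed) is given below.

    ```python
    # Non-dominated (Pareto) filter over candidate solutions; all objectives minimized.
    def pareto_front(points):
        """points: list of objective tuples; returns indices of non-dominated points."""
        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
        return [i for i, p in enumerate(points)
                if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
    ```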

  13. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2008-01-01

    We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.

  14. Practical Implementation of a Graphics Turing Test

    DEFF Research Database (Denmark)

    Borg, Mathias; Johansen, Stine Schmieg; Thomsen, Dennis Lundgaard

    2012-01-01

    We present a practical implementation of a variation of the Turing Test for realistic computer graphics. The test determines whether virtual representations of objects appear as real as genuine objects. Two experiments were conducted wherein a real object and a similar virtual object is presented to test subjects under specific restrictions. A criterion for passing the test is presented based on the probability for the subjects to be unable to recognise a computer generated object as virtual. The experiments show that the specific setup can be used to determine the quality of virtual reality graphics. Based on the results from these experiments, future versions of the Graphics Turing Test could ease the restrictions currently necessary in order to test object telepresence under more general conditions. Furthermore, the test could be used to determine the minimum requirements to achieve object...
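    One way to formalize such a pass criterion (a sketch, not necessarily the authors' statistic): model correct real-versus-virtual identifications as Binomial(n, 0.5) and declare the test passed when recognition is not significantly above chance.

    ```python
    # One-sided binomial test of H0: recognition probability = 0.5 (pure guessing).
    from math import comb

    def passes_turing_test(correct, trials, alpha=0.05):
        """Pass when correct identifications are not significantly above chance."""
        p_value = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials
        return p_value > alpha
    ```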

  15. The computer graphics metafile

    CERN Document Server

    Henderson, LR; Shepherd, B; Arnold, D B

    1990-01-01

    The Computer Graphics Metafile deals with the Computer Graphics Metafile (CGM) standard and covers topics ranging from the structure and contents of a metafile to CGM functionality, metafile elements, and real-world applications of CGM. Binary Encoding, Character Encoding, application profiles, and implementations are also discussed. This book is comprised of 18 chapters divided into five sections and begins with an overview of the CGM standard and how it can meet some of the requirements for storage of graphical data within a graphics system or application environment. The reader is then intr

  16. The computer graphics interface

    CERN Document Server

    Steinbrugge Chauveau, Karla; Niles Reed, Theodore; Shepherd, B

    2014-01-01

    The Computer Graphics Interface provides a concise discussion of computer graphics interface (CGI) standards. The title is comprised of seven chapters that cover the concepts of the CGI standard. Figures and examples are also included. The first chapter provides a general overview of CGI; this chapter covers graphics standards, functional specifications, and syntactic interfaces. Next, the book discusses the basic concepts of CGI, such as inquiry, profiles, and registration. The third chapter covers the CGI concepts and functions, while the fourth chapter deals with the concept of graphic obje

  17. Evolutionary optimization of a hierarchical object recognition model.

    Science.gov (United States)

    Schneider, Georg; Wersing, Heiko; Sendhoff, Bernhard; Körner, Edgar

    2005-06-01

    A major problem in designing artificial neural networks is the proper choice of the network architecture. Especially for vision networks classifying three-dimensional (3-D) objects this problem is very challenging, as these networks are necessarily large and therefore the search space for defining the needed networks is of a very high dimensionality. This strongly increases the chances of obtaining only suboptimal structures from standard optimization algorithms. We tackle this problem in two ways. First, we use biologically inspired hierarchical vision models to narrow the space of possible architectures and to reduce the dimensionality of the search space. Second, we employ evolutionary optimization techniques to determine optimal features and nonlinearities of the visual hierarchy. Here, we especially focus on higher order complex features in higher hierarchical stages. We compare two different approaches to perform an evolutionary optimization of these features. In the first setting, we directly code the features into the genome. In the second setting, in analogy to an ontogenetical development process, we suggest the new method of an indirect coding of the features via an unsupervised learning process, which is embedded into the evolutionary optimization. In both cases the processing nonlinearities are encoded directly into the genome and are thus subject to optimization. The fitness of the individuals for the evolutionary selection process is computed by measuring the network classification performance on a benchmark image database. Here, we use a nearest-neighbor classification approach, based on the hierarchical feature output. We compare the found solutions with respect to their ability to generalize. We differentiate between a first- and a second-order generalization. The first-order generalization denotes how well the vision system, after evolutionary optimization of the features and nonlinearities using a database A, can classify previously unseen test
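    The evolutionary outer loop itself is conventional; the sketch below shows a minimal real-valued variant with truncation selection and Gaussian mutation, with the fitness function (network classification performance on the benchmark database, in the paper) left as a caller-supplied placeholder.

    ```python
    # Minimal evolutionary loop: truncation selection plus Gaussian mutation.
    import numpy as np

    def evolve(fitness, genome_size, pop_size=20, generations=50, sigma=0.1, rng=None):
        """fitness: callable mapping a genome (1D array) to a score to be maximized."""
        rng = rng or np.random.default_rng()
        population = rng.normal(size=(pop_size, genome_size))
        for _ in range(generations):
            scores = np.array([fitness(g) for g in population])
            parents = population[np.argsort(scores)[-pop_size // 2:]]    # keep the best half
            children = parents + rng.normal(0.0, sigma, parents.shape)   # mutate copies
            population = np.vstack([parents, children])
        return max(population, key=fitness)
    ```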

  18. Synthesising Graphical Theories

    CERN Document Server

    Kissinger, Aleks

    2012-01-01

    In recent years, diagrammatic languages have been shown to be a powerful and expressive tool for reasoning about physical, logical, and semantic processes represented as morphisms in a monoidal category. In particular, categorical quantum mechanics, or "Quantum Picturalism", aims to turn concrete features of quantum theory into abstract structural properties, expressed in the form of diagrammatic identities. One way we search for these properties is to start with a concrete model (e.g. a set of linear maps or finite relations) and start composing generators into diagrams and looking for graphical identities. Naively, we could automate this procedure by enumerating all diagrams up to a given size and check for equalities, but this is intractable in practice because it produces far too many equations. Luckily, many of these identities are not primitive, but rather derivable from simpler ones. In 2010, Johansson, Dixon, and Bundy developed a technique called conjecture synthesis for automatically generating conj...

  19. Modeling of equilibrium hollow objects stabilized by electrostatics.

    Science.gov (United States)

    Mani, Ethayaraja; Groenewold, Jan; Kegel, Willem K

    2011-05-18

    The equilibrium size of two largely different kinds of hollow objects behave qualitatively differently with respect to certain experimental conditions. Yet, we show that they can be described within the same theoretical framework. The objects we consider are 'minivesicles' of ionic and nonionic surfactant mixtures, and shells of Keplerate-type polyoxometalates. The finite-size of the objects in both systems is manifested by electrostatic interactions. We emphasize the importance of constant charge and constant potential boundary conditions. Taking these conditions into account, indeed, leads to the experimentally observed qualitatively different behavior of the equilibrium size of the objects.

  20. Modeling of equilibrium hollow objects stabilized by electrostatics

    Energy Technology Data Exchange (ETDEWEB)

    Mani, Ethayaraja; Groenewold, Jan; Kegel, Willem K, E-mail: w.k.kegel@uu.nl [Van 't Hoff Laboratory for Physical and Colloid Chemistry, Debye Institute, Utrecht University, Padualaan 8, 3584 CH Utrecht (Netherlands)

    2011-05-18

    The equilibrium size of two largely different kinds of hollow objects behave qualitatively differently with respect to certain experimental conditions. Yet, we show that they can be described within the same theoretical framework. The objects we consider are 'minivesicles' of ionic and nonionic surfactant mixtures, and shells of Keplerate-type polyoxometalates. The finite-size of the objects in both systems is manifested by electrostatic interactions. We emphasize the importance of constant charge and constant potential boundary conditions. Taking these conditions into account, indeed, leads to the experimentally observed qualitatively different behavior of the equilibrium size of the objects.

  1. HL7 template model and EN/ISO 13606 archetype object model - a comparison.

    Science.gov (United States)

    Bointner, Karl; Duftschmid, Georg

    2009-01-01

    HL7 Templates and EN/ISO 13606 Archetypes are essential components for a semantically interoperable exchange of electronic health record (EHR) data. In this article the underlying models from which Templates and Archetypes are instantiated, namely the HL7 Template Model and the EN/ISO 13606 Archetype Object Model will be compared to identify discrepancies and analogies.

  2. The CTQ flowdown as a conceptual model of project objectives

    NARCIS (Netherlands)

    H. de Koning; J. de Mast

    2007-01-01

    The purpose of this article is to describe and clarify a tool that is at the core of the definition phase of most quality improvement projects. This tool is called the critical to quality (CTQ) flowdown. It relates high-level strategic focal points to project objectives. In their turn project object

  3. Selecting personnel to work on the interactive graphics system

    Energy Technology Data Exchange (ETDEWEB)

    Norton, F.J.

    1979-11-30

    The paper establishes criteria for the selection of personnel to work on the interactive graphics system and mentions some of the human behavioral patterns that are created by the implementation of graphics systems. Some of the social and educational problems associated with the interactive graphics system are discussed. The project also provided for collecting objective data which would be useful in assessing the benefits of interactive graphics systems.

  4. Graphic Notation in Music Therapy: A Discussion of What to Notate in Graphic Notation and How

    DEFF Research Database (Denmark)

    Bergstrøm-Nielsen, Carl

    2009-01-01

    This article presents graphic notations of music and related forms of communication in music therapy contexts, created by different authors and practitioners. Their purposes, objects of description, and the elements of graphic language are reflected upon in a comparative discussion. From...... are also important concerns. Among the authors discussed, there is a large variety both in goals and methods. Keywords are proposed to circumscribe moments of possible interest connected to graphic notations. I suggest that the discipline of graphic notation can be useful for the grounding of music therapy...... presentation and research in empirical, clinical musical reality, and welcome further discussion and explorative work....

  5. Model Object Relational Database pada Aplikasi Notifikasi SMS

    Directory of Open Access Journals (Sweden)

    Indrajani

    2013-05-01

    The purpose of this research is to analyse the database using an object-oriented approach. In addition, this research develops and designs an object-relational database structure to simplify the process of searching for the required information. The benefit of this research is to provide a data structure that can be reused by the SMS notification application developer team, so that developers can reuse existing objects when the current business process changes. The methods used are data collection, analysis, and design. The result of this research is a database structure for an SMS notification application that can be used from various programming languages through the object-relational database approach. In conclusion, the object-relational database structure makes it easier for application developers to extend the current database structure: a programmer can add a new data type simply by creating a new object, and data is accessed and processed in an object-oriented manner.

  6. Defining Dynamic Graphics by a Graphical Language

    Institute of Scientific and Technical Information of China (English)

    毛其昌; 戴汝为

    1991-01-01

    A graphical language that can be used for defining dynamic pictures and applying control actions to them is defined with an expanded attributed grammar. Based on this, a system is built for developing the presentation of application data in user interfaces. This system provides user interface designers with a friendly and highly efficient programming environment.

  7. THE IDEA OF CREATING DOMESTIC SOFTWARE COMPLEX OF DIGITAL GRAPHIC MODELING IN THE CONTEXT OF IMPORT SUBSTITUTION STRATEGY

    Directory of Open Access Journals (Sweden)

    V. V. MIROSHNIKOV

    2015-01-01

    The article focuses on the problem of import substitution in the field of IT, in particular the practice of visual modeling in the process of designing objects of architecture and design. The author presents the idea of creating a national multifunctional software complex for visual modeling as a timely alternative to foreign analogues. The relevance of the idea is determined by several factors: the increased risks to the technological sovereignty of the country, the need to optimize the practice of designing objects of architecture and design, and the rapid development of the domestic IT industry. The article describes in detail the proposal for creating software based on the latest IT technologies, optimized for interoperability among its constituent units and taking into consideration the peculiarities of the mentality, language, culture, and traditions of local users as well as the specifics of design-process algorithms. The idea set forth in the article aims to familiarize practitioners of design and architecture with options for addressing what the author considers an urgent and complex problem in the field of digital modeling across various areas of design practice and information exchange. The author's suggestions are based on his own practical experience of project modeling and of teaching professional designers.

  8. Working memory contributes to the encoding of object location associations: Support for a 3-part model of object location memory.

    Science.gov (United States)

    Gillis, M Meredith; Garcia, Sarah; Hampstead, Benjamin M

    2016-09-15

    A recent model by Postma and colleagues posits that the encoding of object location associations (OLAs) requires the coordination of several cognitive processes mediated by ventral (object perception) and dorsal (spatial perception) visual pathways as well as the hippocampus (feature binding) [1]. Within this model, frontoparietal network recruitment is believed to contribute to both the spatial processing and working memory task demands. The current study used functional magnetic resonance imaging (fMRI) to test each step of this model in 15 participants who encoded OLAs and performed standard n-back tasks. As expected, object processing resulted in activation of the ventral visual stream. Object in location processing resulted in activation of both the ventral and dorsal visual streams as well as a lateral frontoparietal network. This condition was also the only one to result in medial temporal lobe activation, supporting its role in associative learning. A conjunction analysis revealed areas of shared activation between the working memory and object in location phase within the lateral frontoparietal network, anterior insula, and basal ganglia; consistent with prior working memory literature. Overall, findings support Postma and colleague's model and provide clear evidence for the role of working memory during OLA encoding.

  9. A Multi-objective Model for Transmission Planning Under Uncertainties

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Wang, Qi; Ding, Yi;

    2014-01-01

    The significant growth of distributed energy resources (DERs) associated with smart grid technologies has prompted excessive uncertainties in the transmission system. The most representative is the novel notion of the commercial aggregator, which has lighted a bright way for DERs to participate in power... trading and regulating at the transmission level. In this paper, the aggregator-caused uncertainty is analyzed first, considering DERs’ correlation. For the transmission planning, a scenario-based multi-objective transmission planning (MOTP) framework is proposed to simultaneously optimize two objectives, i.e. the cost of power purchase and network expansion, and the revenue of power delivery. A two-phase multi-objective PSO (MOPSO) algorithm is employed as the solver. The feasibility of the proposed multi-objective planning approach has been verified on the 77-bus system linked with 38-bus distribution...
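
    The record above mentions a two-phase multi-objective PSO that trades off purchase/expansion cost against delivery revenue. As a rough, generic sketch (not the paper's algorithm), the snippet below shows the Pareto-dominance bookkeeping that most multi-objective optimizers rely on; the objective vectors are invented values standing in for (total cost, negated revenue).

    # Minimal sketch of the non-dominated archive used by most multi-objective
    # PSO variants; objective values below are illustrative, not from the paper.

    def dominates(a, b):
        """True if a is at least as good as b in every objective (all minimised)
        and strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def update_archive(archive, candidate):
        """Keep only mutually non-dominated (cost, -revenue) vectors."""
        if any(dominates(old, candidate) for old in archive):
            return archive                      # candidate is dominated, discard it
        return [old for old in archive if not dominates(candidate, old)] + [candidate]

    # Example: (expansion + purchase cost, negated delivery revenue) per candidate plan
    archive = []
    for plan in [(120.0, -40.0), (100.0, -35.0), (90.0, -32.0), (150.0, -30.0)]:
        archive = update_archive(archive, plan)
    print(archive)   # the surviving, non-dominated planning alternatives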

  10. Texture Models Based on Probabilistic Graphical Models

    Institute of Scientific and Technical Information of China (English)

    杨关; 冯国灿; 陈伟福; 罗志宏

    2011-01-01

    Texture is one of the visual features playing an important role in image analysis, and many applications have been developed using texture models. Probabilistic graphical models are promising tools for constructing texture models. The problem of learning the structure of a Gaussian graphical model (GGM) for texture classification is addressed. A GGM is characterized by a neighborhood, a set of parameters, and a noise sequence, owing to the connection between the local Markov property and the conditional regression of a Gaussian random variable. By using model selection methods to choose an appropriate neighborhood and estimate the unknown parameters of the GGM, neighborhood selection and parameter estimation are conducted simultaneously. New texture features based on the GGM are then extracted for texture synthesis and texture classification. Experimental results show that adaptive Lasso estimators are more effective.
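
    A minimal sketch of the neighbourhood-selection idea summarized above: each Gaussian variable is regressed on all the others with an L1 penalty, and the non-zero coefficients define its neighbourhood. The synthetic data and the penalty value are assumptions, and plain Lasso from scikit-learn is used here, whereas the record describes an adaptive Lasso variant.

    # Neighbourhood selection for a Gaussian graphical model in the
    # Meinshausen-Buehlmann spirit: regress each variable on the others with
    # an L1 penalty and read the neighbourhood off the non-zero coefficients.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 6))          # 200 samples of 6 "texture" variables
    X[:, 1] += 0.8 * X[:, 0]                   # introduce some conditional dependence
    X[:, 2] += 0.6 * X[:, 1]

    def neighbourhood(X, j, alpha=0.1):
        """Indices of variables with a non-zero Lasso coefficient when
        variable j is regressed on all the others."""
        others = [k for k in range(X.shape[1]) if k != j]
        fit = Lasso(alpha=alpha).fit(X[:, others], X[:, j])
        return [k for k, c in zip(others, fit.coef_) if abs(c) > 1e-6]

    for j in range(X.shape[1]):
        print(f"variable {j}: neighbourhood {neighbourhood(X, j)}")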

  11. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    Science.gov (United States)

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used to calculate computer-generated holograms. This paper proposes a novel fast calculation method for a patch model, which uses the point-based method. The method provides a calculation time that is proportional to the number of patches but not to the number of point light sources. This means that the method is suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 or more times faster than the ordinary point-based method.
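
    For orientation, the following is a minimal sketch of the ordinary point-based calculation that the paper accelerates: every object point adds a spherical wave to every hologram pixel, so the cost grows with the number of point light sources. The wavelength, pixel pitch, and object points are illustrative assumptions.

    import numpy as np

    wavelength = 532e-9                    # illustrative wavelength [m]
    k = 2 * np.pi / wavelength
    pitch = 8e-6                           # hologram pixel pitch [m]
    nx = ny = 256

    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)

    # object points: (x, y, z, amplitude); z is the distance to the hologram plane
    points = [(0.0, 0.0, 0.05, 1.0),
              (3e-4, -2e-4, 0.06, 0.8)]

    field = np.zeros((ny, nx), dtype=complex)
    for px, py, pz, amp in points:              # cost grows with the number of points
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += amp / r * np.exp(1j * k * r)   # spherical wave from this point source

    hologram = np.real(field)                   # e.g. a simple amplitude fringe pattern
    print(hologram.shape)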

  12. Reducing the item number to obtain same-length self-assessment scales: a systematic approach using results of graphical loglinear Rasch models

    DEFF Research Database (Denmark)

    Nielsen, Tine; Kreiner, Svend

    2011-01-01

    The Revised Danish Learning Styles Inventory (R-D-LSI) (Nielsen 2005), which is an adaptation of the Sternberg-Wagner Thinking Styles Inventory (Sternberg, 1997), comprises 14 subscales, each measuring a separate learning style. Of these 14 subscales, 9 are eight items long and 5 are seven items long... A systematic approach to item reduction based on results of graphical loglinear Rasch modeling (GLLRM) was designed. This approach was then used to reduce the number of items in the subscales of the R-D-LSI which had an item length of more than seven items, thereby obtaining the Danish Self-Assessment Learning Styles...

  13. Improvements in recall and food choices using a graphical method to deliver information of select nutrients.

    Science.gov (United States)

    Pratt, Nathan S; Ellison, Brenna D; Benjamin, Aaron S; Nakamura, Manabu T

    2016-01-01

    Consumers have difficulty using nutrition information. We hypothesized that graphically delivering information of select nutrients relative to a target would allow individuals to process information in time-constrained settings more effectively than numerical information. Objectives of the study were to determine the efficacy of the graphical method in (1) improving memory of nutrient information and (2) improving consumer purchasing behavior in a restaurant. Values of fiber and protein per calorie were 2-dimensionally plotted alongside a target box. First, a randomized cued recall experiment was conducted (n=63). Recall accuracy of nutrition information improved by up to 43% when shown graphically instead of numerically. Second, the impact of graphical nutrition signposting on diner choices was tested in a cafeteria. Saturated fat and sodium information was also presented using color coding. Nutrient content of meals (n=362) was compared between 3 signposting phases: graphical, nutrition facts panels (NFP), or no nutrition label. Graphical signposting improved nutrient content of purchases in the intended direction, whereas NFP had no effect compared with the baseline. Calories ordered from total meals, entrées, and sides were significantly less during graphical signposting than no-label and NFP periods. For total meal and entrées, protein per calorie purchased was significantly higher and saturated fat significantly lower during graphical signposting than the other phases. Graphical signposting remained a predictor of calories and protein per calorie purchased in regression modeling. These findings demonstrate that graphically presenting nutrition information makes that information more available for decision making and influences behavior change in a realistic setting.
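
    A small sketch of the kind of two-dimensional signposting described above, plotting fibre and protein per calorie next to a dashed target box; the menu items, their values, and the target region are invented for illustration and are not the study's data.

    import matplotlib.pyplot as plt

    # hypothetical menu items: (fibre per kcal, protein per kcal)
    items = {"salad": (0.020, 0.045), "burger": (0.004, 0.030), "lentil soup": (0.018, 0.040)}
    # hypothetical target region: (fibre_min, fibre_max, protein_min, protein_max) per kcal
    target = (0.012, 0.030, 0.035, 0.060)

    fig, ax = plt.subplots()
    ax.add_patch(plt.Rectangle((target[0], target[2]), target[1] - target[0],
                               target[3] - target[2], fill=False, linestyle="--", label="target"))
    for name, (fibre, protein) in items.items():
        ax.scatter(fibre, protein)
        ax.annotate(name, (fibre, protein))
    ax.set_xlabel("fibre per calorie (g/kcal)")
    ax.set_ylabel("protein per calorie (g/kcal)")
    ax.legend()
    plt.show()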

  14. Modelling object typicality in description logics - [Workshop on Description Logics

    CSIR Research Space (South Africa)

    Britz, K

    2009-07-01

    Full Text Available The authors present a semantic model of typicality of concept members in description logics that accords well with a binary, globalist cognitive model of class membership and typicality. The authors define a general preferential semantic framework...

  15. MoldaNet: a network distributed molecular graphics and modelling program that integrates secure signed applet and Java 3D technologies.

    Science.gov (United States)

    Yoshida, H; Rzepa, H S; Tonge, A P

    1998-06-01

    MoldaNet is a molecular graphics and modelling program that integrates several new Java technologies, including authentication as a Secure Signed Applet, and implementation of Java 3D classes to enable access to hardware graphics acceleration. It is the first example of a novel class of Internet-based distributed computational chemistry tool designed to eliminate the need for user pre-installation of software on their client computer other than a standard Internet browser. The creation of a properly authenticated tool using a signed digital X.509 certificate permits the user to employ MoldaNet to read and write the files to a local file store; actions that are normally disallowed in Java applets. The modularity of the Java language also allows straightforward inclusion of Java3D and Chemical Markup Language classes in MoldaNet to permit the user to filter their model into 3D model descriptors such as VRML97 or CML for saving on local disk. The implications for both distance-based training environments and chemical commerce are noted.

  16. Chemists’ knowledge object. Formulation, modification and abandonment of iconic model

    Directory of Open Access Journals (Sweden)

    Rómulo Gallego Badillo

    2006-12-01

    Full Text Available This article presents an analysis of different perspectives on the scientific status of chemistry. The category of scientific model was used to characterize the proposal and development of the technological-iconic model. It was necessary to examine the period in which the introduction of analogical and symbolic models became indispensable to modify the initial model. The article also establishes how the technological-iconic model can serve as a didactic foundation for leading secondary students towards chemistry as one of the natural sciences.

  17. Sharable Courseware Object Reference Model (SCORM), Version 1.0

    Science.gov (United States)

    2000-07-01

    for developing such courseware objects are within the state of the art, but they must be articulated, accepted, and widely used as guidelines by...

  18. Multiple Shape Models for Simultaneous Object Classification and Segmentation

    Science.gov (United States)

    2009-02-01

  19. Communication and perception of uncertainty via graphics in disciplinary and interdisciplinary climate change research

    Science.gov (United States)

    Lackner, Bettina C.; Kirchengast, Gottfried

    2015-04-01

    Besides written and spoken language, graphical displays play an important role in communicating scientific findings or explaining scientific methods, both within one and between various disciplines. Uncertainties and probabilities are generally difficult to communicate, especially via graphics. Graphics including uncertainty sometimes need detailed written or oral descriptions to be understood. "Good" graphics should ease scientific communication, especially amongst different disciplines. One key objective of the Doctoral Programme "Climate Change: Uncertainties, Thresholds and Coping Strategies" (http://dk-climate-change.uni-graz.at/en/), located at the University of Graz, is to reach a better understanding of climate change uncertainties by bridging research in multiple disciplines, including physical climate sciences, geosciences, systems and sustainability sciences, environmental economics, and climate ethics. This asks for efforts into the formulation of a "common language", not only as to words, but also as to graphics. The focus of this work is on two topics: (1) What different kinds of uncertainties (e.g., data uncertainty, model uncertainty) are included in the graphics of the recent IPCC reports of all three working groups (WGs) and in what ways do uncertainties get illustrated? (2) How are these graphically displayed uncertainties perceived by researchers of a similar research discipline and from researchers of different disciplines than the authors of the graphics? To answer the first question, the IPCC graphics including uncertainties are grouped and analyzed with respect to different kinds of uncertainties to filter out most of the commonly used types of displays. The graphics will also be analyzed with respect to their WG origin, as we assume that graphics from researchers rooted in, e.g., physical climate sciences and geosciences (mainly IPCC WG 1) differ from those of researchers rooted in, e.g., economics or system sciences (mainly WG 3). In a

  20. Modeling 3D Objects for Navigation Purposes Using Laser Scanning

    Directory of Open Access Journals (Sweden)

    Cezary Specht

    2016-07-01

    Full Text Available The paper discusses the creation of 3D models and their applications in navigation. It contains a review of available methods and geometric data sources, focusing mostly on terrestrial laser scanning. It presents a detailed description, from field survey to numerical elaboration, of how to construct an accurate model of a typical few-storey building as a hypothetical reference for complex-building navigation. The paper also presents fields where 3D models are being used and their potential new applications.

  1. Practical Implementation of a Graphics Turing Test

    OpenAIRE

    Borg, Mathias; Johansen, Stine Schmieg; Thomsen, Dennis Lundgaard; Kraus, Martin

    2012-01-01

    We present a practical implementation of a variation of the Turing Test for realistic computer graphics. The test determines whether virtual representations of objects appear as real as genuine objects. Two experiments were conducted wherein a real object and a similar virtual object are presented to test subjects under specific restrictions. A criterion for passing the test is presented based on the probability that subjects are unable to recognise a computer-generated object as virtual...
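
    One plausible way to formalize such a pass criterion (not necessarily the paper's exact formulation) is a one-sided binomial test of whether subjects identify the virtual object above chance; the trial counts below are hypothetical.

    # Generic sketch: the rendering "passes" if we cannot reject the hypothesis
    # that subjects are merely guessing (50 % chance level). Requires scipy >= 1.7.
    from scipy.stats import binomtest

    correct_identifications = 23      # hypothetical number of correct "that one is virtual" calls
    trials = 40                       # hypothetical number of trials

    result = binomtest(correct_identifications, trials, p=0.5, alternative="greater")
    passes = result.pvalue > 0.05     # cannot reject "subjects are guessing"
    print(f"p-value = {result.pvalue:.3f}, graphics Turing test passed: {passes}")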

  2. A Model of Object-Identities and Values

    Science.gov (United States)

    1990-02-23

    formulas. The C-class is similar to a class in the usual object-oriented languages, such as Smalltalk and CLOS [WT 89]. A C-class is a combination of...existing one. Recursive aggregation: the induced operator for a recursive aggregation is obtained by the inductive limit of generated instances. More...precisely, we first define an inflationary operator to produce new instances. Then we take the limit of successive applications of the operator. Let G...

  3. A Conceptual Approach to Object-Oriented Data Modeling

    Science.gov (United States)

    1994-09-01

    staffing, and number of cesarean sections performed per day. A pediatric ward could keep a record of average patient age and total incidence of bone...Finally, two private methods are used by the class object to update the minimum and maximum salary each time a new employee is added to the department...
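
    The surviving fragment above describes a class whose private methods refresh the minimum and maximum salary whenever an employee is added. A minimal reconstruction of that design is sketched below; the class and attribute names are assumptions, not the report's own code.

    class Department:
        """Toy object-oriented data model: summary attributes are maintained
        by "private" update methods each time an employee is added."""

        def __init__(self, name):
            self.name = name
            self.employees = []
            self.min_salary = None
            self.max_salary = None

        def add_employee(self, employee_name, salary):
            self.employees.append((employee_name, salary))
            self.__update_min_salary(salary)
            self.__update_max_salary(salary)

        def __update_min_salary(self, salary):       # private helper
            if self.min_salary is None or salary < self.min_salary:
                self.min_salary = salary

        def __update_max_salary(self, salary):       # private helper
            if self.max_salary is None or salary > self.max_salary:
                self.max_salary = salary

    ward = Department("pediatrics")
    ward.add_employee("A. Smith", 52000)
    ward.add_employee("B. Jones", 61000)
    print(ward.min_salary, ward.max_salary)          # 52000 61000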

  4. Multi-Objective Calibration of Hydrologic Model Using Satellite Data

    Science.gov (United States)

    Hydrologic modeling often involves a large number of parameters, some of which cannot be measured directly and may vary with land cover, soil or even seasons. Therefore parameter estimation is a critical step in applying a hydrologic model to any study area. Parameter estimation is typically done by...

  5. Validation of a multi-objective, predictive urban traffic model

    NARCIS (Netherlands)

    Wilmink, I.R.; Haak, P. van den; Woldeab, Z.; Vreeswijk, J.

    2013-01-01

    This paper describes the results of the verification and validation of the ecoStrategic Model, which was developed, implemented and tested in the eCoMove project. The model uses real-time and historical traffic information to determine the current, predicted and desired state of traffic in a network

  6. Indian Graphic Symbols.

    Science.gov (United States)

    Stump, Sarain

    1979-01-01

    Noting Indian tribes had invented ways to record facts and ideas, with graphic symbols that sometimes reached the complexity of hieroglyphs, this article illustrates and describes Indian symbols. (Author/RTS)

  7. Digital Raster Graphics

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — A digital raster graphic (DRG) is a scanned image of a U.S. Geological Survey (USGS) topographic map. The scanned image includes all map collar information. The...

  8. Modeling Real Objects for Kansei-based Shape Retrieval

    Institute of Scientific and Technical Information of China (English)

    Yukihiro Koda; Ichi Kanaya; Kosuke Sato

    2007-01-01

    A large number of 3D models are created on computers and made available over networks. Content-based retrieval technologies are indispensable for finding particular data in such anonymous datasets. Though several shape retrieval technologies have been developed, conventional techniques have paid little attention to human sense and impression (known as Kansei). In this paper, the authors propose a novel method of shape retrieval based on the shape impression of human Kansei. The key to the method is using the Gaussian curvature distribution of 3D models as the feature for shape retrieval. The method then classifies 3D models by the extracted feature and measures similarity among the models in storage.
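
    A minimal sketch of the feature the method is built on: per-vertex Gaussian curvature estimated by the angle-deficit rule and collected into a histogram that serves as the retrieval feature. The toy tetrahedron, the bin settings, and the suggested L1 comparison are illustrative assumptions.

    import numpy as np

    def gaussian_curvature_histogram(vertices, faces, bins=16, value_range=(-2.0, 7.0)):
        """Histogram of per-vertex angle deficits (discrete Gaussian curvature)."""
        angle_sum = np.zeros(len(vertices))
        for tri in faces:
            pts = vertices[tri]
            for i in range(3):
                a, b, c = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
                u, v = b - a, c - a
                cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
                angle_sum[tri[i]] += np.arccos(np.clip(cos_angle, -1.0, 1.0))
        curvature = 2.0 * np.pi - angle_sum          # angle deficit at each vertex
        hist, _ = np.histogram(curvature, bins=bins, range=value_range, density=True)
        return hist

    # regular tetrahedron as a toy "3D model"
    verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
    faces = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])

    feature = gaussian_curvature_histogram(verts, faces)
    print(feature)    # this vector is the retrieval feature; compare models with e.g. L1 distance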

  9. Proposal for a new CAPE-OPEN Object Model

    Science.gov (United States)

    Process simulation applications require the exchange of significant amounts of data between the flowsheet environment, unit operation model, and thermodynamic server. Packing and unpacking various data types and exchanging data using structured text-based architectures, including...

  10. A tri-objective, dynamic weapon assignment model for surface ...

    African Journals Online (AJOL)

    2015-05-11

    ...metaheuristic for solving the vehicle routing problem with time .... by Flood [10] and formulated as a mathematical model by Manne [21] later that year. ...... another in the indifference grid method, and hence, both solutions will ...

  11. Objective models for steroid binding sites of human globulins

    Science.gov (United States)

    Schnitker, Jurgen; Gopalaswamy, Ramesh; Crippen, Gordon M.

    1997-01-01

    We report the application of a recently developed alignment-free 3D QSAR method [Crippen, G.M., J. Comput. Chem., 16 (1995) 486] to a benchmark-type problem. The test system involves the binding of 31 steroid compounds to two kinds of human carrier protein. The method used not only allows for arbitrary binding modes, but also avoids the problems of traditional least-squares techniques with regard to the implicit neglect of informative outlying data points. It is seen that models of considerable predictive power can be obtained even with a very vague binding site description. Underlining a systematic, but usually ignored, problem of the QSAR approach, there is not one unique type of model but, rather, an entire manifold of distinctly different models that are all compatible with the experimental information. For a given model, there is also a considerable variation in the found binding modes, illustrating the problems that are inherent in the need for 'correct' molecular alignment in conventional 3D QSAR methods.

  12. Comparison of Three Approximate Kinematic Models for Space Object Tracking

    Science.gov (United States)

    2013-07-01

    surveillance, and the SOs are only observed during a very small fraction of their orbiting period (with short-arc observations) [10,20]. State...initialization, the 2-point differencing method in [1] is used for tracking filters with the WNA and KSP models, since their state vectors contain only positions...and velocities. For the WPA model, a modified 2-point differencing method was used, where the position and velocity states and their covariance are
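
    A minimal sketch of the 2-point differencing initialisation referred to above: two position-only observations give an initial position/velocity state and a matching covariance for the tracking filter. The orbit-like numbers and the measurement noise value are assumptions.

    import numpy as np

    def two_point_init(z1, z2, dt, sigma_z):
        """Initial state [position, velocity] and covariance from two position
        measurements taken dt seconds apart (standard 2-point differencing)."""
        z1, z2 = np.asarray(z1, float), np.asarray(z2, float)
        x0 = np.concatenate([z2, (z2 - z1) / dt])        # position from last fix, velocity by differencing
        r = sigma_z ** 2
        dim = len(z2)
        P0 = np.zeros((2 * dim, 2 * dim))
        P0[:dim, :dim] = r * np.eye(dim)                         # position block
        P0[:dim, dim:] = P0[dim:, :dim] = (r / dt) * np.eye(dim) # cross terms
        P0[dim:, dim:] = (2 * r / dt ** 2) * np.eye(dim)         # velocity block
        return x0, P0

    x0, P0 = two_point_init([7000.0, 0.0, 0.0], [6999.0, 7.5, 0.0], dt=1.0, sigma_z=0.05)
    print(x0)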

  13. Costs of predator-induced phenotypic plasticity: a graphical model for predicting the contribution of nonconsumptive and consumptive effects of predators on prey.

    Science.gov (United States)

    Peacor, Scott D; Peckarsky, Barbara L; Trussell, Geoffrey C; Vonesh, James R

    2013-01-01

    Defensive modifications in prey traits that reduce predation risk can also have negative effects on prey fitness. Such nonconsumptive effects (NCEs) of predators are common, often quite strong, and can even dominate the net effect of predators. We develop an intuitive graphical model to identify and explore the conditions promoting strong NCEs. The model illustrates two conditions necessary and sufficient for large NCEs: (1) trait change has a large cost, and (2) the benefit of reduced predation outweighs the costs, such as reduced growth rate. A corollary condition is that potential predation in the absence of trait change must be large. In fact, the sum total of the consumptive effects (CEs) and NCEs may be any value bounded by the magnitude of the predation rate in the absence of the trait change. The model further illustrates how, depending on the effect of increased trait change on resulting costs and benefits, any combination of strong and weak NCEs and CEs is possible. The model can also be used to examine how changes in environmental factors (e.g., refuge safety) or variation among predator-prey systems (e.g., different benefits of a prey trait change) affect NCEs. Results indicate that simple rules of thumb may not apply; factors that increase the cost of trait change or that increase the degree to which an animal changes a trait, can actually cause smaller (rather than larger) NCEs. We provide examples of how this graphical model can provide important insights for empirical studies from two natural systems. Implementation of this approach will improve our understanding of how and when NCEs are expected to dominate the total effect of predators. Further, application of the models will likely promote a better linkage between experimental and theoretical studies of NCEs, and foster synthesis across systems.
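
    A small numeric illustration of the partitioning that the graphical model formalises: the predator's total effect on prey performance splits into a nonconsumptive part (the cost of the defensive trait change) and a consumptive part (the predation that remains). All rates below are invented for illustration.

    # Hypothetical prey growth rates under three conditions
    growth_no_predator       = 1.00   # no predator present
    growth_trait_change_only = 0.70   # predator cues present, no consumption (NCE-only treatment)
    growth_with_predation    = 0.55   # predator present and feeding, prey responds

    nce   = growth_no_predator - growth_trait_change_only        # cost of the trait change
    ce    = growth_trait_change_only - growth_with_predation     # consumption after the response
    total = growth_no_predator - growth_with_predation           # net predator effect = NCE + CE

    print(f"NCE = {nce:.2f}, CE = {ce:.2f}, total effect = {total:.2f}")
    # Strong NCEs arise when the trait change is costly (large NCE term) yet still
    # pays off by keeping the realised predation (CE term) small.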

  14. Graphical symbol recognition

    OpenAIRE

    K.C., Santosh; Wendling, Laurent

    2015-01-01

    International audience; The chapter focuses on one of the key issues in document image processing, i.e., graphical symbol recognition. Graphical symbol recognition is a sub-field of a larger research domain: pattern recognition. The chapter covers several approaches (i.e., statistical, structural and syntactic) and specially designed symbol recognition techniques inspired by real-world industrial problems. In general, it contains research problems and state-of-the-art methods that convey basic s...

  15. Model-based beam control for illumination of remote objects

    Science.gov (United States)

    Chandler, Susan M.; Lukesh, Gordon W.; Voelz, David; Basu, Santasri; Sjogren, Jon A.

    2004-11-01

    On September 1, 2003, Nukove Scientific Consulting, together with partner New Mexico State University, began work on a Phase 1 Small Business Technology TRansfer (STTR) grant from the United States Air Force Office of Scientific Research (AFOSR). The purpose of the grant was to show the feasibility of taking Nukove's pointing estimation technique from a post-processing tool for estimation of laser system characteristics to a real-time tool usable in the field. Nukove's techniques for pointing, shape, and OCS estimation require neither an imaging sensor nor a target board, so estimates may be made very quickly. To prove feasibility, Nukove developed an analysis tool, RHINO (Real-time Histogram Interpretation of Numerical Observations), and successfully demonstrated the emulation of real-time, frame-by-frame estimation of laser system characteristics, with data streamed into the tool and the estimates displayed as they are made. The eventual objective will be to use the frame-by-frame estimates to allow for feedback to a fielded system. Closely associated with this, NMSU developed a laboratory testbed to illuminate test objects, collect the received photons, and stream the data into RHINO. The two coupled efforts clearly demonstrate the feasibility of real-time pointing control of a laser system.

  16. OBJECT ORIENTED MODELLING, A MODELLING METHOD OF AN ECONOMIC ORGANIZATION ACTIVITY

    Directory of Open Access Journals (Sweden)

    TĂNĂSESCU ANA

    2014-05-01

    Full Text Available Today, most economic organizations use different types of information systems in order to facilitate their activity. There are different methodologies, methods and techniques that can be used to design information systems. In this paper, I present the advantages of using object-oriented modelling in the information system design of an economic organization. Thus, I have modelled the activity of a photo studio, using Visual Paradigm for UML as a modelling tool. For this purpose, I have identified the use cases for the analyzed system and presented the use case diagram. I have also carried out static and dynamic modelling of the system, using the best-known UML diagrams.

  17. Examining of the Collision Breakup Model between Geostationary Orbit Objects

    Science.gov (United States)

    Hata, Hidehiro; Hanada, Toshiya; Akahoshi, Yasuhiro; Yasaka, Tetsuo; Harada, Shoji

    This paper will examine the applicability of the hypervelocity collision model included in the 2000 revision of the NASA standard breakup model to low-velocity collisions possible in space, especially in the geosynchronous regime. The analytic method used in the standard breakup model will be applied to experimental data accumulated through low-velocity impact experiments performed at Kyushu Institute of Technology at velocities of about 300 m/s and 800 m/s. The projectiles and target specimens used were solid aluminum balls and aluminum honeycomb sandwich panels with face sheets of carbon fiber reinforced plastic, respectively. We found that a kind of lower boundary exists on the fragment area-to-mass distribution in the smaller characteristic-length range. The paper will describe the theoretical derivation of this lower boundary, propose a further modification of the fragment area-to-mass distribution, and conclude that the hypervelocity collision model in the standard breakup model can be applied to low-velocity collisions with some modifications.

  18. Jeddah Historical Building Information Modelling "JHBIM" - Object Library

    Science.gov (United States)

    Baik, A.; Alitany, A.; Boehm, J.; Robson, S.

    2014-05-01

    Building Information Modelling (BIM) has been used at several heritage sites worldwide for conserving, documenting, managing, and creating full engineering drawings and information. However, one of the most serious issues facing experts who want to use Historical Building Information Modelling (HBIM) is modelling the complicated architectural elements of these historical buildings. Many of these outstanding architectural elements were designed and created on site to fit their exact location. Experts in Old Jeddah face the same issue in applying the BIM method to the city's historical buildings. The Saudi Arabian city has a long history and contains a large number of historic houses and buildings built since the 16th century. Building a BIM model of a historical building in Old Jeddah always takes a long time, because the Hijazi architectural elements are unique and no library of such elements exists, so each element has to be modelled individually. This paper focuses on building a Hijazi architectural elements library based on laser scanner and image survey data. This solution will reduce the time needed to complete an HBIM model and offer an in-depth and rich digital library of architectural elements for use in any heritage project in the Al-Balad district, Jeddah City.

  19. Principles of crop modelling and simulation: II. the implications of the objective in model development

    Directory of Open Access Journals (Sweden)

    Dourado-Neto D.

    1998-01-01

    Full Text Available With the purpose of presenting to scientists the implications of the objective in model development, along with a basic vision of modeling and its potential applications and limitations in agriculture, an integration of crop modeling professionals with agricultural professionals is suggested. Models represent a modernization of information and of the measurement process, and an efficient way to learn more about complex systems. They are one of the best mechanisms for transforming information into useful knowledge and transferring this knowledge to others. Among the problems that impede greater progress in modeling are the lack of communication between modelers and the frequent appearance of modelers without a global vision of reality.

  20. Content-Based Search on a Database of Geometric Models: Identifying Objects of Similar Shape

    Energy Technology Data Exchange (ETDEWEB)

    XAVIER, PATRICK G.; HENRY, TYSON R.; LAFARGE, ROBERT A.; MEIRANS, LILITA; RAY, LAWRENCE P.

    2001-11-01

    The Geometric Search Engine is a software system for storing and searching a database of geometric models. The database may be searched for modeled objects similar in shape to a target model supplied by the user. The database models are generally derived from CAD models, while the target model may be either a CAD model or a model generated from range data collected from a physical object. This document describes key generation, database layout, and search of the database.
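
    As a rough illustration of how such keys can work (this is a generic stand-in, not the Geometric Search Engine's actual key generation), the sketch below builds a D2 shape-distribution descriptor, a histogram of distances between random surface points, for each stored model and ranks models by L1 distance to the target's descriptor.

    import numpy as np

    def d2_descriptor(points, n_pairs=20000, bins=32, max_dist=3.5, rng=None):
        """Histogram of distances between randomly sampled point pairs (a D2-style key)."""
        if rng is None:
            rng = np.random.default_rng(0)
        i = rng.integers(0, len(points), n_pairs)
        j = rng.integers(0, len(points), n_pairs)
        d = np.linalg.norm(points[i] - points[j], axis=1)
        hist, _ = np.histogram(d, bins=bins, range=(0.0, max_dist))
        return hist / hist.sum()

    rng = np.random.default_rng(1)
    pts = rng.standard_normal((2000, 3))
    database = {
        "cube":   rng.uniform(-1, 1, (2000, 3)),                      # stand-in for one stored model
        "sphere": pts / np.linalg.norm(pts, axis=1, keepdims=True),   # stand-in for another
    }
    keys = {name: d2_descriptor(p) for name, p in database.items()}   # precomputed database keys

    target = rng.uniform(-1, 1, (2000, 3))            # e.g. points sampled from range data
    target_key = d2_descriptor(target)
    best = min(keys, key=lambda name: np.abs(keys[name] - target_key).sum())
    print("most similar stored model:", best)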