Goel, Narendra S.; Rozehnal, Ivan; Thompson, Richard L.
1991-01-01
A computer-graphics-based model, named DIANA, is presented for generation of objects of arbitrary shape and for calculating bidirectional reflectances and scattering from them, in the visible and infrared region. The computer generation is based on a modified Lindenmayer system approach which makes it possible to generate objects of arbitrary shapes and to simulate their growth, dynamics, and movement. Rendering techniques are used to display an object on a computer screen with appropriate shading and shadowing and to calculate the scattering and reflectance from the object. The technique is illustrated with scattering from canopies of simulated corn plants.
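The modified Lindenmayer-system approach mentioned above is, at its core, parallel string rewriting. A minimal sketch, using Lindenmayer's classic algae rules rather than DIANA's actual plant productions (which the abstract does not give):

```python
def lsystem(axiom, rules, steps):
    """Apply parallel rewriting rules to an axiom string `steps` times.
    Symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A
print(lsystem("A", {"A": "AB", "B": "A"}, 4))  # -> ABAABABA
```

Real plant models attach geometric interpretations (turtle graphics) to the symbols; the string lengths here grow as Fibonacci numbers, illustrating how quickly such systems generate structure.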
Identification of computer graphics objects
Directory of Open Access Journals (Sweden)
Rossinskyi Yu.M.
2016-04-01
Full Text Available The article is devoted to the use of computer graphics methods in problems of creating drawings, charts, drafts, etc. The widespread use of these methods requires efficient algorithms for identifying the objects of a drawing. The article analyzes existing algorithms for this problem and considers how identification time can be reduced by using graphics editing operations, such as copying, moving and deleting the objects of an image; these operations permit the use of reliable methods for identifying the images of objects. Information on the composition of an object's image should include, along with its identity and color, its spatial location and other characteristics (the thickness and style of contour lines, the fill style, and so on). To enable pixel-level analysis of the image to structure this information, the object's identifier must be encoded in the initial image color. The article presents the results of implementing this identifier-encoding algorithm. To simplify the construction of drawings of any kind and to reduce the labour involved, a method of identifying drawing objects is proposed that uses color as the carrier of the object's identifying information.
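The proposed use of color as the carrier of identifying information can be sketched as a 24-bit packing of an object ID into an RGB triple. This particular scheme is hypothetical, as the article's actual encoding is not specified here:

```python
def id_to_rgb(obj_id):
    """Pack a 24-bit object identifier into an (r, g, b) triple."""
    if not 0 <= obj_id < 2 ** 24:
        raise ValueError("identifier must fit in 24 bits")
    return (obj_id >> 16) & 0xFF, (obj_id >> 8) & 0xFF, obj_id & 0xFF

def rgb_to_id(rgb):
    """Recover the identifier from a pixel color."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

print(id_to_rgb(123456))               # -> (1, 226, 64)
print(rgb_to_id(id_to_rgb(123456)))    # -> 123456
```

Rendering each object in its ID color into an off-screen buffer then makes identification a single pixel read, which is the efficiency gain the abstract points to.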
Positioning graphical objects on computer screens: a three-phase model.
Pastel, Robert
2011-02-01
This experiment identifies and models phases during the positioning of graphical objects (called cursors in this article) on computer displays. The human computer-interaction community has traditionally used Fitts' law to model selection in graphical user interfaces, whereas human factors experiments have found the single-component Fitts' law inadequate to model positioning of real objects. Participants (N=145) repeatedly positioned variably sized square cursors within variably sized rectangular targets using computer mice. The times for the cursor to just touch the target, for the cursor to enter the target, and for participants to indicate positioning completion were observed. The positioning tolerances were varied from very precise and difficult to imprecise and easy. The time for the cursor to touch the target was proportional to the initial cursor-target distance. The time for the cursor to completely enter the target after touching was proportional to the logarithms of cursor size divided by target tolerances. The time for participants to indicate positioning after entering was inversely proportional to the tolerance. A three-phase model defined by regions--distant, proximate, and inside the target--was proposed and could model the positioning tasks. The three-phase model provides a framework for ergonomists to evaluate new positioning techniques and can explain their deficiencies. The model provides a means to analyze tasks and enhance interaction during positioning.
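The three phases described above can be turned into a toy timing model. The coefficients a, b and c below are hypothetical fitting constants, not values estimated in the study; only the functional forms (linear in distance, logarithmic in cursor size over tolerance, inverse in tolerance) follow the abstract, and the +1 inside the logarithm is our own addition to keep the term positive:

```python
import math

def three_phase_time(distance, cursor_size, tolerance, a=0.2, b=0.15, c=0.05):
    """Toy version of the three-phase positioning model.
    travel: distant phase, proportional to initial cursor-target distance
    entry:  proximate phase, ~ log of cursor size over tolerance
    verify: inside phase, inversely proportional to tolerance"""
    travel = a * distance
    entry = b * math.log(1 + cursor_size / tolerance)
    verify = c / tolerance
    return travel + entry + verify
```

Such a decomposition lets an evaluator see which phase a new positioning technique actually speeds up, which is the diagnostic use the authors propose.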
An equalised global graphical model-based approach for multi-camera object tracking
Chen, Weihua; Cao, Lijun; Chen, Xiaotang; Huang, Kaiqi
2015-01-01
Non-overlapping multi-camera visual object tracking typically consists of two steps: single-camera object tracking and inter-camera object tracking. Most tracking methods focus on single-camera object tracking, which happens within the same scene, whereas real surveillance scenes require inter-camera object tracking, for which single-camera tracking methods cannot work effectively. In this paper, we try to improve the overall multi-camera object tracking performance by a global graph model with...
DEFF Research Database (Denmark)
Højsgaard, Søren; Edwards, David; Lauritzen, Steffen L.
of these software developments have taken place within the R community, either in the form of new packages or by providing an R interface to existing software. This book attempts to give the reader a gentle introduction to graphical modeling using R and the main features of some of these packages. In addition......, the book provides examples of how more advanced aspects of graphical modeling can be represented and handled within R. Topics covered in the seven chapters include graphical models for contingency tables, Gaussian and mixed graphical models, Bayesian networks and modeling high dimensional data...
Højsgaard, Søren; Lauritzen, Steffen
2012-01-01
Graphical models in their modern form have been around since the late 1970s and appear today in many areas of the sciences. Along with the ongoing developments of graphical models, a number of different graphical modeling software programs have been written over the years. In recent years many of these software developments have taken place within the R community, either in the form of new packages or by providing an R interface to existing software. This book attempts to give the reader a gentle introduction to graphical modeling using R and the main features of some of these packages. In add
Object-oriented graphics programming in C++
Stevens, Roger T
2014-01-01
Object-Oriented Graphics Programming in C++ provides programmers with the information needed to produce realistic pictures on a PC monitor screen. The book is comprised of 20 chapters that discuss aspects of graphics programming in C++. The book starts with a short introduction discussing the purpose of the book; it also includes the basic concepts of programming in C++ and the basic hardware requirements. Subsequent chapters cover related topics in C++ programming such as the various display modes, displaying TGA files, and the vector class. The text also tackles subjects on the processing
Graphical models for genetic analyses
DEFF Research Database (Denmark)
Lauritzen, Steffen Lilholt; Sheehan, Nuala A.
2003-01-01
This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas...... of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating....
Object oriented programming for computer graphics and flow visualization
Vucinic, Dean
If OOP (Object Oriented Programming) is to be effective, a language and a library of software components (class library) have to be available. A language which is progressively and consistently gaining approval is C++, because of its efficiency and support for OOP. A survey of the main features of C++ is presented, along with some short examples showing how to use these features effectively. OOP concepts implemented through C++ simplify the code structure and make it easier to debug and understand. More detailed examples related to computer graphics and flow visualization class implementations are given to explain the fundamentals of OOP and its advantages, based on the development of an object-oriented model of the PHIGS (Programmer's Hierarchical Interactive Graphics Standard) graphics library and the application of InterViews (an object-oriented toolkit running on top of the X Window System) for the implementation of Graphical User Interfaces (GUIs). The productivity gain obtained by using OOP in the software development process is starting to be recognized, and its economic impact is becoming a major factor in software engineering.
Transforming Graphical System Models to Graphical Attack Models
DEFF Research Database (Denmark)
Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, Rene Rydhof
2016-01-01
Manually identifying possible attacks on an organisation is a complex undertaking; many different factors must be considered, and the resulting attack scenarios can be complex and hard to maintain as the organisation changes. System models provide a systematic representation of organisations...... that helps in structuring attack identification and can integrate physical, virtual, and social components. These models form a solid basis for guiding the manual identification of attack scenarios. Their main benefit, however, is in the analytic generation of attacks. In this work we present a systematic...... approach to transforming graphical system models to graphical attack models in the form of attack trees. Based on an asset in the model, our transformations result in an attack tree that represents attacks by all possible actors in the model, after which the actor in question has obtained the asset....
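Attack trees of the kind produced by such transformations can be evaluated mechanically, for instance to find the cheapest attack on an asset. A minimal sketch with hypothetical node names and costs, not taken from the paper:

```python
# Minimal attack-tree evaluator: leaves carry a cost, OR nodes take the
# cheapest child, AND nodes require all children to be carried out.
def min_cost(node):
    kind = node[0]
    if kind == "leaf":
        return node[2]                      # ("leaf", name, cost)
    costs = [min_cost(c) for c in node[2]]  # ("or"/"and", name, children)
    return min(costs) if kind == "or" else sum(costs)

tree = ("or", "obtain asset",
        [("leaf", "steal key", 10),
         ("and", "social engineering",
          [("leaf", "phish employee", 3),
           ("leaf", "tailgate", 4)])])
print(min_cost(tree))  # -> 7
```

The same recursion scheme supports other attributes (probability, required skill), which is what makes the generated trees useful for analysis beyond mere enumeration.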
DEFF Research Database (Denmark)
Jensen, Finn Verner; Nielsen, Thomas Dyhre
2016-01-01
is largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...
Modeling chemical kinetics graphically
Heck, A.
2012-01-01
In literature on chemistry education it has often been suggested that students, at high school level and beyond, can benefit in their studies of chemical kinetics from computer supported activities. Use of system dynamics modeling software is one of the suggested quantitative approaches that could
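A typical system-dynamics exercise of the kind alluded to is integrating a first-order reaction A → B numerically. A minimal forward-Euler sketch; the rate constant and step size are illustrative, not drawn from the article:

```python
import math

def simulate_first_order(k=0.5, a0=1.0, dt=0.01, steps=1000):
    """Forward-Euler integration of A -> B with rate constant k.
    Returns concentrations (a, b) after steps * dt time units."""
    a, b = a0, 0.0
    for _ in range(steps):
        converted = k * a * dt   # amount of A converted this step
        a -= converted
        b += converted
    return a, b

a, b = simulate_first_order()
print(round(a, 4), round(b, 4))
```

For this linear case the exact answer is a(t) = a0 * exp(-k t), so students can compare the stepwise model against the closed form, which is precisely the pedagogical point of such software.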
Learning Graphical Models With Hubs
Tan, Kean Ming; London, Palma; Mohan, Karthik; Lee, Su-In; Fazel, Maryam; Witten, Daniela
2014-01-01
We consider the problem of learning a high-dimensional graphical model in which there are a few hub nodes that are densely-connected to many other nodes. Many authors have studied the use of an ℓ1 penalty in order to learn a sparse graph in the high-dimensional setting. However, the ℓ1 penalty implicitly assumes that each edge is equally likely and independent of all other edges. We propose a general framework to accommodate more realistic networks with hub nodes, using a convex formulation that involves a row-column overlap norm penalty. We apply this general framework to three widely-used probabilistic graphical models: the Gaussian graphical model, the covariance graph model, and the binary Ising model. An alternating direction method of multipliers algorithm is used to solve the corresponding convex optimization problems. On synthetic data, we demonstrate that our proposed framework outperforms competitors that do not explicitly model hub nodes. We illustrate our proposal on a webpage data set and a gene expression data set. PMID:25620891
Learning Graphical Models With Hubs.
Tan, Kean Ming; London, Palma; Mohan, Karthik; Lee, Su-In; Fazel, Maryam; Witten, Daniela
2014-10-01
We consider the problem of learning a high-dimensional graphical model in which there are a few hub nodes that are densely-connected to many other nodes. Many authors have studied the use of an ℓ1 penalty in order to learn a sparse graph in the high-dimensional setting. However, the ℓ1 penalty implicitly assumes that each edge is equally likely and independent of all other edges. We propose a general framework to accommodate more realistic networks with hub nodes, using a convex formulation that involves a row-column overlap norm penalty. We apply this general framework to three widely-used probabilistic graphical models: the Gaussian graphical model, the covariance graph model, and the binary Ising model. An alternating direction method of multipliers algorithm is used to solve the corresponding convex optimization problems. On synthetic data, we demonstrate that our proposed framework outperforms competitors that do not explicitly model hub nodes. We illustrate our proposal on a webpage data set and a gene expression data set.
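The row-column overlap norm idea can be sketched for a candidate decomposition Θ = Z + V + Vᵀ, where Z carries the sparse non-hub edges and dense columns of V correspond to hubs. The weighting below follows the abstract's description (an ℓ1 term on the sparse part, plus ℓ1 and column-wise group terms on the hub part); the paper's exact formulation may differ:

```python
import math

def hub_penalty(Z, V, lam1=1.0, lam2=1.0, lam3=1.0):
    """Sketch of a row-column overlap norm style penalty for
    Theta = Z + V + V^T. Off-diagonal entries of Z are penalised with l1;
    off-diagonal entries of V get an l1 term plus a group-lasso term per
    column, encouraging a few dense (hub) columns."""
    p = len(Z)
    off = lambda M, i, j: M[i][j] if i != j else 0.0
    sparse_term = lam1 * sum(abs(off(Z, i, j)) for i in range(p) for j in range(p))
    hub_l1 = lam2 * sum(abs(off(V, i, j)) for i in range(p) for j in range(p))
    hub_group = lam3 * sum(
        math.sqrt(sum(off(V, i, j) ** 2 for i in range(p))) for j in range(p))
    return sparse_term + hub_l1 + hub_group

Z = [[0.0, 0.0], [0.0, 0.0]]
V = [[0.0, 1.0], [0.0, 0.0]]
print(hub_penalty(Z, V))  # -> 2.0
```

In the actual method this penalty is minimised jointly over Z and V inside a convex program solved by ADMM; the sketch only evaluates it for a fixed decomposition.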
Graphical interpretation of numerical model results
International Nuclear Information System (INIS)
Drewes, D.R.
1979-01-01
Computer software has been developed to produce high quality graphical displays of data from a numerical grid model. The code uses an existing graphical display package (DISSPLA) and overcomes some of the problems of both line-printer output and traditional graphics. The software has been designed to be flexible enough to handle arbitrarily placed computation grids and a variety of display requirements
Graphical models and their (un)certainties
Leisink, M.A.R.
2004-01-01
A graphical model is a powerful tool for dealing with complex probability models. Although in principle any set of probabilistic relationships can be modelled, the calculation of the actual numbers can be very hard. Every graphical model suffers from a phenomenon known as exponential scaling. To
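The exponential scaling referred to above is easy to quantify: a full joint table over n binary variables needs 2^n entries, whereas a factorised model needs far fewer. A minimal illustration (the chain factorisation is just one example of such a saving):

```python
def joint_table_entries(n):
    """Entries in a full joint table over n binary variables."""
    return 2 ** n

def chain_factorisation_entries(n):
    """Entries if the model factorises as a Markov chain:
    one table for p(x1) plus n-1 tables for p(x_i | x_{i-1})."""
    return 2 + 4 * (n - 1)

for n in (10, 20, 30):
    print(n, joint_table_entries(n), chain_factorisation_entries(n))
```

At 30 variables the joint table already exceeds a billion entries while the chain needs 118 numbers; models whose graphs are not this sparse fall back on the approximation methods this thesis studies.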
Mastering probabilistic graphical models using Python
Ankan, Ankur
2015-01-01
If you are a researcher or a machine learning enthusiast, or are working in the data science field and have a basic idea of Bayesian learning or probabilistic graphical models, this book will help you to understand the details of graphical models and use them in your data science problems.
Graphical modelling software in R - status
DEFF Research Database (Denmark)
Dethlefsen, Claus; Højsgaard, Søren; Lauritzen, Steffen L.
2007-01-01
, and Kreiner 1995), MIM (Edwards 2000), and Tetrad (Glymour, Scheines, Spirtes, and Kelley 1987). The gR initiative (Lauritzen 2002) aims at making graphical models available in R (R Development Core Team 2006). A small grant from the Danish Science Foundation supported this initiative. We will summarize...... the results of the initiative so far. Specifically we will illustrate some of the R packages for graphical modelling currently on CRAN and discuss their strengths and weaknesses....
Probabilistic graphical model representation in phylogenetics.
Höhna, Sebastian; Heath, Tracy A; Boussau, Bastien; Landis, Michael J; Ronquist, Fredrik; Huelsenbeck, John P
2014-09-01
Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis-Hastings or Gibbs sampling of the posterior distribution. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
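The core idea named here, breaking a complex model into conditionally independent distributions, can be sketched with a three-node chain-structured DAG whose joint is the product of local conditionals (the probabilities are made up):

```python
from itertools import product

# p(a), p(b|a), p(c|b) for binary variables; the joint factorises as
# p(a, b, c) = p(a) * p(b|a) * p(c|b)
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Because each local table is normalised, the factorised joint sums to 1.
total = sum(joint(a, b, c) for a, b, c in product((0, 1), repeat=3))
print(round(total, 10))  # -> 1.0
```

Phylogenetic graphical models follow the same pattern, with tree plates standing in for the repeated subgraph structure of the phylogeny.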
Item Screening in Graphical Loglinear Rasch Models
DEFF Research Database (Denmark)
Kreiner, Svend; Christensen, Karl Bang
2011-01-01
In behavioural sciences, local dependence and DIF are common, and purification procedures that eliminate items with these weaknesses often result in short scales with poor reliability. Graphical loglinear Rasch models (Kreiner & Christensen, in Statistical Methods for Quality of Life Studies, ed....... by M. Mesbah, F.C. Cole & M.T. Lee, Kluwer Academic, pp. 187–203, 2002) where uniform DIF and uniform local dependence are permitted solve this dilemma by modelling the local dependence and DIF. Identifying loglinear Rasch models by a stepwise model search is often very time consuming, since...... the initial item analysis may disclose a great deal of spurious and misleading evidence of DIF and local dependence that has to be disposed of during the modelling procedure. Like graphical models, graphical loglinear Rasch models possess Markov properties that are useful during the statistical analysis...
Cheng, Bin; Lei, Guangming; Wang, Yahong
2010-08-01
The strong interest in vectorization and recognition for engineering drawings is driven by a wide spectrum of promising applications in areas such as 3D reconstruction, information interchange, content-based retrieval of engineering drawings, and 2D understanding of engineering drawings. This paper reviews the state of development and the recognition methods for vectorizing and recognizing circular-typed graphics objects in engineering drawings, and provides a comprehensive survey of the research history and the state of the art in their recognition and conversion. Approaches to converting circular-typed graphics objects are divided into fitting-based, vector-based, global-based and integrated methods. The advantages and disadvantages of all existing models of circular graphics objects are analyzed and compared, and research difficulties and development trends for engineering drawings are clearly discussed. The survey closes with detailed discussions of research challenges and future directions for vectorization and recognition of engineering drawings.
Graphical Model Theory for Wireless Sensor Networks
International Nuclear Information System (INIS)
Davis, William B.
2002-01-01
Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm
Graphical Model Theory for Wireless Sensor Networks
Energy Technology Data Exchange (ETDEWEB)
Davis, William B.
2002-12-08
Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm.
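The decentralized inference that the junction tree algorithm generalizes can be illustrated on the simplest junction tree, a chain, where forward message passing recovers the same marginal as brute-force summation over all configurations. The potentials below are illustrative, standing in for, say, per-sensor reliabilities and inter-sensor agreement:

```python
from itertools import product

# Chain x1 - x2 - x3 of binary variables with unary and pairwise potentials.
unary = [[1.0, 2.0], [1.0, 1.0], [3.0, 1.0]]
pair = [[1.0, 0.5], [0.5, 1.0]]  # same potential on both edges

def marginal_x3_message_passing():
    """Forward messages: each node only needs its neighbour's summary."""
    msg = unary[0][:]
    for i in (1, 2):
        msg = [unary[i][xj] * sum(msg[xi] * pair[xi][xj] for xi in (0, 1))
               for xj in (0, 1)]
    z = sum(msg)
    return [m / z for m in msg]

def marginal_x3_brute_force():
    """Reference: sum the unnormalised joint over all 2**3 configurations."""
    scores = [0.0, 0.0]
    for x1, x2, x3 in product((0, 1), repeat=3):
        w = (unary[0][x1] * unary[1][x2] * unary[2][x3]
             * pair[x1][x2] * pair[x2][x3])
        scores[x3] += w
    z = sum(scores)
    return [s / z for s in scores]
```

The brute-force cost grows exponentially in the number of sensors while the message-passing cost grows linearly, which is what makes this style of computation attractive on resource-limited sensor networks.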
Transforming Graphical System Models To Graphical Attack Models
Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof; Kammüller, Florian; Mauw, S.; Kordy, B.
2015-01-01
Manually identifying possible attacks on an organisation is a complex undertaking; many different factors must be considered, and the resulting attack scenarios can be complex and hard to maintain as the organisation changes. System models provide a systematic representation of organisations that
Graphical Model Debugger Framework for Embedded Systems
DEFF Research Database (Denmark)
Zeng, Kebin
2010-01-01
Model Driven Software Development has offered a faster way to design and implement embedded real-time software by moving the design to a model level, and by transforming models to code. However, the testing of embedded systems has remained at the code level. This paper presents a Graphical Model...... Debugger Framework, providing an auxiliary avenue of analysis of system models at runtime by executing generated code and updating models synchronously, which allows embedded developers to focus on the model level. With the model debugger, embedded developers can graphically test their design model...... and check the running status of the system, which offers a debugging capability on a higher level of abstraction. The framework intends to contribute a tool to the Eclipse society, especially suitable for model-driven development of embedded systems....
Building probabilistic graphical models with Python
Karkera, Kiran R
2014-01-01
This is a short, practical guide that allows data scientists to understand the concepts of Graphical models and enables them to try them out using small Python code snippets, without being too mathematically complicated. If you are a data scientist who knows about machine learning and want to enhance your knowledge of graphical models, such as Bayes network, in order to use them to solve real-world problems using Python libraries, this book is for you. This book is intended for those who have some Python and machine learning experience, or are exploring the machine learning field.
Planar graphical models which are easy
Energy Technology Data Exchange (ETDEWEB)
Chertkov, Michael [Los Alamos National Laboratory; Chernyak, Vladimir [WAYNE STATE UNIV
2009-01-01
We describe a rich family of binary-variable statistical mechanics models on planar graphs which are equivalent to Gaussian Grassmann graphical models (free fermions). Calculating the partition function (weighted counting) in these models is easy (of polynomial complexity), as it reduces to evaluating determinants of matrices whose size is linear in the number of variables. In particular, this family of models covers the Holographic Algorithms of Valiant and extends the Gauge Transformations discussed in our previous works.
Probabilistic reasoning with graphical security models
Kordy, Barbara; Pouly, Marc; Schweitzer, Patrick
This work provides a computational framework for meaningful probabilistic evaluation of attack–defense scenarios involving dependent actions. We combine the graphical security modeling technique of attack–defense trees with probabilistic information expressed in terms of Bayesian networks. In order
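The baseline that this work improves on can be sketched by evaluating an attack-tree-style goal under the usual independence assumption for leaf actions; handling dependent actions via Bayesian networks, as the work above does, is exactly what this naive recursion cannot do. Probabilities and structure are invented:

```python
def success_prob(node):
    """Probability that a goal succeeds, assuming independent leaves.
    Node shapes: ("leaf", p) | ("and", children) | ("or", children)."""
    kind = node[0]
    if kind == "leaf":
        return node[1]
    probs = [success_prob(c) for c in node[1]]
    if kind == "and":
        out = 1.0
        for p in probs:
            out *= p
        return out
    # OR succeeds unless every child fails
    fail = 1.0
    for p in probs:
        fail *= 1.0 - p
    return 1.0 - fail

goal = ("or", [("leaf", 0.3),
               ("and", [("leaf", 0.9), ("leaf", 0.5)])])
print(round(success_prob(goal), 3))  # -> 0.615
```

If two leaves share a common cause (the same guard, the same credential), the products above are wrong, which motivates attaching a Bayesian network over the leaf actions instead.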
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
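A toy example shows why modeling column dependencies matters for selectivity estimation: the common attribute-value independence assumption multiplies per-column selectivities and misestimates correlated columns, whereas the true selectivity accounts for their joint distribution. The table below is invented:

```python
# Toy relation of (make, model) pairs; the two columns are highly correlated.
rows = [("honda", "civic")] * 40 + [("honda", "accord")] * 10 + \
       [("toyota", "corolla")] * 50

def true_selectivity(pred):
    """Fraction of rows matching every (column, value) conjunct."""
    return sum(1 for r in rows if all(r[i] == v for i, v in pred)) / len(rows)

def independence_estimate(pred):
    """Standard optimizer estimate: product of per-column selectivities."""
    est = 1.0
    for i, v in pred:
        est *= sum(1 for r in rows if r[i] == v) / len(rows)
    return est

pred = [(0, "honda"), (1, "civic")]
print(true_selectivity(pred))       # -> 0.4
print(independence_estimate(pred))  # -> 0.2
```

A graphical model over the columns captures exactly such dependencies with small conditional tables, closing the factor-of-two gap above without storing the full joint.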
Graphical models for inferring single molecule dynamics
Directory of Open Access Journals (Sweden)
Gonzalez Ruben L
2010-10-01
Full Text Available Background: The recent explosion of experimental techniques in single molecule biophysics has generated a variety of novel time series data requiring equally novel computational tools for analysis and inference. This article describes in general terms how graphical modeling may be used to learn from biophysical time series data using the variational Bayesian expectation maximization algorithm (VBEM). The discussion is illustrated by the example of single-molecule fluorescence resonance energy transfer (smFRET) versus time data, where the smFRET time series is modeled as a hidden Markov model (HMM) with Gaussian observables. A detailed description of smFRET is provided as well. Results: The VBEM algorithm returns the model's evidence and an approximating posterior parameter distribution given the data. The former provides a metric for model selection via maximum evidence (ME), and the latter a description of the model's parameters learned from the data. ME/VBEM provide several advantages over the more commonly used approach of maximum likelihood (ML) optimized by the expectation maximization (EM) algorithm, the most important being a natural form of model selection and a well-posed (non-divergent) optimization problem. Conclusions: The results demonstrate the utility of graphical modeling for inference of dynamic processes in single molecule biophysics.
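The likelihood computation underlying such HMM analyses can be sketched with a plain forward recursion for a two-state HMM with Gaussian observables, checked against brute-force enumeration over state paths. This is only the likelihood evaluation, not the VBEM procedure itself, and all parameter values are illustrative:

```python
import math
from itertools import product

# Two-state HMM with Gaussian observables, loosely mirroring low/high
# smFRET efficiency states (all numbers illustrative).
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.2, 0.8]]
means, sigma = [0.2, 0.8], 0.1

def gauss(x, mu):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_forward(obs):
    """Forward recursion: O(T * S**2) instead of O(S**T)."""
    alpha = [pi[s] * gauss(obs[0], means[s]) for s in (0, 1)]
    for x in obs[1:]:
        alpha = [gauss(x, means[s]) * sum(alpha[r] * A[r][s] for r in (0, 1))
                 for s in (0, 1)]
    return sum(alpha)

def likelihood_brute(obs):
    """Reference: sum over every possible hidden state path."""
    total = 0.0
    for path in product((0, 1), repeat=len(obs)):
        p = pi[path[0]] * gauss(obs[0], means[path[0]])
        for t in range(1, len(obs)):
            p *= A[path[t - 1]][path[t]] * gauss(obs[t], means[path[t]])
        total += p
    return total
```

VBEM wraps recursions like this inside an iterative update of posterior distributions over the parameters, yielding the evidence used for model selection.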
Stochastic Spectral Descent for Discrete Graphical Models
International Nuclear Information System (INIS)
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; Carin, Lawrence; Cevher, Volkan
2015-01-01
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten-∞ norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
GENI: A graphical environment for model-based control
International Nuclear Information System (INIS)
Kleban, S.; Lee, M.; Zambre, Y.
1989-10-01
A new method to operate machine and beam simulation programs for accelerator control has been developed. Existing methods, although cumbersome, have been used in control systems for commissioning and operation of many machines. We developed GENI, a generalized graphical interface to these programs for model-based control. This "object-oriented"-like environment is described and some typical applications are presented. 4 refs., 5 figs
An object-oriented implementation of a graphical-programming system
International Nuclear Information System (INIS)
Cunningham, G.S.; Hanson, K.M.; Jennings, G.R. Jr.; Wolf, D.R.
1994-01-01
Object-oriented (OO) analysis, design, and programming is a powerful paradigm for creating software that is easily understood, modified, and maintained. In this paper the authors demonstrate how the OO concepts of abstraction, inheritance, encapsulation, polymorphism, and dynamic binding have aided in the design of a graphical-programming tool. The tool that they have developed allows a user to build radiographic system models for computing simulated radiographic data. It will eventually be used to perform Bayesian reconstructions of objects given radiographic data. The models are built by connecting icons that represent physical transformations, such as line integrals, exponentiation, and convolution, on a canvas. They will also briefly discuss ParcPlace's application development environment, VisualWorks, which they have found to be as helpful as the OO paradigm
Spectron: Graphical Model for Interacting With Timbre
Directory of Open Access Journals (Sweden)
Daniel Gómez
2009-06-01
Full Text Available The algorithms for creating and manipulating sound by electronic or digital means have grown in number and complexity since the creation of the first analog synthesizers. The techniques for visualizing these synthesis models have not grown along with synthesizers, in either hardware or software. In this paper, the possibilities for graphically representing and controlling timbre are presented, based on displaying the parameters involved in its synthesis model. A very simple data set was extracted from a commercial subtractive synthesizer and analyzed with two different approaches, dimensionality reduction and abstract data visualization. The results of these two approaches were used as leads to design a synthesizer prototype: the Spectron synthesizer. This prototype uses an amplitude-versus-frequency graphic as its main interface to convey information about the timbre and to interact with it; its controls simplify the number of variables of a classic oscillator and expand its possibilities for generating additional timbres.
Formal Analysis of Graphical Security Models
DEFF Research Database (Denmark)
Aslanyan, Zaruhi
The increasing usage of computer-based systems in almost every aspect of our daily life makes the threat posed by potential attackers more and more dangerous, and a successful attack more and more rewarding. Moreover, the complexity of these systems is also increasing, including physical devices......, software components and human actors interacting with each other to form so-called socio-technical systems. The importance of socio-technical systems to modern societies requires verifying their security properties formally, while their inherent complexity makes manual analyses impracticable. Graphical...... models for security offer an unrivalled opportunity to describe socio-technical systems, for they allow representing different aspects like human behaviour, computation and physical phenomena in an abstract yet uniform manner. Moreover, these models can be assigned a formal semantics, thereby allowing...
Bayesian graphical models for genomewide association studies.
Verzilli, Claudio J; Stallard, Nigel; Whittaker, John C
2006-07-01
As the extent of human genetic variation becomes more fully characterized, the research community is faced with the challenging task of using this information to dissect the heritable components of complex traits. Genomewide association studies offer great promise in this respect, but their analysis poses formidable difficulties. In this article, we describe a computationally efficient approach to mining genotype-phenotype associations that scales to the size of the data sets currently being collected in such studies. We use discrete graphical models as a data-mining tool, searching for single- or multilocus patterns of association around a causative site. The approach is fully Bayesian, allowing us to incorporate prior knowledge on the spatial dependencies around each marker due to linkage disequilibrium, which reduces considerably the number of possible graphical structures. A Markov chain Monte Carlo scheme is developed that yields samples from the posterior distribution of graphs conditional on the data from which probabilistic statements about the strength of any genotype-phenotype association can be made. Using data simulated under scenarios that vary in marker density, genotype relative risk of a causative allele, and mode of inheritance, we show that the proposed approach has better localization properties and leads to lower false-positive rates than do single-locus analyses. Finally, we present an application of our method to a quasi-synthetic data set in which data from the CYP2D6 region are embedded within simulated data on 100K single-nucleotide polymorphisms. Analysis is quick (<5 min), and we are able to localize the causative site to a very short interval.
Mining protein kinases regulation using graphical models.
Chen, Qingfeng; Chen, Yi-Ping Phoebe
2011-03-01
Abnormal kinase activity is a frequent cause of disease, which makes kinases a promising pharmacological target. Thus, it is critical to identify the characteristics of protein kinase regulation by studying the activation and inhibition of kinase subunits in response to varied stimuli. The Bayesian network (BN) is a formalism for probabilistic reasoning that has been widely used for learning dependency models. However, for high-dimensional discrete random vectors the set of plausible models becomes large, and a full comparison of the posterior probabilities of all competing models becomes infeasible. A solution to this problem is based on the Markov Chain Monte Carlo (MCMC) method. This paper proposes a BN-based framework to discover the dependency correlations of kinase regulation. Our approach applies the MCMC method to generate a sequence of samples from a probability distribution, by which to approximate the distribution. The frequent connections (edges) are identified from the sampled graphical models. Our results point to a number of novel candidate regulation patterns that are of biological interest, including inferred associations that were previously unknown.
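The MCMC-over-graph-structures idea described above can be sketched on synthetic data. The following is a minimal illustration, not the authors' implementation: a Metropolis sampler over three-node DAG structures scored by a Gaussian BIC, with edge frequencies tallied from the sampled graphs. All variable names, the scoring choice, and the data are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 3

# Synthetic "activity" data with true structure x0 -> x1 -> x2.
x0 = rng.normal(size=n)
x1 = 0.9 * x0 + 0.4 * rng.normal(size=n)
x2 = 0.9 * x1 + 0.4 * rng.normal(size=n)
X = np.column_stack([x0, x1, x2])

def node_score(j, parents):
    """Gaussian BIC contribution of node j given its parent set."""
    A = np.column_stack([np.ones(n)] + [X[:, k] for k in parents])
    beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    rss = np.sum((X[:, j] - A @ beta) ** 2)
    return -0.5 * n * np.log(rss / n) - 0.5 * A.shape[1] * np.log(n)

def score(adj):
    return sum(node_score(j, list(np.flatnonzero(adj[:, j]))) for j in range(p))

def has_cycle(adj):
    # DAG check: repeatedly peel off nodes with no incoming edges.
    nodes = set(range(p))
    while nodes:
        roots = [v for v in nodes if not any(adj[u, v] for u in nodes)]
        if not roots:
            return True
        nodes -= set(roots)
    return False

adj = np.zeros((p, p), dtype=int)
cur = score(adj)
edge_counts = np.zeros((p, p))
steps = 2000
for _ in range(steps):
    i, j = rng.integers(0, p, size=2)
    if i != j:
        prop = adj.copy()
        prop[i, j] ^= 1                          # toggle edge i -> j
        if not has_cycle(prop):
            s = score(prop)
            if np.log(rng.random()) < s - cur:   # Metropolis acceptance
                adj, cur = prop, s
    edge_counts += adj                           # tally edges over the chain

# "Frequent connections": fraction of sampled graphs containing each edge,
# summed over both directions.
undirected = (edge_counts + edge_counts.T) / steps
print(np.round(undirected, 2))
```

With this strong synthetic signal, the true skeleton edges 0-1 and 1-2 appear in almost every sampled graph, while the spurious 0-2 edge appears rarely.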
Barnhardt, Brian; Rucker, Sean; Bearden, David A.; Barrera, Mark J.
1996-06-01
The simulation of developing complex systems requires flexibility to allow for changing system requirements and constraints. The object-oriented paradigm provides an environment suitable for establishing flexibility, rapid reconfiguration of new architectures, and integration of new models. This paper outlines the development and application of the Brilliant Eyes simulator (BESim), sponsored by the US Air Force Space and Missile Systems Center. BESim simulates the Space and Missile Tracking System, formerly known as Brilliant Eyes, which represents the low-earth-orbiting component of the space-based infrared system. BESim has powerful tools for simulation setup and analysis of results. The pre-processor enables the user to specify system characteristics, output data collection, external data interfaces, and modeling fidelity. The post-processor consists of a graphical user interface which allows easy access to all simulation output in graphical or tabular form. This includes 2D and 3D graphical playback of performance results.
Real time natural object modeling framework
International Nuclear Information System (INIS)
Rana, H.A.; Shamsuddin, S.M.; Sunar, M.H.
2008-01-01
CG (Computer Graphics) is a key technology for producing visual content. Computer-generated imagery techniques are currently being developed and applied, particularly in the fields of virtual reality applications, film production, training and flight simulators, to provide total composition of realistic computer graphic images. Natural objects like clouds are an integral feature of the sky; without them, synthetic outdoor scenes seem unrealistic. Modeling and animating such objects is a difficult task. Most systems are difficult to use, as they require adjustment of numerous complex parameters and are non-interactive. This paper presents an intuitive, interactive system to artistically model, animate, and render visually convincing clouds using modern graphics hardware. A high-level interface models clouds through the visual use of cubes. Clouds are rendered by making use of the hardware-accelerated OpenGL API. The resulting interactive design and rendering system produces perceptually convincing cloud models that can be used in any interactive system. (author)
EasyModeller: A graphical interface to MODELLER
Directory of Open Access Journals (Sweden)
Kuntal Bhusan K
2010-08-01
Full Text Available Abstract Background MODELLER is a program for automated protein homology modeling. It is one of the most widely used tools for homology or comparative modeling of protein three-dimensional structures, but most users find it difficult to start with MODELLER as it is command-line based and requires knowledge of basic Python scripting to use it efficiently. Findings The study was designed with the aim of developing "EasyModeller", a frontend graphical interface to MODELLER using Perl/Tk, which can be used as a standalone tool on the Windows platform with MODELLER and Python preinstalled. It helps inexperienced users to perform modeling, assessment, visualization, and optimization of protein models in a simple and straightforward way. Conclusion EasyModeller provides a straightforward graphical interface and functions as a stand-alone tool which can be used on a standard personal computer with Microsoft Windows as the operating system.
ModelMate - A graphical user interface for model analysis
Banta, Edward R.
2011-01-01
ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.
EasyModeller: A graphical interface to MODELLER.
Kuntal, Bhusan K; Aparoy, Polamarasetty; Reddanna, Pallu
2010-08-16
MODELLER is a program for automated protein homology modeling. It is one of the most widely used tools for homology or comparative modeling of protein three-dimensional structures, but most users find it difficult to start with MODELLER as it is command-line based and requires knowledge of basic Python scripting to use it efficiently. The study was designed with the aim of developing "EasyModeller", a frontend graphical interface to MODELLER using Perl/Tk, which can be used as a standalone tool on the Windows platform with MODELLER and Python preinstalled. It helps inexperienced users to perform modeling, assessment, visualization, and optimization of protein models in a simple and straightforward way. EasyModeller provides a straightforward graphical interface and functions as a stand-alone tool which can be used on a standard personal computer with Microsoft Windows as the operating system.
Link Prediction via Sparse Gaussian Graphical Model
Directory of Open Access Journals (Sweden)
Liangliang Zhang
2016-01-01
Full Text Available Link prediction is an important task in complex network analysis. Traditional link prediction methods are limited by network topology and the lack of node property information, which makes predicting links challenging. In this study, we address link prediction using a sparse Gaussian graphical model and demonstrate its theoretical and practical effectiveness. In theory, link prediction is executed by estimating the inverse covariance matrix of samples to overcome information limits. The proposed method was evaluated with four small and four large real-world datasets. The experimental results show that the area under the curve (AUC) value obtained by the proposed method improved by an average of 3% on the small datasets and 12.5% on the large datasets compared with 13 mainstream similarity methods. This method outperforms the baseline method, and its prediction accuracy is superior to mainstream methods when using only 80% of the training set. The method also provides significantly higher AUC values when using only 60% of the training set on the Dolphin and Taro datasets. Furthermore, the error rate of the proposed method demonstrates superior performance on all datasets compared to mainstream methods.
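The core step, estimating the inverse covariance (precision) matrix and reading link scores from it, can be sketched with plain NumPy. This is a simplified stand-in for the paper's sparse estimator (light ridge shrinkage instead of an L1 penalty), on an invented 5-node toy network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth sparse precision matrix on 5 nodes; nonzero off-diagonal
# entries correspond to the true links 0-1, 1-2 and 3-4.
K = np.eye(5)
for i, j in [(0, 1), (1, 2), (3, 4)]:
    K[i, j] = K[j, i] = 0.4

samples = rng.multivariate_normal(np.zeros(5), np.linalg.inv(K), size=5000)

# Estimate the precision matrix (ridge shrinkage for numerical stability;
# a sparse estimator would use an L1 penalty instead).
S = np.cov(samples, rowvar=False)
K_hat = np.linalg.inv(S + 1e-3 * np.eye(5))

# Partial correlation of i and j given all remaining variables.
d = np.sqrt(np.diag(K_hat))
pcorr = -K_hat / np.outer(d, d)

# Rank all node pairs by |partial correlation|: true links come out on top.
pairs = [(i, j) for i in range(5) for j in range(i + 1, 5)]
ranked = sorted(pairs, key=lambda ij: -abs(pcorr[ij]))
print(ranked[:3])
```

With 5000 samples the three top-ranked pairs are exactly the three true links, illustrating why the inverse covariance matrix is informative for link prediction even without node properties.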
Quantum Graphical Models and Belief Propagation
International Nuclear Information System (INIS)
Leifer, M.S.; Poulin, D.
2008-01-01
Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersley-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.
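As background for the quantum generalization, the classical sum-product (Belief Propagation) algorithm computes exact marginals on trees by passing messages. A minimal sketch on a two-factor chain of binary variables, with invented factor values, compared against the brute-force marginal:

```python
import numpy as np

# Chain A - B - C of binary variables with two pairwise factors.
# Joint distribution is proportional to phi_AB[a, b] * phi_BC[b, c].
phi_AB = np.array([[2.0, 1.0], [1.0, 3.0]])
phi_BC = np.array([[1.0, 2.0], [4.0, 1.0]])

# Sum-product messages toward B; the chain is a tree, so BP is exact.
m_A_to_B = phi_AB.sum(axis=0)          # marginalize out a
m_C_to_B = phi_BC.sum(axis=1)          # marginalize out c
belief_B = m_A_to_B * m_C_to_B
belief_B /= belief_B.sum()             # normalize to a distribution

# Brute-force marginal of B for comparison.
joint = phi_AB[:, :, None] * phi_BC[None, :, :]
p_B = joint.sum(axis=(0, 2))
p_B /= p_B.sum()

print(belief_B, p_B)  # identical: [9/29, 20/29]
```

The same message-passing skeleton underlies the quantum algorithm described above, with classical factors replaced by operator-valued objects.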
Graphical tools for model selection in generalized linear models.
Murray, K; Heritier, S; Müller, S
2013-11-10
Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are sparse. This article describes graphical methods that assist in the selection of models and the comparison of many different selection criteria. Specifically, we describe, for logistic regression, how to visualise measures of description loss and of model complexity to help resolve the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these can be invaluable for the model building process. We show with two case studies how these proposed tools are useful for learning more about important variables in the data and how they can assist the understanding of the model building process. Copyright © 2013 John Wiley & Sons, Ltd.
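The bootstrap-based inclusion frequencies behind a variable inclusion plot can be sketched as follows. For brevity this toy uses best-subset OLS with BIC rather than the article's logistic regression; the data and variable names are invented:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, names = 200, ["x1", "x2", "x3"]

# Synthetic data: x1 and x2 truly drive y, x3 is pure noise.
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(size=n)

def bic(Xb, yb, cols):
    """BIC of an OLS fit of yb on the chosen columns (plus intercept)."""
    A = np.column_stack([np.ones(len(yb))] + [Xb[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, yb, rcond=None)
    resid = yb - A @ beta
    return len(yb) * np.log(resid @ resid / len(yb)) + A.shape[1] * np.log(len(yb))

# All candidate subsets of the three predictors.
subsets = [c for r in range(4) for c in itertools.combinations(range(3), r)]
counts = dict.fromkeys(names, 0)
B = 200
for _ in range(B):
    idx = rng.integers(0, n, size=n)          # bootstrap resample
    Xb, yb = X[idx], y[idx]
    best = min(subsets, key=lambda cols: bic(Xb, yb, cols))
    for c in best:
        counts[names[c]] += 1

# Inclusion frequencies: the raw numbers a variable inclusion plot displays.
freq = {name: cnt / B for name, cnt in counts.items()}
print(freq)
```

The genuinely important variables are selected in (nearly) every bootstrap replicate, while the noise variable is selected rarely; plotting `freq` across a grid of penalty values gives the inclusion plot described above.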
Graphical modeling and query language for hospitals.
Barzdins, Janis; Barzdins, Juris; Rencis, Edgars; Sostaks, Agris
2013-01-01
So far there has been little evidence that implementation of health information technologies (HIT) is leading to health care cost savings. One of the reasons for this lack of impact likely lies in the complexity of business process ownership in hospitals. The goal of our research is to develop a business model-based method for hospital use which would allow doctors to retrieve ad-hoc information directly from various hospital databases. We have developed a special domain-specific process modelling language called MedMod. Formally, we define the MedMod language as a profile on UML Class diagrams, but we also demonstrate it on examples, where we explain the semantics of all its elements informally. Moreover, we have developed the Process Query Language (PQL), which is based on the MedMod process definition language. The purpose of PQL is to allow a doctor to query (filter) runtime data of the hospital's processes described using MedMod. The MedMod language tries to overcome deficiencies in existing process modeling languages by allowing specification of the loosely defined sequence of steps to be performed in a clinical process. The main advantages of PQL lie in two areas, usability and efficiency: 1) the view on data through the "glasses" of a familiar process; 2) the simple and easy-to-perceive means of setting filtering conditions, which require no more expertise than using spreadsheet applications; 3) the dynamic response to each step in construction of the complete query, which shortens the learning curve greatly and reduces the error rate; and 4) the selected means of filtering and data retrieval, which allow queries to be executed in O(n) time in the size of the dataset. We plan to continue developing this project with three further steps. First, we are planning to develop user-friendly graphical editors for the MedMod process modeling and query languages. The second step is to evaluate the usability of the proposed language and tool...
Modelling Digital Media Objects
DEFF Research Database (Denmark)
Troelsgaard, Rasmus
The goal of this thesis is to investigate two relevant issues regarding computational representation and classification of digital multi-media objects. With a special focus on music, a model for representation of objects comprising multiple heterogeneous data types is investigated. Necessary to this work are considerations regarding integration of multiple diverse data modalities and evaluation of the resulting concept representation. Regarding modelling of data exhibiting certain sequential structure, a number of theoretical and empirical results are presented. These are results related to model... The particular aspects considered in the publications are sound, song lyrics, and user-provided metadata. This model integrates the diverse data types comprising the objects and defines concrete unified representations in a joint "semantic" space. Within the context of this model, general measures of similarity...
PKgraph: an R package for graphically diagnosing population pharmacokinetic models.
Sun, Xiaoyong; Wu, Kai; Cook, Dianne
2011-12-01
Population pharmacokinetic (PopPK) modeling has become increasingly important in drug development because it handles unbalanced design, sparse data and the study of individual variation. However, the increased complexity of the model makes it more of a challenge to diagnose the fit. Graphics can play an important and unique role in PopPK model diagnostics. The software described in this paper, PKgraph, provides a graphical user interface for PopPK model diagnosis. It also provides an integrated and comprehensive platform for the analysis of pharmacokinetic data including exploratory data analysis, goodness of model fit, model validation and model comparison. Results from a variety of model fitting software, including NONMEM, Monolix, SAS and R, can be used. PKgraph is programmed in R, and uses the R packages lattice and ggplot2 for static graphics, and rggobi for interactive graphics. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
The complete guide to blender graphics computer modeling and animation
Blain, John M
2014-01-01
Smoothly Leads Users into the Subject of Computer Graphics through the Blender GUI. Blender, the free and open source 3D computer modeling and animation program, allows users to create and animate models and figures in scenes, compile feature movies, and interact with the models and create video games. Reflecting the latest version of Blender, The Complete Guide to Blender Graphics: Computer Modeling & Animation, 2nd Edition helps beginners learn the basics of computer animation using this versatile graphics program. This edition incorporates many new features of Blender, including developments...
A methodology for acquiring qualitative knowledge for probabilistic graphical models
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders L.
2004-01-01
We present a practical and general methodology that simplifies the task of acquiring and formulating qualitative knowledge for constructing probabilistic graphical models (PGMs). The methodology efficiently captures and communicates expert knowledge, and has significantly eased the model development...
The gRbase Package for Graphical Modelling in R
DEFF Research Database (Denmark)
Højsgaard, Søren; Dethlefsen, Claus
We have developed a package, called gRbase, consisting of a number of classes and associated methods to support the analysis of data using graphical models. It is developed for the open source language R and is available for several platforms. The package is intended to be widely extendible and flexible so that package developers may implement further types of graphical models using the available methods. gRbase contains methods for representing data and for specification of models using a formal language, and is linked to an interactive graphical user interface for manipulating graphs. We show how...
Implementing the lattice Boltzmann model on commodity graphics hardware
International Nuclear Information System (INIS)
Kaufman, Arie; Fan, Zhe; Petkov, Kaloian
2009-01-01
Modern graphics processing units (GPUs) can perform general-purpose computations in addition to the native specialized graphics operations. Due to the highly parallel nature of graphics processing, the GPU has evolved into a many-core coprocessor that supports high data parallelism. Its performance has been growing at a rate of squared Moore's law, and its peak floating point performance exceeds that of the CPU by an order of magnitude. Therefore, it is a viable platform for time-sensitive and computationally intensive applications. The lattice Boltzmann model (LBM) computations are carried out via linear operations at discrete lattice sites, which can be implemented efficiently using a GPU-based architecture. Our simulations produce results comparable to the CPU version while improving performance by an order of magnitude. We have demonstrated that the GPU is well suited for interactive simulations in many applications, including simulating fire, smoke, lightweight objects in wind, jellyfish swimming in water, and heat shimmering and mirage (using the hybrid thermal LBM). We further advocate the use of a GPU cluster for large scale LBM simulations and for high performance computing. The Stony Brook Visual Computing Cluster has been the platform for several applications, including simulations of real-time plume dispersion in complex urban environments and thermal fluid dynamics in a pressurized water reactor. Major GPU vendors have been targeting the high performance computing market with GPU hardware implementations. Software toolkits such as NVIDIA CUDA provide a convenient development platform that abstracts the GPU and allows access to its underlying stream computing architecture. However, software programming for a GPU cluster remains a challenging task. We have therefore developed the Zippy framework to simplify GPU cluster programming. Zippy is based on global arrays combined with the stream programming model and it hides the low-level details of the
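The stream-and-collide structure that makes the LBM map so well to GPUs can be sketched on a CPU with NumPy. Below is a minimal D2Q9 BGK update on a small periodic grid (grid size, relaxation time and initial condition are invented for illustration; no GPU code is shown). Each collision is purely local to a lattice site and each streaming step is a fixed-offset shift, which is exactly the data parallelism the abstract describes:

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities with their quadrature weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)
nx = ny = 16
tau = 0.8                                        # BGK relaxation time

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux ** 2 + uy ** 2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

# Start from a small density bump at rest.
rho = np.ones((nx, ny))
rho[nx // 2, ny // 2] = 1.1
f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

for _ in range(50):
    rho = f.sum(axis=0)                          # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau    # collide (local per site)
    for i in range(9):                           # stream along each velocity
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print(f.sum())  # total mass stays at 256.1: both steps conserve it
```

On a GPU, the per-site collision becomes one thread per lattice site and the streaming becomes a gather from neighboring sites, which is where the order-of-magnitude speedups come from.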
GRAPHIC REALIZATION FOUNDATIONS OF LOGIC-SEMANTIC MODELING IN DIDACTICS
Directory of Open Access Journals (Sweden)
V. E. Steinberg
2017-01-01
Full Text Available Introduction. Few works have been devoted to the graphic method of logic-semantic modeling of knowledge. Meanwhile, interest in this method is increasing, owing to the essential growth of the visual component in information and educational sources. The present publication is the authors' contribution to the search for new forms and means convenient for visual and logical perception of training material, its assimilation, and operation with elements of knowledge and their transformations. The aim of the research is to justify the graphical implementation of the method of logic-semantic modeling of knowledge presented in a natural language (the training language), and to show the possibilities of applying figurative and conceptual models in student teaching. Methodology and research methods. The research methodology is based on the activity-regulatory, system-multi-dimensional and structural-invariant approaches and the principle of multidimensionality. The graphic realization of logic-semantic models in learning technologies is based on didactic design using computer training programs. Results and scientific novelty. The social and anthropological-cultural bases for adapting the method of logic-semantic knowledge modeling to the problems of didactics are established and reasoned: a coordinate-invariant matrix structure is presented as the basis of logic-semantic models of a figurative and conceptual nature, and the possibilities of using such models as multifunctional didactic regulators (support schemes, navigators through the content of the educational material, guides for educational activities, etc.) are shown. The characteristics of the new teaching tools as objects of semiotics and didactic regulators are considered; their place and role in the structure of external and internal plans of learning activities are pointed out.
Multibody dynamics model building using graphical interfaces
Macala, Glenn A.
1989-01-01
In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general-purpose dynamics simulation programs which generate the equations of motion either explicitly or implicitly via computer code. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustration during program setup. Recent progress on a more natural method of data input for dynamics programs, the graphical interface, is described.
A probabilistic graphical model based stochastic input model construction
International Nuclear Information System (INIS)
Wan, Jiang; Zabaras, Nicholas
2014-01-01
Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given for flows in random media.
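The conditional independence tests mentioned above are commonly implemented via partial correlations in the Gaussian setting. A minimal sketch of one such test (a plausible choice, not necessarily the authors' exact procedure): regress out the conditioning set, then apply a Fisher z test to the residual correlation. The data and the confounder structure are invented for illustration.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# z drives both x and y, so x and y are dependent marginally
# but conditionally independent given z.
z = rng.normal(size=n)
x = z + 0.5 * rng.normal(size=n)
y = z + 0.5 * rng.normal(size=n)

def partial_corr(a, b, conds):
    """Correlation of a and b after regressing out the conditioning set."""
    A = np.column_stack([np.ones(n)] + conds)
    ra = a - A @ np.linalg.lstsq(A, a, rcond=None)[0]
    rb = b - A @ np.linalg.lstsq(A, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

def fisher_z_pvalue(r, n, k):
    """Two-sided p-value for H0: the partial correlation is zero."""
    zstat = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - k - 3)
    return math.erfc(abs(zstat) / math.sqrt(2))

r_marg = partial_corr(x, y, [])
r_cond = partial_corr(x, y, [z])
print(f"marginal r={r_marg:.2f}, p={fisher_z_pvalue(r_marg, n, 0):.1e}")
print(f"given z  r={r_cond:.2f}, p={fisher_z_pvalue(r_cond, n, 1):.2f}")
```

The marginal test rejects independence decisively, while conditioning on z drives the partial correlation to near zero; running such tests over all variable pairs and conditioning sets yields the dependence structure used to factorize the joint PDF.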
Optimal covariance selection for estimation using graphical models
Vichik, Sergey; Oshman, Yaakov
2011-01-01
We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditional...
MAGIC: Model and Graphic Information Converter
Herbert, W. C.
2009-01-01
MAGIC is a software tool capable of converting highly detailed 3D models from an open, standard format, VRML 2.0/97, into the proprietary DTS file format used by the Torque Game Engine from GarageGames. MAGIC is used to convert 3D simulations from authoritative sources into the data needed to run the simulations in NASA's Distributed Observer Network. The Distributed Observer Network (DON) is a simulation presentation tool built by NASA to facilitate the simulation sharing requirements of the Data Presentation and Visualization effort within the Constellation Program. DON is built on top of the Torque Game Engine (TGE) and has chosen TGE's Dynamix Three Space (DTS) file format to represent 3D objects within simulations.
An integrated introduction to computer graphics and geometric modeling
Goldman, Ronald
2009-01-01
… this book may be the first book on geometric modelling that also covers computer graphics. In addition, it may be the first book on computer graphics that integrates a thorough introduction to freeform curves and surfaces and to the mathematical foundations for computer graphics. … the book is well suited for an undergraduate course. … The entire book is very well presented and obviously written by a distinguished and creative researcher and educator. It certainly is a textbook I would recommend. … (Computer-Aided Design, 42, 2010) … Many books concentrate on computer programming and soon beco...
Object models and object representation Tutorial 4
CERN. Geneva; Mahey, Mahendra
2007-01-01
This tutorial will provide a practical overview of current practices in modelling complex or compound digital objects. It will examine some of the key scenarios around creating complex objects and will explore a number of approaches to packaging and transport. Taking research papers, or scholarly works, as an example, the tutorial will explore the different ways in which these, and their descriptive metadata, can be treated as complex objects. Relevant application profiles and metadata formats will be introduced and compared, such as Dublin Core (in particular the DCMI Abstract Model) and MODS, alongside content packaging standards such as METS, MPEG-21 DIDL and IMS CP. Finally, we will consider some future issues and activities that are seeking to address these. The tutorial will be of interest to librarians and technical staff with an interest in metadata or complex objects, and their creation, management and re-use.
Integrating Surface Modeling into the Engineering Design Graphics Curriculum
Hartman, Nathan W.
2006-01-01
It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…
JACK - ANTHROPOMETRIC MODELING SYSTEM FOR SILICON GRAPHICS WORKSTATIONS
Smith, B.
1994-01-01
JACK is an interactive graphics program developed at the University of Pennsylvania that displays and manipulates articulated geometric figures. JACK is typically used to observe how a human mannequin interacts with its environment and what effects body types will have upon the performance of a task in a simulated environment. Any environment can be created, and any number of mannequins can be placed anywhere in that environment. JACK includes facilities to construct limited geometric objects, position figures, perform a variety of analyses on the figures, describe the motion of the figures and specify lighting and surface property information for rendering high quality images. JACK is supplied with a variety of body types pre-defined and known to the system. There are both male and female bodies, ranging from the 5th to the 95th percentile, based on NASA Standard 3000. Each mannequin is fully articulated and reflects the joint limitations of a normal human. JACK is an editor for manipulating previously defined objects known as "Peabody" objects. Used to describe the figures as well as the internal data structure for representing them, Peabody is a language with a powerful and flexible mechanism for representing connectivity between objects, both the joints between individual segments within a figure and arbitrary connections between different figures. Peabody objects are generally comprised of several individual figures, each one a collection of segments. Each segment has a geometry represented by PSURF files that consist of polygons or curved surface patches. Although JACK does not have the capability to create new objects, objects may be created by other geometric modeling programs and then translated into the PSURF format. Environment files are a collection of figures and attributes that may be dynamically moved under the control of an animation file. The animation facilities allow the user to create a sequence of commands that duplicate the movements of a
The appliance of graphics modeling in nuclear plant information system
International Nuclear Information System (INIS)
Bai Zhe; Li Guofang
2010-01-01
A nuclear plant contains many sub-systems, such as operation management, manufacturing, inventory, human resources, and so forth. Standardized data graphics modeling technology can ensure data interaction, compress the design cycle, avoid duplicated design, and ensure data integrity and consistency. A standardized data format, based on the STEP standard and compliant with XML, is a competent tool across the different sub-systems of nuclear plants. To meet this demand, a data graphics modeling standard is proposed. The standard shows the relationships between systems, within a system, and between data. Graphic modeling effectively improves cooperation between systems, designers, engineers, operators, and support departments. It also provides a reliable and available data source for data mining and business intelligence. (authors)
Engineering graphic modelling a workbook for design engineers
Tjalve, E; Frackmann Schmidt, F
2013-01-01
Engineering Graphic Modelling: A Practical Guide to Drawing and Design covers how engineering drawing relates to the design activity. The book describes modeled properties, such as function, structure, form, material, dimension, and surface, as well as the coordinates, symbols, and types of projection of the drawing code. The text presents drawing techniques, such as freehand sketching, bold freehand drawing, drawing with a straightedge, a draughting machine or a plotter, and the use of templates, and then describes the types of drawing. Graphic designers, design engineers, and mechanical engineers...
International Nuclear Information System (INIS)
Gao Wenhuan; Fu Changqing; Kang Kejun
1993-01-01
X Window is a network-oriented and network-transparent windowing system, now dominant in the Unix domain. Object-oriented programming technology can remarkably improve the extensibility of a software system. An introduction to the graphical user interface is given, and the development of a graphical user interface for a radiation information processing system using object-oriented programming technology, based on X Window and independent of the application, is briefly described.
Graphical Tools for Linear Structural Equation Modeling
2014-06-01
regression coefficient βSA.CQ1 vanishes, which can be used to test whether the specification of Model 2 is compatible with the data. Most... because they are all compatible with the graph in Figure 19a, which displays the skeleton and v-structures. Note that we cannot reverse the edge from... implications of linear structural equation models. R-428, <http://ftp.cs.ucla.edu/pub/stat_ser/r428.pdf>, CA. To appear in Proceedings of AAAI-2014.
Developing a CAI Graphics Simulation Model: Guidelines.
Strickland, R. Mack; Poe, Stephen E.
1989-01-01
Discusses producing effective instructional software using a balance of course content and technological capabilities. Describes six phases of an instructional development model: discovery, design, development, coding, documentation, and delivery. Notes that good instructional design should have learner/computer interaction, sequencing of…
Factorizing Probabilistic Graphical Models Using Co-occurrence Rate
Zhu, Zhemin
2011-01-01
Factorization is of fundamental importance in the area of Probabilistic Graphical Models (PGMs). In this paper, we theoretically develop a novel mathematical concept, Co-occurrence Rate (CR), for factorizing PGMs. CR has three obvious advantages: (1) CR provides a unified…
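The abstract is truncated before CR is defined; as a hedged sketch, the usual form of a co-occurrence rate is the ratio of a joint probability to the product of its marginals, which equals 1 everywhere exactly when the variables are independent. The numbers below are made up for illustration:

```python
# Hypothetical sketch of the Co-occurrence Rate (CR) idea: the ratio of a
# joint probability to the product of its marginals.  CR == 1 everywhere
# iff the two variables are independent.
def cr(p_joint, p_x, p_y, x, y):
    """Co-occurrence rate of the pair (x, y)."""
    return p_joint[(x, y)] / (p_x[x] * p_y[y])

# A tiny two-variable distribution (invented numbers).
p_joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
p_x = {0: 0.5, 1: 0.5}                      # marginal of X
p_y = {0: 0.4, 1: 0.6}                      # marginal of Y

print(cr(p_joint, p_x, p_y, 1, 1))          # 0.4 / (0.5 * 0.6) ≈ 1.333
```

A CR above 1 indicates the pair co-occurs more often than independence would predict, which is what makes the quantity useful as a factorization building block.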
Sparse Gaussian graphical mixture model | Lotsi | Afrika Statistika
African Journals Online (AJOL)
Abstract. This paper considers the problem of networks reconstruction from heterogeneous data using a Gaussian Graphical Mixture Model (GGMM). It is well known that parameter estimation in this context is challenging due to large numbers of variables coupled with the degenerate nature of the likelihood. We propose as ...
Graphical models for inference under outcome-dependent sampling
DEFF Research Database (Denmark)
Didelez, V; Kreiner, S; Keiding, N
2010-01-01
We consider situations where data have been collected such that the sampling depends on the outcome of interest and possibly further covariates, as for instance in case-control studies. Graphical models represent assumptions about the conditional independencies among the variables. By including...
Discrete Discriminant analysis based on tree-structured graphical models
DEFF Research Database (Denmark)
Perez de la Cruz, Gonzalo; Eslava, Guillermina
The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance using estimated error rates for real and simulated data. The results show that discriminant…
Sparse time series chain graphical models for reconstructing genetic networks
Abegaz, Fentaw; Wit, Ernst
We propose a sparse high-dimensional time series chain graphical model for reconstructing genetic networks from gene expression data parametrized by a precision matrix and autoregressive coefficient matrix. We consider the time steps as blocks or chains. The proposed approach explores patterns of
Methods for teaching geometric modelling and computer graphics
Energy Technology Data Exchange (ETDEWEB)
Rotkov, S.I.; Faitel`son, Yu. Ts.
1992-05-01
This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.
Interactive computer graphics for bio-stereochemical modelling
Indian Academy of Sciences (India)
Proc. Indian Acad. Sci., Vol. 87 A (Chem. Sci.), No. 4, April 1978, pp. 95-113. Printed in India. Interactive computer graphics for bio-stereochemical modelling. Robert Rein, Shlomo Nir, Karen Haydock and Robert D. MacElroy. Department of Experimental Pathology, Roswell Park Memorial Institute, 666 Elm …
Using probabilistic graphical models to reconstruct biological networks and linkage maps
Wang, Huange
2017-01-01
Probabilistic graphical models (PGMs) offer a conceptual architecture where biological and mathematical objects can be expressed with a common, intuitive formalism. This facilitates the joint development of statistical and computational tools for quantitative analysis of biological data. Over the
Graphical approach to model reduction for nonlinear biochemical networks.
Holland, David O; Krainak, Nicholas C; Saucerman, Jeffrey J
2011-01-01
Model reduction is a central challenge to the development and analysis of multiscale physiology models. Advances in model reduction are needed not only for computational feasibility but also for obtaining conceptual insights from complex systems. Here, we introduce an intuitive graphical approach to model reduction based on phase plane analysis. Timescale separation is identified by the degree of hysteresis observed in phase-loops, which guides a "concentration-clamp" procedure for estimating explicit algebraic relationships between species equilibrating on fast timescales. The primary advantages of this approach over Jacobian-based timescale decomposition are that: 1) it incorporates nonlinear system dynamics, and 2) it can be easily visualized, even directly from experimental data. We tested this graphical model reduction approach using a 25-variable model of cardiac β(1)-adrenergic signaling, obtaining 6- and 4-variable reduced models that retain good predictive capabilities even in response to new perturbations. These 6 signaling species appear to be optimal "kinetic biomarkers" of the overall β(1)-adrenergic pathway. The 6-variable reduced model is well suited for integration into multiscale models of heart function, and more generally, this graphical model reduction approach is readily applicable to a variety of other complex biological systems.
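As a rough, non-authoritative illustration of the concentration-clamp idea (a generic two-variable system with invented rate constants, not the authors' 25-variable cardiac model): a fast species is clamped to the algebraic relation it equilibrates to, leaving a one-variable reduced model.

```python
# A slow species s drives a fast species f that relaxes to the slow manifold
# f ≈ k3*s within a short timescale eps.  "Clamping" f to that algebraic
# relation yields a one-variable reduced model.
k1, k2, k3, eps = 1.0, 0.5, 0.4, 1e-3       # made-up rate constants
dt, steps = 2e-4, 5000                      # explicit Euler to t = 1.0

s, f = 1.0, 0.4                             # start on the slow manifold
for _ in range(steps):                      # full two-variable model
    s, f = s + dt * (-k1 * s + k2 * f), f + dt * (k3 * s - f) / eps

s_red = 1.0
for _ in range(steps):                      # reduced model with f := k3*s
    s_red += dt * (-k1 * s_red + k2 * k3 * s_red)

print(s, s_red)                             # the two trajectories agree closely
```

The reduced trajectory follows exp((k2*k3 - k1)*t) = exp(-0.8t), and the full model deviates from it only by terms of order eps, which is the separation of timescales the graphical phase-loop analysis is designed to detect.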
Analysis of local dependence and multidimensionality in graphical loglinear Rasch models
DEFF Research Database (Denmark)
Kreiner, Svend; Christensen, Karl Bang
local independence; multidimensionality; differential item functioning; uniform local dependency and DIF; graphical Rasch models; loglinear Rasch models
Analysis of Local Dependence and Multidimensionality in Graphical Loglinear Rasch Models
DEFF Research Database (Denmark)
Kreiner, Svend; Christensen, Karl Bang
2004-01-01
Local independence; Multidimensionality; Differential item functioning; Uniform local dependence and DIF; Graphical Rasch models; Loglinear Rasch model
Udupa, Jayaram K.; Odhner, Dewey; Falcao, Alexandre X.; Ciesielski, Krzysztof C.; Miranda, Paulo A. V.; Vaideeswaran, Pavithra; Mishra, Shipra; Grevera, George J.; Saboury, Babak; Torigian, Drew A.
2011-03-01
To make Quantitative Radiology (QR) a reality in routine clinical practice, computerized automatic anatomy recognition (AAR) becomes essential. As part of this larger goal, we present in this paper a novel fuzzy strategy for building bodywide group-wise anatomic models. They have the potential to handle uncertainties and variability in anatomy naturally and to be integrated with the fuzzy connectedness framework for image segmentation. Our approach is to build a family of models, called the Virtual Quantitative Human, representing normal adult subjects at a chosen resolution of the population variables (gender, age). Models are represented hierarchically, the descendents representing organs contained in parent organs. Based on an index of fuzziness of the models, 32 thorax data sets, and 10 organs defined in them, we found that the hierarchical approach to modeling can effectively handle the non-linear relationships in position, scale, and orientation that exist among organs in different patients.
Reasoning with probabilistic and deterministic graphical models exact algorithms
Dechter, Rina
2013-01-01
Graphical models (e.g., Bayesian and constraint networks, influence diagrams, and Markov decision processes) have become a central paradigm for knowledge representation and reasoning in both artificial intelligence and computer science in general. These models are used to perform many reasoning tasks, such as scheduling, planning and learning, diagnosis and prediction, design, hardware and software verification, and bioinformatics. These problems can be stated as the formal tasks of constraint satisfaction and satisfiability, combinatorial optimization, and probabilistic inference. It is well
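A minimal sketch of the kind of exact probabilistic inference such models support, on a made-up three-node chain network; on a chain, variable elimination reduces to summing out one variable at a time:

```python
# Exact inference in a tiny directed graphical model: the chain
# Cloudy -> Rain -> Wet, with invented probability tables.
p_c = {1: 0.5, 0: 0.5}                            # P(Cloudy)
p_r = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.2, 0: 0.8}}  # P(Rain | Cloudy)
p_w = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.1, 0: 0.9}}  # P(Wet | Rain)

# Eliminate Cloudy: message m(r) = sum_c P(c) * P(r | c)
m_r = {r: sum(p_c[c] * p_r[c][r] for c in (0, 1)) for r in (0, 1)}
# Eliminate Rain: P(w) = sum_r m(r) * P(w | r)
p_wet = {w: sum(m_r[r] * p_w[r][w] for r in (0, 1)) for w in (0, 1)}

print(p_wet)   # a proper distribution over Wet; 0.5/0.5 for these tables
```

The same message-passing pattern, generalized to trees and to clustered ("junction tree") graphs, underlies the exact algorithms the book covers.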
Type-2 fuzzy graphical models for pattern recognition
Zeng, Jia
2015-01-01
This book discusses how to combine type-2 fuzzy sets and graphical models to solve a range of real-world pattern recognition problems such as speech recognition, handwritten Chinese character recognition, topic modeling as well as human action recognition. It covers these recent developments while also providing a comprehensive introduction to the fields of type-2 fuzzy sets and graphical models. Though primarily intended for graduate students, researchers and practitioners in fuzzy logic and pattern recognition, the book can also serve as a valuable reference work for researchers without any previous knowledge of these fields. Dr. Jia Zeng is a Professor at the School of Computer Science and Technology, Soochow University, China. Dr. Zhi-Qiang Liu is a Professor at the School of Creative Media, City University of Hong Kong, China.
Graphical Gaussian models with edge and vertex symmetries
DEFF Research Database (Denmark)
Højsgaard, Søren; Lauritzen, Steffen L
2008-01-01
We introduce new types of graphical Gaussian models by placing symmetry restrictions on the concentration or correlation matrix. The models can be represented by coloured graphs, where parameters that are associated with edges or vertices of the same colour are restricted to being identical. We study the properties of such models and derive the necessary algorithms for calculating maximum likelihood estimates. We identify conditions for restrictions on the concentration and correlation matrices being equivalent. This is for example the case when symmetries are generated by permutation…
Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit
Vittaldev, Vivek; Russell, Ryan P.
2017-09-01
Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
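The core Monte Carlo step can be sketched on a CPU as follows; the Gaussian uncertainties, means, and collision radius are invented, and the paper's GPU parallelization, orbit propagation, and close-approach interpolation are omitted:

```python
import numpy as np

# Count how often two uncertain positions come within the combined
# collision radius; the fraction estimates the collision probability.
rng = np.random.default_rng(0)
n = 200_000
mu1, mu2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
sigma = 0.5                                  # per-axis position std (made up)
radius = 0.5                                 # combined collision radius

r1 = mu1 + sigma * rng.standard_normal((n, 3))
r2 = mu2 + sigma * rng.standard_normal((n, 3))
sep = np.linalg.norm(r1 - r2, axis=1)
p_collision = np.mean(sep < radius)
print(p_collision)                           # a few percent for these numbers
```

Each sample is independent, which is exactly why the computation maps so well onto a GPU: the paper's contribution is making this embarrassingly parallel loop fast while still catching every close approach in time.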
A Model for Concurrent Objects
DEFF Research Database (Denmark)
Sørensen, Morten U.
1996-01-01
We present a model for concurrent objects where objects interact by taking part in common events that are closely matched to form call-response pairs, resulting in rendezvous-like communications. Objects are built from primitive objects by parallel composition, encapsulation and hiding. The behaviour of a composite object is straightforwardly derived from the behaviour of the constituent objects. Defining refinement as a strengthened form of trace inclusion, object composition and refinement together form a basis for step-wise development.
Object Modeling and Building Information Modeling
Auråen, Hege; Gjemdal, Hanne
2016-01-01
The main part of this thesis is an online course (Small Private Online Course) entitled "Introduction to Object Modeling and Building Information Modeling". This supplementary report clarifies the choices made in the process of developing the course. The course examines the basic concepts of object modeling, modeling techniques and a modeling language (UML). Further, building information modeling (BIM) is presented as a modeling process, and the object modeling concepts in the BIM software…
Word-level language modeling for P300 spellers based on discriminative graphical models
Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat
2015-04-01
Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
Graphics-based nuclear facility modeling and management
International Nuclear Information System (INIS)
Rod, S.R.
1991-07-01
Nuclear waste management facilities are characterized by their complexity, many unprecedented features, and numerous competing design requirements. This paper describes the development of comprehensive descriptive databases and three-dimensional models of nuclear waste management facilities and applies the database/model to an example facility. The important features of the facility database/model are its abilities to (1) process large volumes of site data, plant data, and nuclear material inventory data in an efficient, integrated manner; (2) produce many different representations of the data to fulfill information needs as they arise; (3) create a complete three-dimensional solid model of the plant with all related information readily accessible; and (4) support complete, consistent inventory control and plant configuration control. While the substantive heart of the system is the database, graphic visualization of the data vastly improves the clarity of the information presented. Graphic representations are a convenient framework for the presentation of plant and inventory data, allowing all types of information to be readily located and presented in a manner that is easily understood. 2 refs., 5 figs., 1 tab
Markov chain Monte Carlo methods in directed graphical models
DEFF Research Database (Denmark)
Højbjerre, Malene
have primarily been based on a Bayesian paradigm, i.e. prior information on the parameters is a prerequisite, but questions about undesirable side effects from the priors are raised. We present a method, based on MCMC methods, that approximates profile log-likelihood functions in directed graphical...... a tendency to foetal loss is heritable. The data possess a complicated dependence structure due to replicate pregnancies for the same woman, and a given family pattern. We conclude that a tendency to foetal loss is heritable. The model is of great interest in genetic epidemiology, because it considers both...
Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2011-01-01
, propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute-value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today's optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all…
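The effect of the attribute-value independence assumption can be seen in a toy example with perfectly correlated attributes (the table below is invented for illustration):

```python
# Two attributes, city and country, that are perfectly correlated.
# Independence multiplies the per-attribute selectivities and so badly
# underestimates the selectivity of the conjunctive predicate.
rows = [("Paris", "France")] * 50 + [("Berlin", "Germany")] * 50

sel_city = sum(r[0] == "Paris" for r in rows) / len(rows)            # 0.5
sel_country = sum(r[1] == "France" for r in rows) / len(rows)        # 0.5
sel_joint = sum(r == ("Paris", "France") for r in rows) / len(rows)  # 0.5

print(sel_city * sel_country, sel_joint)    # 0.25 (estimate) vs 0.5 (true)
```

A graphical model over the attributes records exactly such dependencies, so the joint selectivity can be factored through the model instead of through the independence product.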
Grms or graphical representation of model spaces. Vol. I Basics
International Nuclear Information System (INIS)
Duch, W.
1986-01-01
This book presents a novel approach to the many-body problem in quantum chemistry, nuclear shell theory and solid-state theory. Many-particle model spaces are visualized using graphs, each path of a graph labeling a single basis function or a subspace of functions. Spaces of very high dimension are represented by small graphs. Model spaces have structure that is reflected in the architecture of the corresponding graphs, which in turn is reflected in the structure of the matrices corresponding to operators acting in these spaces. Insight into this structure leads to the formulation of very efficient computer algorithms. Calculation of matrix elements is reduced to comparison of paths in a graph, without ever looking at the functions themselves. Using only very rudimentary mathematical tools, graphical rules of matrix element calculation in abelian cases are derived; in particular, segmentation rules obtained in the unitary group approach are rederived. The graphs are solutions of Diophantine equations of the type appearing in different branches of applied mathematics. Graphical representation of model spaces should find as many applications as have been found for diagrammatic methods in perturbation theory.
Experimental Object-Oriented Modelling
DEFF Research Database (Denmark)
Hansen, Klaus Marius
This thesis examines object-oriented modelling in experimental system development. Object-oriented modelling aims at representing concepts and phenomena of a problem domain in terms of classes and objects. Experimental system development seeks active experimentation in a system development project. We present and discuss techniques for handling and representing uncertainty when modelling in experimental system development; these techniques are centred on patterns and styles for handling uncertainty in object-oriented software architectures. Tools: We present the Knight tool, designed for collaborative modelling in experimental system development, and discuss its design, implementation, and evaluation. The tool has subsequently been successfully commercialized. In summary, this thesis presents techniques and tools that advance the effectiveness and efficiency of experimental modelling.
The graphic model of the spent fuel rod extracting system
International Nuclear Information System (INIS)
Yoon, Jee Sup; Kim, Sung Hyun
1997-01-01
The spent fuel rod extracting system is being developed at KAERI to deal with problems associated with the utilization of storage pools at nuclear power plants. This system consists of an equipment system for extracting rods from spent fuel assemblies, a machine controller, and a supervisory controller. The performance of the extraction system has been investigated through a series of experiments. Even though the system is designed to perform sequential procedures automatically, several problems have been found, such as the gripper sticking to a fuel rod caused by misaligned positioning, and the socket of the impact wrench jamming onto the nut. To this end, a graphical model of the rod extracting system has been made so that possible sequences of operations, including error detection and recovery actions, can be verified by graphic simulation before real operations. For the implementation, IGRIP is being used as a multifunctional tool for developing the rod extraction system. IGRIP is not only an excellent visualization tool, but it is also well suited to modeling a virtual machine. (author). 6 refs., 1 tab., 6 figs
An Accurate and Dynamic Computer Graphics Muscle Model
Levine, David Asher
1997-01-01
A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
OPM Scheme Editor 2: A graphical editor for specifying object-protocol structures
Energy Technology Data Exchange (ETDEWEB)
Chen, I-Min A.; Markowitz, V.M.; Pang, F.; Ben-Shachar, O.
1993-07-01
This document describes an X Window-based Schema Editor for the Object-Protocol Model (OPM). OPM is a data model that supports the specification of complex object and protocol classes. Objects and protocols are qualified in OPM by attributes that are defined over (associated with) value classes. Connections of object and protocol classes are expressed in OPM via attributes. OPM supports the specification (expansion) of protocols in terms of alternatives and sequences of component (sub)protocols. The OPM Schema Editor allows specifying, displaying, modifying, and browsing through OPM schemas. The OPM Schema Editor generates an output file that can be used as input to an OPM schema translation tool that maps OPM schemas into definitions for relational database management systems. The OPM Schema Editor was implemented using C++ and the X11-based Motif toolkit, on a Sun SPARCstation under SunOS 4.1. This document consists of the following parts: (1) a tutorial consisting of seven introductory lessons for the OPM Schema Editor; (2) a reference manual describing all the windows and functions of the OPM Schema Editor; (3) an appendix with an overview of OPM.
Local fit evaluation of structural equation models using graphical criteria.
Thoemmes, Felix; Rosseel, Yves; Textor, Johannes
2018-03-01
Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
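A self-contained sketch of the d-separation criterion the paper builds on, via the standard moralized ancestral graph construction (the helper function and the toy graphs are illustrative, not the authors' software):

```python
from collections import deque

# d-separation test: X ⟂ Y | Z holds in a DAG iff X and Y are disconnected
# in the moralized ancestral graph after deleting Z.  The DAG is given as a
# child -> list-of-parents map.
def d_separated(parents, xs, ys, zs):
    # 1. Restrict to the ancestors of all variables involved.
    relevant, stack = set(), list(xs | ys | zs)
    while stack:
        v = stack.pop()
        if v not in relevant:
            relevant.add(v)
            stack.extend(parents.get(v, []))
    # 2. Moralize: link each node to its parents, and co-parents to each other.
    adj = {v: set() for v in relevant}
    for v in relevant:
        ps = [p for p in parents.get(v, []) if p in relevant]
        for p in ps:
            adj[v].add(p); adj[p].add(v)
        for i, p in enumerate(ps):
            for q in ps[i + 1:]:
                adj[p].add(q); adj[q].add(p)
    # 3. Delete the conditioning set and test reachability by BFS.
    seen, queue = set(xs - zs), deque(xs - zs)
    while queue:
        v = queue.popleft()
        for w in adj[v] - zs - seen:
            seen.add(w); queue.append(w)
    return not (seen & ys)

# Chain X -> M -> Y: X and Y are independent given M, but not marginally.
chain = {"M": ["X"], "Y": ["M"]}
print(d_separated(chain, {"X"}, {"Y"}, {"M"}))  # True
print(d_separated(chain, {"X"}, {"Y"}, set()))  # False
```

Each d-separation implied by the model is a testable conditional independence, which is precisely the local fit evaluation the paper advocates: a failed test points at the specific part of the SEM that misfits.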
Concurrent Models for Object Execution
Diertens, Bob
2012-01-01
In previous work we developed a framework of computational models for the concurrent execution of functions at different levels of abstraction. It shows that the traditional sequential execution of functions is just one possible implementation of an abstract computational model that allows for the concurrent execution of functions. We use this framework as a basis for the development of abstract computational models that allow for the concurrent execution of objects.
Development of virtual hands using animation software and graphical modelling
International Nuclear Information System (INIS)
Oliveira, Erick da S.; Junior, Alberico B. de C.
2016-01-01
Numerical dosimetry uses virtual anthropomorphic simulators to represent the human being in a computational framework and thus assess the risks associated with exposure to a radioactive source. The development of computer animation software has facilitated the construction of these simulators, requiring only knowledge of human anatomy to prepare various types of simulators (man, woman, child and baby) in various positions (sitting, standing, running) or parts thereof (head, trunk and limbs). These simulators are constructed by manipulating mesh loops and, owing to the versatility of the method, one can create various irradiation geometries that were not possible before. In this work, we have built an exposure scenario of a radiopharmaceutical worker manipulating radioactive material, using animation software, graphical modelling and an anatomical database. (author)
Learning transcriptional regulatory relationships using sparse graphical models.
Directory of Open Access Journals (Sweden)
Xiang Zhang
Understanding the organization and function of transcriptional regulatory networks by analyzing high-throughput gene expression profiles is a key problem in computational biology. The challenges in this work are (1) the lack of complete knowledge of the regulatory relationship between the regulators and the associated genes, (2) the potential for spurious associations due to confounding factors, and (3) the number of parameters to learn is usually larger than the number of available microarray experiments. We present a sparse (L1-regularized) graphical model to address these challenges. Our model incorporates known transcription factors and introduces hidden variables to represent possible unknown transcription and confounding factors. The expression level of a gene is modeled as a linear combination of the expression levels of known transcription factors and hidden factors. Using gene expression data covering 39,296 oligonucleotide probes from 1109 human liver samples, we demonstrate that our model better predicts out-of-sample data than a model with no hidden variables. We also show that some of the gene sets associated with hidden variables are strongly correlated with Gene Ontology categories. The software including source code is available at http://grnl1.codeplex.com.
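A hedged sketch of the sparse (L1-regularized) linear regression at the core of such a model, fitted here with plain iterative soft-thresholding (ISTA) on synthetic data; the hidden-variable machinery and real expression data of the paper are omitted:

```python
import numpy as np

# Synthetic regression where only 2 of 10 "transcription factors" matter;
# the L1 penalty drives the irrelevant coefficients to (near) zero.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[0], beta_true[7] = 2.0, -3.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 0.1
L = np.linalg.eigvalsh(X.T @ X / n).max()   # Lipschitz constant of the gradient
b = np.zeros(p)
for _ in range(500):                        # ISTA iterations
    grad = X.T @ (X @ b - y) / n
    b = b - grad / L
    b = np.sign(b) * np.maximum(np.abs(b) - lam / L, 0.0)  # soft threshold
print(np.round(b, 2))                        # only entries 0 and 7 are large
```

The recovered support matches the truth: the penalty performs the variable selection that makes the approach workable when parameters outnumber experiments.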
Handling geophysical flows: Numerical modelling using Graphical Processing Units
Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario
2016-04-01
Computational tools may help engineers in the assessment of sediment transport during decision-making processes. The main requirements are that the numerical results have to be accurate and the simulation models must be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The accuracy of the resulting numerical model was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model, as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem. Computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) for decreasing the simulation time significantly [3, 4]. The numerical scheme implemented on the GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the graphical hardware technology are compared against single-core (sequential) and multi-core (parallel) CPU implementations. References: [Juez et al. (2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014) A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources 71, 93-109. [Juez et al. (2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013) 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics 225, 166-204. [Lacasta et al. (2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014) An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software 78, 1-15. [Lacasta
Infants' Cross-modal Transfer from Solid Objects to Their Graphic Representations.
Rose, Susan A.; And Others
1983-01-01
In three studies, 12-month-old infants were familiarized either tactually or visually with objects and were then tested for visual recognition memory using either (1) the familiar and a novel object, (2) colored pictures of the objects, or (3) outline drawings of the objects. (Author/MP)
A Statistical Graphical Model of the California Reservoir System
Taeb, A.; Reager, J. T.; Turmon, M.; Chandrasekaran, V.
2017-11-01
The recent California drought has highlighted the potential vulnerability of the state's water management infrastructure to multiyear dry intervals. Due to the high complexity of the network, dynamic storage changes in California reservoirs on a state-wide scale have previously been difficult to model using either traditional statistical or physical approaches. Indeed, although there is a significant line of research on exploring models for single (or a small number of) reservoirs, these approaches are not amenable to a system-wide modeling of the California reservoir network due to the spatial and hydrological heterogeneities of the system. In this work, we develop a state-wide statistical graphical model to characterize the dependencies among a collection of 55 major California reservoirs across the state; this model is defined with respect to a graph in which the nodes index reservoirs and the edges specify the relationships or dependencies between reservoirs. We obtain and validate this model in a data-driven manner based on reservoir volumes over the period 2003-2016. A key feature of our framework is a quantification of the effects of external phenomena that influence the entire reservoir network. We further characterize the degree to which physical factors (e.g., state-wide Palmer Drought Severity Index (PDSI), average temperature, snow pack) and economic factors (e.g., consumer price index, number of agricultural workers) explain these external influences. As a consequence of this analysis, we obtain a system-wide health diagnosis of the reservoir network as a function of PDSI.
POMP - Pervasive Object Model Project
DEFF Research Database (Denmark)
Schougaard, Kari Rye; Schultz, Ulrik Pagh
The focus on mobile devices is continuously increasing, and improved device connectivity enables the construction of pervasive computing systems composed of heterogeneous collections of devices. Users who employ different devices throughout their daily activities naturally expect their applications...... computing environment. This system, named POM (Pervasive Object Model), supports applications split into coarse-grained, strongly mobile units that communicate using method invocations through proxies. We are currently investigating efficient execution of mobile applications, scalability to suit...
Graphical User Interface for Simulink Integrated Performance Analysis Model
Durham, R. Caitlyn
2009-01-01
The J-2X engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for its Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that, under all reasonable conditions, the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered for all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, saving the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow values to be input as parameters to the Simulink model without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink model, and retrieve the output from the Simulink model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, from the output values of each test of the simulation so that they may be graphed and compared to other values.
A Gaussian graphical model approach to climate networks
Energy Technology Data Exchange (ETDEWEB)
Zerenner, Tanja, E-mail: tanjaz@uni-bonn.de [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Friederichs, Petra; Hense, Andreas [Meteorological Institute, University of Bonn, Auf dem Hügel 20, 53121 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53119 Bonn (Germany)
2014-06-15
Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches that infer networks from climate data without regard to any physical processes may involve simplifications too strong to describe the dynamics of the climate system appropriately.
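The distinction between Pearson and partial correlations that drives the GGM construction can be sketched in a few lines. The chain example below is illustrative, not from the paper: variable 1 influences 3 only through 2, so the Pearson correlation between 1 and 3 is sizable while the partial correlation vanishes.

```python
import numpy as np

# Edges of a Gaussian graphical model are the non-zero entries of the
# precision (inverse covariance) matrix K. The partial correlation between
# variables i and j given all others is  rho_ij = -K_ij / sqrt(K_ii * K_jj).

rng = np.random.default_rng(0)

# A chain x1 -> x2 -> x3: x1 and x3 are marginally correlated
# but conditionally independent given x2.
n = 50_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
x3 = 0.8 * x2 + rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

K = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(K))
partial = -K / np.outer(d, d)      # off-diagonal entries: partial correlations
np.fill_diagonal(partial, 1.0)

pearson = np.corrcoef(X, rowvar=False)

# Pearson sees an indirect 1-3 dependency; the partial correlation does not.
print(pearson[0, 2].round(2), partial[0, 2].round(2))
```

Thresholding the partial-correlation matrix then yields only direct edges, which is the GGM network the abstract contrasts with the usual Pearson-based construction.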
Gaussian graphical modeling reveals specific lipid correlations in glioblastoma cells
Mueller, Nikola S.; Krumsiek, Jan; Theis, Fabian J.; Böhm, Christian; Meyer-Bäse, Anke
2011-06-01
Advances in high-throughput measurements of biological specimens necessitate the development of biologically driven computational techniques. To understand many human diseases, such as cancer, at the molecular level, lipid quantifications have been shown to offer an excellent opportunity to reveal disease-specific regulations. The data analysis of the cell lipidome, however, remains a challenging task and cannot be accomplished solely by intuitive reasoning. We have developed a method to identify a lipid correlation network which is entirely disease-specific. A powerful method to correlate experimentally measured lipid levels across the various samples is a Gaussian Graphical Model (GGM), which is based on partial correlation coefficients. In contrast to regular Pearson correlations, partial correlations aim to identify only direct correlations while eliminating indirect associations. Conventional GGM calculations on the entire dataset cannot, however, indicate whether a correlation is truly disease-specific, i.e. present in the disease samples but not in the control samples. Thus, we implemented a novel differential GGM approach unraveling only the disease-specific correlations, and applied it to the lipidome of immortal glioblastoma tumor cells. A large set of lipid species was measured by mass spectrometry in order to evaluate lipid remodeling in response to a combination of cell perturbations inducing programmed cell death, while the other perturbations served solely as biological controls. With the differential GGM, we were able to reveal glioblastoma-specific lipid correlations to advance biomedical research on novel gene therapies.
Hoss, Frauke; London, Alex John
2016-12-01
This paper presents a proof of concept for a graphical models approach to assessing the moral coherence and moral robustness of systems of social interactions. "Moral coherence" refers to the degree to which the rights and duties of agents within a system are effectively respected when agents in the system comply with the rights and duties that are recognized as in force for the relevant context of interaction. "Moral robustness" refers to the degree to which a system of social interaction is configured to ensure that the interests of agents are effectively respected even in the face of noncompliance. Using the case of conscientious objection of pharmacists to filling prescriptions for emergency contraception as an example, we illustrate how a graphical models approach can help stakeholders identify structural weaknesses in systems of social interaction and evaluate the relative merits of alternate organizational structures. By illustrating the merits of a graphical models approach we hope to spur further developments in this area.
A Graphical Proof of the Positive Entropy Change in Heat Transfer between Two Objects
Kiatgamolchai, Somchai
2015-01-01
It is well known that heat transfer between two objects results in a positive change in the total entropy of the two-object system. The second law of thermodynamics states that the entropy change of a naturally irreversible process is positive. In other words, if the entropy change of any process is positive, it can be inferred that such a process…
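For the special case of two bodies large enough to act as thermal reservoirs at fixed temperatures $T_h > T_c$ (a simplification of the general graphical argument in the paper), the sign of the entropy change follows in one line for heat $Q > 0$ flowing from hot to cold:

```latex
\Delta S \;=\; -\frac{Q}{T_h} + \frac{Q}{T_c} \;=\; Q\,\frac{T_h - T_c}{T_h T_c} \;>\; 0 .
```

The graphical proof generalizes this to objects whose temperatures change during the transfer, where the two terms become integrals over the heating and cooling paths.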
BDgraph: An R Package for Bayesian Structure Learning in Graphical Models
Mohammadi, A.; Wit, E.C.
2017-01-01
Graphical models provide powerful tools to uncover complicated patterns in multivariate data and are commonly used in Bayesian statistics and machine learning. In this paper, we introduce an R package BDgraph which performs Bayesian structure learning for general undirected graphical models with
Ferromanganese Furnace Modelling Using Object-Oriented Principles
Energy Technology Data Exchange (ETDEWEB)
Wasboe, S.O.
1996-12-31
This doctoral thesis defines an object-oriented framework for aiding unit-process modelling and applies it to model high-carbon ferromanganese furnaces. The framework aids modelling of the internal topology and the phenomena taking place inside unit processes. Complex unit processes may consist of a number of zones where different phenomena take place. A topology is therefore defined for the unit process itself, which shows the relations between the zones. Inside each zone there is a set of chemical species and phenomena, such as reactions, phase transitions, heat transfer, etc. A formalized graphical methodology is developed as a tool for modelling these zones and their interactions. The symbols defined in the graphical framework are associated with objects and classes. The rules for linking the objects are described using OMT (Object Modeling Technique) diagrams and formal language formulations. The basic classes are implemented in the C++ programming language. The ferromanganese process is a complex unit process. A general description of the process equipment is given, together with a detailed discussion of the process itself and a system-theoretical overview of it. The object-oriented framework is then used to develop a dynamic model based on mass and energy balances. The model is validated by measurements from an industrial furnace. 101 refs., 119 figs., 20 tabs.
Rule Based Simulation of Simple Object Grasping on a Graphical Manikin
VERRIEST, JP
2009-01-01
The objective of this work is to identify rules for the simulation of grasping movements from the analysis of a database of experimentally recorded movements. A database of grasping movements for simple objects (cube, cylinder and sphere) ranging from 40 to 80 mm, with different grasping modes (precision and force), was constituted through an experiment with seven right-handed volunteers. The movements were recorded by means of a motion capture system (VICON®) tracking surface mar...
Shimizu, Y; Saida, S; Shimura, H
1993-01-01
Haptic recognition of familiar objects by the early blind, the late blind, and the sighted was investigated with two-dimensional (2-D) and three-dimensional (3-D) stimuli produced by small tactor-pins. The 2-D stimulus was an outline of an object that was depicted by raising tactor-pins to 1.5 mm. The 3-D stimulus was a relief that was produced by raising the tactors up to 10 mm, corresponding to the height of the object. Mean recognition times for correct answers to the 3-D stimuli were faster than those for the 2-D stimuli, in all three subject groups. No statistically significant differences in percentage of correct responses between the 2-D and the 3-D stimuli were found for the late-blind and sighted groups, but the early-blind group demonstrated a significant difference. In addition, the haptic legibility for the quality of depiction of the object, without regard to whether or not the stimulus was understood, was measured. The haptic legibility of the 3-D stimuli was significantly higher than that of the 2-D stimuli for all the groups. These results suggest that 3-D presentation seems to promise a way to overcome the limitations of 2-D graphic display.
Graphics-based intelligent search and abstracting using Data Modeling
Jaenisch, Holger M.; Handley, James W.; Case, Carl T.; Songy, Claude G.
2002-11-01
This paper presents an autonomous text and context-mining algorithm that converts text documents into point clouds for visual search cues. This algorithm is applied to the task of data-mining a scriptural database comprised of the Old and New Testaments from the Bible and the Book of Mormon, Doctrine and Covenants, and the Pearl of Great Price. Results are generated which graphically show the scripture that represents the average concept of the database and the mining of the documents down to the verse level.
Stochastic Analysis of a Queue Length Model Using a Graphics Processing Unit
Czech Academy of Sciences Publication Activity Database
Přikryl, Jan; Kocijan, J.
2012-01-01
Vol. 5, No. 2 (2012), pp. 55-62. ISSN 1802-971X. R&D Projects: GA MŠk(CZ) MEB091015. Institutional support: RVO:67985556. Keywords: graphics processing unit * GPU * Monte Carlo simulation * computer simulation * modeling. Subject RIV: BC - Control Systems Theory. http://library.utia.cas.cz/separaty/2012/AS/prikryl-stochastic analysis of a queue length model using a graphics processing unit.pdf
Unsupervised Modeling of Objects and Their Hierarchical Contextual Interactions
Directory of Open Access Journals (Sweden)
Tsuhan Chen
2009-01-01
A successful representation of objects in the literature is as a collection of patches, or parts, with a certain appearance and position. The relative locations of the different parts of an object are constrained by the geometry of the object. Going beyond a single object, consider a collection of images of a particular scene category containing multiple (recurring) objects. The parts belonging to different objects are not constrained by such a geometry. However, the objects themselves, arguably due to their semantic relationships, demonstrate a pattern in their relative locations. Hence, analyzing the interactions among the parts across the collection of images can allow for extraction of the foreground objects, and analyzing the interactions among these objects can allow for a semantically meaningful grouping of these objects, which characterizes the entire scene. These groupings are typically hierarchical. We introduce the hierarchical semantics of objects (hSO) that captures this hierarchical grouping. We propose an approach for the unsupervised learning of the hSO from a collection of images of a particular scene. We also demonstrate the use of the hSO in providing context for enhanced object localization in the presence of significant occlusions, and show its superior performance over a fully connected graphical model for the same task.
Object feature extraction and recognition model
International Nuclear Information System (INIS)
Wan Min; Xiang Rujian; Wan Yongxing
2001-01-01
The characteristics of objects, especially flying objects, are analyzed, which include characteristics of spectrum, image and motion. Feature extraction is also achieved. To improve the speed of object recognition, a feature database is used to simplify the data in the source database. The feature vs. object relationship maps are stored in the feature database. An object recognition model based on the feature database is presented, and the way to achieve object recognition is also explained
On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations
DEFF Research Database (Denmark)
Johansen, Søren; Swensen, Anders Rygh
Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...
Object Oriented Modeling Of Social Networks
Zeggelink, Evelien P.H.; Oosten, Reinier van; Stokman, Frans N.
1996-01-01
The aim of this paper is to explain principles of object oriented modeling in the scope of modeling dynamic social networks. As such, the approach of object oriented modeling is advocated within the field of organizational research that focuses on networks. We provide a brief introduction into the
Lunar-Forming Giant Impact Model Utilizing Modern Graphics ...
Indian Academy of Sciences (India)
impact theories are being questioned due to their inability to find condi- … energy transfer. Each impactor is composed of two types of elements: silicate material and iron (Canup 2012). Silicate and iron elements are allowed to have … Most objects in our solar system reside in the ecliptic plane and orbit the Sun in the.
Graphics metafile interface to ARAC emergency response models for remote workstation study
International Nuclear Information System (INIS)
Lawver, B.S.
1985-01-01
The Department of Energy's Atmospheric Release Advisory Capability (ARAC) models are executed on computers at a central computer center, with the output distributed to accident advisors in the field. The output of these atmospheric diffusion models is generated as contoured isopleths of concentrations. When these isopleths are overlaid with local geography, they become a useful tool for the accident-site advisor. ARAC has developed a workstation that is located at potential accident sites. The workstation allows the accident advisor to view color plots of the model results, scale those plots, and print black-and-white hardcopy of the model results. The graphics metafile, also known as a Virtual Device Metafile (VDM), allows the models to generate a single device-independent output file that is partitioned into geography, isopleths, and labeling information. The metafile is a very compact data-storage technique that is output-device independent. The metafile frees the model from either generating output for all known graphic devices or being rerun for additional graphic devices. With the partitioned metafile, ARAC can transmit to the remote workstation the isopleths and labeling for each model. The geography database may not change and can be transmitted only when needed. This paper describes the important features of the remote workstation and how these features are supported by the device-independent graphics metafile.
A Graphical Model for Risk Analysis and Management
Wang, Xun; Williams, Mary-Anne
Risk analysis and management are important capabilities in intelligent information and knowledge systems. We present a new approach using directed-graph-based models for risk analysis and management. Our modelling approach is inspired by and builds on the two-level approach of the Transferable Belief Model: a credal level, used for risk analysis and model construction from beliefs in causal inference relations among the variables within a domain, and a pignistic (betting) level for decision making. The risk model at the credal level can be transformed into a probabilistic model through a pignistic transformation function. This paper focuses on model construction at the credal level. Our modelling approach captures expert knowledge in a formal and iterative fashion based on the Open World Assumption (OWA), in contrast to Bayesian-network-based approaches for managing uncertainty associated with risks, which assume all the domain knowledge and data have been captured beforehand. As a result, our approach does not require complete knowledge and is well suited to modelling risk in dynamic, changing environments where information and knowledge are gathered over time as decisions need to be taken. Its performance is related to the quality of the knowledge at hand at any given time.
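The credal-to-pignistic transformation mentioned above has a standard closed form in the Transferable Belief Model: each focal set's mass is shared equally among its elements (after discounting any mass on the empty set, which the Open World Assumption permits). The sketch below is illustrative; the risk states and mass values are hypothetical, not from the paper.

```python
# Hedged sketch of the TBM pignistic transformation:
#   BetP(x) = sum over focal sets A containing x of m(A) / (|A| * (1 - m(∅)))

def pignistic(masses):
    """masses: dict mapping frozenset (focal set) -> belief mass."""
    empty_mass = masses.get(frozenset(), 0.0)
    betp = {}
    for focal, mass in masses.items():
        if not focal:
            continue  # mass on the empty set is renormalised away
        share = mass / (len(focal) * (1.0 - empty_mass))
        for element in focal:
            betp[element] = betp.get(element, 0.0) + share
    return betp

# Beliefs about a single (hypothetical) risk variable with frame {"high", "low"}.
m = {
    frozenset({"high"}): 0.5,
    frozenset({"high", "low"}): 0.4,  # undecided mass
    frozenset({"low"}): 0.1,
}
print(pignistic(m))  # high gets 0.5 + 0.4/2, low gets 0.1 + 0.4/2
```

The resulting probabilities are then usable for the betting-level decision making the abstract describes.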
On a Graphical Technique for Evaluating Some Rational Expectations Models
DEFF Research Database (Denmark)
Johansen, Søren; Swensen, Anders R.
2011-01-01
In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...
Graphical means for inspecting qualitative models of system behaviour
Bouwer, A.; Bredeweg, B.
2010-01-01
This article presents the design and evaluation of a tool for inspecting conceptual models of system behaviour. The basis for this research is the Garp framework for qualitative simulation. This framework includes modelling primitives, such as entities, quantities and causal dependencies, which are
Context-specific graphical models for discrete longitudinal data
DEFF Research Database (Denmark)
Edwards, David; Anantharama Ankinakatte, Smitha
2015-01-01
Ron et al. (1998) introduced a rich family of models for discrete longitudinal data called acyclic probabilistic finite automata. These may be represented as directed graphs that embody context-specific conditional independence relations. Here, the approach is developed from a statistical...... perspective. It is shown here that likelihood ratio tests may be constructed using standard contingency table methods, a model selection procedure that minimizes a penalized likelihood criterion is described, and a way to extend the models to incorporate covariates is proposed. The methods are applied...
The Composite OLAP-Object Data Model
Energy Technology Data Exchange (ETDEWEB)
Pourabbas, Elaheh; Shoshani, Arie
2005-12-07
In this paper, we define an OLAP-Object model that combines the main characteristics of OLAP and Object data models in order to achieve their functionalities in a common framework. We classify three different object classes: primitive, regular and composite. Then, we define a query language which uses the path concept in order to facilitate data navigation and manipulation. The main feature of the proposed language is an anchor, which allows us to dynamically fix an object class (primitive, regular or composite) along the paths over the OLAP-Object data model when expressing queries. Queries can be formulated on objects, on composite objects, and on combinations of both. The power of the proposed query language is investigated through multiple query examples. The semantics of the different clauses and the syntax of the proposed language are also investigated.
A Local Poisson Graphical Model for inferring networks from sequencing data.
Allen, Genevera I; Liu, Zhandong
2013-09-01
Gaussian graphical models, a class of undirected graphs or Markov Networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1 penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research.
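The neighborhood-selection idea behind the Local Poisson Graphical Model can be sketched compactly: each node's counts are regressed on all other nodes with an l1-penalized Poisson (log-linear) regression, and the non-zero coefficients define that node's neighborhood. The fitting routine below is a plain proximal-gradient (ISTA) loop written for illustration; names, tuning constants, and the simulated data are assumptions, not taken from the paper.

```python
import numpy as np

def poisson_lasso(X, y, lam, lr=0.01, iters=5000):
    """Minimise (1/n) * sum(exp(X@b) - y * (X@b)) + lam * ||b||_1 by ISTA."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = np.exp(X @ beta)                    # Poisson mean under log link
        beta = beta - lr * (X.T @ (mu - y) / n)  # gradient step on the NLL
        # soft-threshold: the proximal operator of the l1 penalty
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)
    return beta

rng = np.random.default_rng(1)
n = 500
Z = rng.normal(size=(n, 3))                # expression of 3 other "genes"
y = rng.poisson(np.exp(0.8 * Z[:, 0]))     # this node depends on gene 0 only

beta = poisson_lasso(Z, y, lam=0.2)
print(beta.round(2))                       # only the first coefficient is sizable
```

Running one such regression per node, in parallel, is what makes the full network estimate fast for sequencing-scale data.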
Heckbert, Paul S
1994-01-01
Graphics Gems IV contains practical techniques for 2D and 3D modeling, animation, rendering, and image processing. The book presents articles on polygons and polyhedral; a mix of formulas, optimized algorithms, and tutorial information on the geometry of 2D, 3D, and n-D space; transformations; and parametric curves and surfaces. The text also includes articles on ray tracing; shading 3D models; and frame buffer techniques. Articles on image processing; algorithms for graphical layout; basic interpolation methods; and subroutine libraries for vector and matrix algebra are also demonstrated. Com
1990-01-01
A mathematician, David R. Hedgley, Jr., developed a computer program that determines whether a line in a graphic model of a three-dimensional object should or should not be visible. Known as the Hidden Line Computer Code, the program automatically removes superfluous lines and displays an object from a specific viewpoint, just as the human eye would see it. An example of how one company uses the program is the experience of Birdair, which specializes in the production of fabric skylights and stadium covers. The fabric, called SHEERFILL, is a Teflon-coated fiberglass material developed in cooperation with the DuPont Company. SHEERFILL glazed structures are either tension structures or air-supported tension structures. Both are formed by patterned fabric sheets supported by a steel or aluminum frame or cable network. Birdair uses the Hidden Line Computer Code to illustrate a prospective structure to an architect or owner. The program generates a three-dimensional perspective with the hidden lines removed. This program is still used by Birdair and continues to be commercially available to the public.
Lunar-Forming Giant Impact Model Utilizing Modern Graphics ...
Indian Academy of Sciences (India)
2016-01-27
Recent giant impact models focus on producing a circumplanetary disk of the proper composition around the Earth and defer to earlier works for the accretion of this disk into the Moon. The discontinuity between creating the circumplanetary disk and accretion of the Moon is unnatural and lacks simplicity.
A graphical vector autoregressive modelling approach to the analysis of electronic diary data
Directory of Open Access Journals (Sweden)
Zipfel Stephan
2010-04-01
Background: In recent years, electronic diaries have increasingly been used in medical research and practice to investigate patients' processes and fluctuations in symptoms over time. To model dynamic dependence structures and feedback mechanisms between symptom-relevant variables, a multivariate time series method has to be applied. Methods: We propose to analyse the temporal interrelationships among the variables by a structural modelling approach based on graphical vector autoregressive (VAR) models. We give a comprehensive description of the underlying concepts and explain how the dependence structure can be recovered from electronic diary data by a search over suitably constrained (graphical) VAR models. Results: The graphical VAR approach is applied to the electronic diary data of 35 obese patients with and without binge eating disorder (BED). The dynamic relationships between eating behaviour, depression, anxiety and eating control for the two subgroups are visualized in two path diagrams. Results show that the two subgroups of obese patients with and without BED are distinguishable by the temporal patterns which influence their respective eating behaviours. Conclusion: The use of the graphical VAR approach for the analysis of electronic diary data leads to a deeper insight into patients' dynamics and dependence structures. An increasing use of this modelling approach could lead to a better understanding of complex psychological and physiological mechanisms in different areas of medical care and research.
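The first step of such a VAR analysis can be sketched in a few lines: fit a VAR(1) model y_t = A y_{t-1} + e_t by least squares, then read candidate directed "temporal edges" for a path diagram off the large entries of A. The variable count mirrors the diary setting, but the data, coefficients, and threshold below are simulated assumptions, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(2)
T, k = 2000, 3                      # time points, variables (e.g. eating, anxiety, control)
A_true = np.array([[0.5, 0.3, 0.0],
                   [0.0, 0.4, 0.0],
                   [0.0, 0.0, 0.2]])

# Simulate a stable VAR(1) process.
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + rng.normal(scale=0.5, size=k)

# Least-squares estimate: solve Y[:-1] @ W ≈ Y[1:], so A_hat = W.T.
A_hat = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)[0].T

edges = np.argwhere(np.abs(A_hat) > 0.15)   # crude threshold for a path diagram
print(A_hat.round(2))
```

A graphical VAR search replaces the crude thresholding with statistically constrained model selection, but the fitted transition matrix is the same underlying object the path diagrams visualize.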
Mathematical models for gas transportation simulation and interactive graphics
Energy Technology Data Exchange (ETDEWEB)
Giannini, L.; Marinetti, A.
1988-01-01
This paper describes a simulation system suitable for a wide range of applications: (1) short-, medium- and long-term planning; (2) pipeline design; (3) modelling research; (4) dispatching planning; (5) training. The system may furthermore be used for the same purpose by both skilled and non-skilled personnel, operating in different ways. In view of this variety of outlooks regarding the system, an integrated software package was found to be necessary in order to manage multiple simulations of different networks for varying applications. The mathematical model, which forms the basis of the system, uses the complete formulation of the 1-D gas-dynamics equations: the continuity equation, the momentum conservation equation and the energy conservation equation. These three equations form a system of quasi-linear partial differential equations which are resolved numerically. Multi-windowing techniques are used together with cooperative process techniques (organized as detached processes or hierarchical trees) in order to reduce response times.
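In one common form (the closure terms for wall friction and heat exchange are assumptions here, not taken from the paper), the three 1-D equations read:

```latex
\begin{aligned}
&\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} = 0,\\
&\frac{\partial (\rho u)}{\partial t} + \frac{\partial (\rho u^2 + p)}{\partial x}
  = -\frac{f}{2D}\,\rho u \lvert u\rvert - \rho g \sin\theta,\\
&\frac{\partial (\rho E)}{\partial t} + \frac{\partial}{\partial x}\bigl(u(\rho E + p)\bigr) = \omega,
\end{aligned}
```

where $f$ is a friction factor, $D$ the pipe diameter, $\theta$ the pipe inclination, $E$ the total specific energy, and $\omega$ the heat exchanged with the surroundings per unit volume and time. Their quasi-linear structure is what the paper's numerical scheme exploits.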
Model Verification and Validation Using Graphical Information Systems Tools
2013-07-31
coastal ocean sufficiently to have a complete picture of the flow. The analysis will thus consist of comparing these incomplete pictures of the current...50 cm. This would suggest that tidal flats would exist at synoptic scales but not daily because there are expanses of the lagoon that are < 50 cm...historical daily data from the correct time of year but not from the correct day. This indicates that the model flow is generally correct at synoptic
Counterfactual Graphical Models for Longitudinal Mediation Analysis with Unobserved Confounding
Shpitser, Ilya
2012-01-01
Questions concerning mediated causal effects are of great interest in psychology, cognitive science, medicine, social science, public health, and many other disciplines. For instance, about 60% of recent papers published in leading journals in social psychology contain at least one mediation test (Rucker, Preacher, Tormala, & Petty, 2011). Standard parametric approaches to mediation analysis employ regression models, and either the "difference method" (Judd & Kenny, 1981), more common...
A Practical Probabilistic Graphical Modeling Tool for Weighing ...
Past weight-of-evidence frameworks for adverse ecological effects have provided soft-scoring procedures for judgments based on the quality and measured attributes of evidence. Here, we provide a flexible probabilistic structure for weighing and integrating lines of evidence for ecological risk determinations. Probabilistic approaches can provide both a quantitative weighing of lines of evidence and methods for evaluating risk and uncertainty. The current modeling structure was developed for propagating uncertainties in measured endpoints and their influence on the plausibility of adverse effects. To illustrate the approach, we apply the model framework to the sediment quality triad, using example lines of evidence for sediment chemistry measurements, bioassay results, and in situ infauna diversity of benthic communities in a simplified hypothetical case study. We then combine the three lines of evidence, evaluate sensitivity to the input parameters, and show how uncertainties are propagated and how additional information can be incorporated to rapidly update the probability of impacts. The developed network model can be expanded to accommodate additional lines of evidence, variables and states of importance, and different types of uncertainties in the lines of evidence, including spatial and temporal variation as well as measurement errors. We provide a flexible Bayesian network structure for weighing and integrating lines of evidence for ecological risk determinations.
Copula Gaussian graphical models with penalized ascent Monte Carlo EM algorithm
Abegaz, Fentaw; Wit, Ernst
2015-01-01
Typical data that arise from surveys, experiments, and observational studies include continuous and discrete variables. In this article, we study the interdependence among a mixed (continuous, count, ordered categorical, and binary) set of variables via graphical models. We propose an ℓ1-penalized
Oishi, Makoto; Fukuda, Masafumi; Hiraishi, Tetsuya; Yajima, Naoki; Sato, Yosuke; Fujii, Yukihiko
2012-09-01
The purpose of this paper is to report on the authors' advanced presurgical interactive virtual simulation technique using a 3D computer graphics model for microvascular decompression (MVD) surgery. The authors performed interactive virtual simulation prior to surgery in 26 patients with trigeminal neuralgia or hemifacial spasm. The 3D computer graphics models for interactive virtual simulation were composed of the brainstem, cerebellum, cranial nerves, vessels, and skull, individually created by image analysis, including segmentation, surface rendering, and data fusion, for data collected by 3-T MRI and 64-row multidetector CT systems. Interactive virtual simulation was performed by employing novel computer-aided design software with manipulation of a haptic device to imitate the surgical procedures of bone drilling and retraction of the cerebellum. The findings were compared with intraoperative findings. In all patients, interactive virtual simulation provided detailed and realistic surgical perspectives of sufficient quality, representing the lateral suboccipital route. The causes of trigeminal neuralgia or hemifacial spasm determined by observing 3D computer graphics models were concordant with those identified intraoperatively in 25 (96%) of 26 patients, which was a significantly higher rate than the 73% concordance rate (concordance in 19 of 26 patients) obtained by review of 2D images only. The 3D computer graphics model provided a realistic environment for performing virtual simulations prior to MVD surgery and enabled us to ascertain complex microsurgical anatomy.
Learning models of activities involving interacting objects
DEFF Research Database (Denmark)
Manfredotti, Cristina; Pedersen, Kim Steenstrup; Hamilton, Howard J.
2013-01-01
We propose the LEMAIO multi-layer framework, which makes use of hierarchical abstraction to learn models for activities involving multiple interacting objects from time sequences of data concerning the individual objects. Experiments in the sea navigation domain yielded learned models that were then successfully applied to activity recognition, activity simulation and multi-target tracking. Our method compares favourably with respect to previously reported results using Hidden Markov Models and Relational Particle Filtering.
Scaling-up spatially-explicit ecological models using graphics processors
Koppel, Johan van de; Gupta, Rohit; Vuik, Cornelis
2011-01-01
How the properties of ecosystems relate to spatial scale is a prominent topic in current ecosystem research. Despite this, spatially explicit models typically include only a limited range of spatial scales, mostly because of computing limitations. Here, we describe the use of graphics processors to efficiently solve spatially explicit ecological models at large spatial scale using the CUDA language extension. We explain this technique by implementing three classical models of spatial self-org...
3D Design and Modeling of Smart Cities from a Computer Graphics Perspective
Aliaga, Daniel G.
2012-01-01
Modeling cities, and urban spaces in general, is a daring task for computer graphics, computer vision, and visualization. Understanding, describing, and modeling the geometry and behavior of cities are significant challenges that ultimately benefit urban planning and simulation, mapping and visualization, emergency response, and entertainment. In this paper, we have collected and organized research which addresses this multidisciplinary challenge. In particular, we divide research in modeling...
Energy Technology Data Exchange (ETDEWEB)
Buck, C.D.; Coy, M.E.
1983-01-01
This paper describes the Financial Graphics System developed at Sandia National Laboratories (Sandia), the operation of the system, and the future plans for financial graphics at Sandia. Design objectives for the system were to: provide a means for producing graphs on demand by the Comptroller's staff, which would decrease the elapsed time required and reduce direct programming effort; improve Sandia's capabilities for using graphics as a tool for management reporting through evaluation and testing of graphic equipment and software; provide a graphic support system which would be compatible with other company financial reporting tools; and integrate financial statistical analysis with the graphics system.
Moving objects management models, techniques and applications
Meng, Xiaofeng; Xu, Jiajie
2014-01-01
This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.
DEFF Research Database (Denmark)
Spataru, Sergiu; Sera, Dezso; Kerekes, Tamas
2012-01-01
This paper presents a set of laboratory tools aimed to support students with various backgrounds (no programming experience) in understanding photovoltaic array modelling and characterization techniques. A graphical user interface (GUI) has been developed in Matlab for modelling PV arrays and characterizing the effect of different types of parameters and operating conditions on the current-voltage and power-voltage curves. The GUI is supported by experimental investigation and validation on PV module level, with the help of an indoor flash solar simulator.
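For context, the current-voltage behaviour such a GUI characterizes is commonly captured by a single-diode PV model. The sketch below uses illustrative parameter values (photocurrent, saturation current, ideality factor, cell count) that are assumptions for the example, not those of the tool described, and it neglects series and shunt resistance so the equation can be evaluated explicitly:

```python
import numpy as np

def pv_current(v, i_ph=8.0, i_0=1e-7, n=1.3, cells=60, t=298.15):
    """Simplified single-diode PV model (illustrative parameters):
    I = I_ph - I_0 * (exp(V / (n * cells * Vt)) - 1),
    with series/shunt resistance neglected for an explicit solution."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = k * t / q                      # thermal voltage per cell [V]
    return i_ph - i_0 * (np.exp(v / (n * cells * vt)) - 1.0)

v = np.linspace(0.0, 40.0, 400)         # module voltage sweep [V]
i = np.clip(pv_current(v), 0.0, None)   # clip beyond open-circuit voltage
p = v * i                               # power-voltage curve
v_mpp = v[np.argmax(p)]                 # maximum power point voltage
print(f"Voc ~ {v[i > 0][-1]:.1f} V, Vmpp ~ {v_mpp:.1f} V, Pmax ~ {p.max():.0f} W")
```

Sweeping temperature or irradiance (via `i_ph`) in such a model reproduces the curve shifts a student would explore in the GUI.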
Modeling And Simulation As The Basis For Hybridity In The Graphic Discipline Learning/Teaching Area
Directory of Open Access Journals (Sweden)
Jana Žiljak Vujić
2009-01-01
Full Text Available Only some fifteen years have passed since the scientific graphics discipline was established. In the transition from the College of Graphics through «Integrated Graphic Technology Studies» to the contemporary Faculty of Graphic Arts at the University of Zagreb, three main periods of development can be noted: digital printing, computer prepress and automatic procedures in postpress packaging production. Computer technology has enabled a change in the methodology of teaching and studying graphics technology at the secondary and higher education levels. The task has been set to create tools for simulating printing processes so that the programme can be mastered through a hybrid system consisting of two mutually separate methods: learning with the help of digital models and verification in the actual real system. We are setting up a hybrid teaching project because the overall acquired knowledge is the result of completely different methods. The first method operates on the level of free programs, functioning without consequences. Everything remains as a record in the knowledge database that can be analyzed, statistically processed and repeated with new parameter values of the system being researched. The second method uses the actual real system, where the results prove the value of new knowledge, which encourages and stimulates new cycles of hybrid behaviour in mastering the programme. This is the area where individual learning occurs. The hybrid method allows studying actual situations on a computer model, proving them on an actual real model, and entering an area of learning that envisages future development.
New Graphical Model for Computing Optimistic Decisions in Possibility Theory Framework
Ismahane Zeddigha; Salem Benferhat; Faiza Khellaf
2016-01-01
This paper first proposes a new graphical model for decision making under uncertainty based on min-based possibilistic networks. A decision problem under uncertainty is described by means of two distinct min-based possibilistic networks: the first one expresses agent's knowledge while the second one encodes agent's preferences representing a qualitative utility. We then propose an efficient algorithm for computing optimistic optimal decisions using our new model for representing possibilistic...
International Nuclear Information System (INIS)
Paćko, P; Bielak, T; Staszewski, W J; Uhl, T; Spencer, A B; Worden, K
2012-01-01
This paper demonstrates new parallel computation technology and an implementation for Lamb wave propagation modelling in complex structures. A graphical processing unit (GPU) and computer unified device architecture (CUDA), available in low-cost graphical cards in standard PCs, are used for Lamb wave propagation numerical simulations. The local interaction simulation approach (LISA) wave propagation algorithm has been implemented as an example. Other algorithms suitable for parallel discretization can also be used in practice. The method is illustrated using examples related to damage detection. The results demonstrate good accuracy and effective computational performance of very large models. The wave propagation modelling presented in the paper can be used in many practical applications of science and engineering. (paper)
Cadastral Modeling:Grasping the objectives
Stubkjær, Erik
2005-01-01
Modeling is a term that refers to a variety of efforts, including data and process modeling. The domain to be modeled may be a department, an organization, or even an industrial sector. E-business presupposes the modeling of an industrial sector, a substantial task. Cadastral modeling compares to the modeling of an industrial sector, as it aims at rendering the basic concepts that relate to the domain of real estate and the pertinent human activities. The palpable objects are pieces of land a...
Building Mathematical Models Of Solid Objects
Randall, Donald P.; Jones, Kennie H.; Von Ofenheim, William H.; Gates, Raymond L.; Matthews, Christine G.
1989-01-01
Solid Modeling Program (SMP) version 2.0 provides capability to model complex solid objects mathematically through aggregation of geometric primitives (parts). System provides designer with basic set of primitive parts and capability to define new primitives. Six primitives included in present version: boxes, cones, spheres, paraboloids, tori, and trusses. Written in VAX/VMS FORTRAN 77.
Extending Model Checking To Object Process Validation
van Rein, H.
2002-01-01
Object-oriented techniques allow the gathering and modelling of system requirements in terms of an application area. The expression of data and process models at that level is a great asset in communication with non-technical people in that area, but it does not necessarily lead to consistent
A general model of learning design objects
Directory of Open Access Journals (Sweden)
Azeddine Chikh
2014-01-01
Full Text Available Previous research on the development of learning objects has targeted either learners, as consumers of these objects, or instructors, as designers who reuse these objects in building new online courses. There is currently an urgent need for the sharing and reuse of both theoretical knowledge (literature reviews) and practical knowledge (best practice) in learning design. The primary aim of this paper is to develop a strategy for constructing a more powerful set of learning objects targeted at supporting instructors in designing their curricula. A key challenge in this work is the definition of a new class of learning design objects that combine two types of knowledge: (1) reusable knowledge, consisting of theoretical and practical information on education design, and (2) knowledge of reuse, which is necessary to describe the reusable knowledge using an extended learning object metadata language. In addition, we introduce a general model of learning design object repositories based on the Unified Modeling Language, and a learning design support framework is proposed based on the repository model. Finally, a first prototype is developed to provide a subjective evaluation of the new framework.
Looye, G.; Hecker, S.; Kier, T.; Reschke, C.
2005-01-01
In this paper a model component library for developing multi-disciplinary aircraft flight dynamics models is presented, named FlightDynLib. This library is based on the object-oriented modelling language Modelica that has been designed for modelling of large scale multi-physics systems. The flight dynamics library allows for graphical construction of complex rigid as well as flexible aircraft dynamics models and is fully compatible with other available libraries for electronics, thermodynamics...
The Gaussian Graphical Model in Cross-Sectional and Time-Series Data.
Epskamp, Sacha; Waldorp, Lourens J; Mõttus, René; Borsboom, Denny
2018-04-16
We discuss the Gaussian graphical model (GGM; an undirected network of partial correlation coefficients) and detail its utility as an exploratory data analysis tool. The GGM shows which variables predict one another, allows for sparse modeling of covariance structures, and may highlight potential causal relationships between observed variables. We describe the utility in three kinds of psychological data sets: data sets in which consecutive cases are assumed independent (e.g., cross-sectional data), temporally ordered data sets (e.g., n = 1 time series), and a mixture of the 2 (e.g., n > 1 time series). In time-series analysis, the GGM can be used to model the residual structure of a vector-autoregression analysis (VAR), also termed graphical VAR. Two network models can then be obtained: a temporal network and a contemporaneous network. When analyzing data from multiple subjects, a GGM can also be formed on the covariance structure of stationary means (the between-subjects network). We discuss the interpretation of these models and propose estimation methods to obtain these networks, which we implement in the R packages graphicalVAR and mlVAR. The methods are showcased in two empirical examples, and simulation studies on these methods are included in the supplementary materials.
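A minimal sketch of the GGM idea (not the estimation methods of the graphicalVAR or mlVAR packages): the partial correlation network can be read off the inverse sample covariance matrix, standardized and sign-flipped. The three-variable chain used for the demonstration is invented:

```python
import numpy as np

def partial_correlations(data):
    """Estimate a Gaussian graphical model by converting the inverse
    sample covariance (precision matrix) into partial correlations.
    Entry (i, j) is the correlation between variables i and j after
    conditioning on all remaining variables; near-zero entries mean
    no edge in the undirected network."""
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Invented chain x -> y -> z: x and z are linked only through y
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
y = x + rng.normal(scale=0.5, size=(500, 1))
z = y + rng.normal(scale=0.5, size=(500, 1))
pcor = partial_correlations(np.hstack([x, y, z]))
print(pcor.round(2))  # pcor[0, 2] is near zero: no direct x-z edge
```

In practice, as the abstract notes, sparse (penalized) estimation replaces the plain matrix inverse when the number of variables is large.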
Computer graphic study on models of the molybdenum cofactor of xanthine oxidase
Folkers, Gerd; Krug, Michael; Trumpp, Susanne
1987-04-01
Within the scope of our molecular modeling studies on xanthine oxidase (XOD) inhibition by purine analogs, we were interested in building a three-dimensional model of the molybdenum active site. Spectroscopic data indicated that a Mo(VI) atom which is coordinated to sulfur, oxygen and/or nitrogen is clearly involved in substrate binding. In the present study, those data and X-ray crystallography data were used to reconstruct molybdenum-organic complexes from models proposed in the literature. The computer graphic-assisted modeling and evaluation of the model complexes show that the description of the molybdenum center needs further refinement.
GRAPHICAL USER INTERFACE WITH APPLICATIONS IN SUSCEPTIBLE-INFECTIOUS-SUSCEPTIBLE MODELS.
Ilea, M; Turnea, M; Arotăriţei, D; Rotariu, Mariana; Popescu, Marilena
2015-01-01
Practical significance of understanding the dynamics and evolution of infectious diseases increases continuously in the contemporary world. The mathematical study of the dynamics of infectious diseases has a long history. By incorporating statistical methods and computer-based simulations in dynamic epidemiological models, it could be possible for modeling methods and theoretical analyses to be more realistic and reliable, allowing a more detailed understanding of the rules governing epidemic spreading. To provide the basis for a disease transmission, the population of a region is often divided into various compartments, and the model governing their relation is called the compartmental model. To present all of the information available, a graphical user interface provides icons and visual indicators. The graphical interface shown in this paper is implemented using the MATLAB software ver. 7.6.0. MATLAB software offers a wide range of techniques by which data can be displayed graphically. The process of data viewing involves a series of operations. To achieve it, I had to make three separate files, one for defining the mathematical model and two for the interface itself. Considering a fixed population, it is observed that the number of susceptible individuals diminishes along with an increase in the number of infectious individuals, so that in about ten days the numbers of infected and susceptible individuals, respectively, reach the same value. If the epidemic is not controlled, it will continue for an indefinite period of time. By changing the global parameters specific to the SIS model, a more rapid increase of infectious individuals is noted. Using the graphical user interface shown in this paper helps achieve a much easier interaction with the computer, simplifying the structure of complex instructions by using icons and menus; in particular, programs and files are much easier to organize. Some numerical simulations have been presented to illustrate theoretical
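The SIS dynamics described can be sketched with a simple forward-Euler integration; the parameter values below are illustrative assumptions, not those used in the paper's MATLAB interface:

```python
def simulate_sis(beta, gamma, s0, i0, days, dt=0.1):
    """Forward-Euler integration of the classic SIS compartmental model:
        dS/dt = -beta*S*I/N + gamma*I
        dI/dt =  beta*S*I/N - gamma*I
    Recovered individuals return to the susceptible compartment, so with
    beta > gamma the infection settles at the endemic equilibrium
    I* = N * (1 - gamma/beta)."""
    n = s0 + i0
    s, i = float(s0), float(i0)
    trajectory = []
    for step in range(int(days / dt)):
        new_infections = beta * s * i / n * dt
        recoveries = gamma * i * dt
        s += recoveries - new_infections
        i += new_infections - recoveries
        trajectory.append((step * dt, s, i))
    return trajectory

# Illustrative parameters: R0 = beta/gamma = 5, endemic level I* = 800
traj = simulate_sis(beta=0.5, gamma=0.1, s0=990, i0=10, days=100)
t, s, i = traj[-1]
print(round(i))
```

Varying `beta` and `gamma` here corresponds to changing the global SIS parameters through the GUI described above.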
C4: Exploring Multiple Solutions in Graphical Models by Cluster Sampling.
Porway, Jake; Zhu, Song-Chun
2011-09-01
This paper presents a novel Markov Chain Monte Carlo (MCMC) inference algorithm called C(4)--Clustering with Cooperative and Competitive Constraints--for computing multiple solutions from posterior probabilities defined on graphical models, including Markov random fields (MRF), conditional random fields (CRF), and hierarchical models. The graphs may have both positive and negative edges for cooperative and competitive constraints. C(4) is a probabilistic clustering algorithm in the spirit of Swendsen-Wang. By turning the positive edges on/off probabilistically, C(4) partitions the graph into a number of connected components (ccps) and each ccp is a coupled subsolution with nodes connected by positive edges. Then, by turning the negative edges on/off probabilistically, C(4) obtains composite ccps (called cccps) with competing ccps connected by negative edges. At each step, C(4) flips the labels of all nodes in a cccp so that nodes in each ccp keep the same label while different ccps are assigned different labels to observe both positive and negative constraints. Thus, the algorithm can jump between multiple competing solutions (or modes of the posterior probability) in a single or a few steps. It computes multiple distinct solutions to preserve the intrinsic ambiguities and avoids premature commitments to a single solution that may not be valid given later context. C(4) achieves a mixing rate faster than existing MCMC methods, such as various Gibbs samplers and Swendsen-Wang cuts. It is also more "dynamic" than common optimization methods such as ICM, LBP, and graph cuts. We demonstrate the C(4) algorithm in line drawing interpretation, scene labeling, and object recognition.
Object-Oriented Modelling of Flexible Beams
International Nuclear Information System (INIS)
Schiavo, Francesco; Vigano, Luca; Ferretti, Gianni
2006-01-01
In this paper the problem of modelling flexible thin beams in multibody systems is tackled. The proposed model, implemented with the object-oriented modelling language Modelica, is completely modular, allowing the realization of complex systems by simple aggregation of basic components. The finite element method is employed as the basic scheme to spatially discretize the model equations. Exploiting the modular features of the language, the beam substructuring discretisation scheme (mixed finite element-finite volume) is derived as well. Selected simulation results are presented in order to validate the model with respect to both theoretical predictions and literature reference results
Replicates in high dimensions, with applications to latent variable graphical models.
Tan, Kean Ming; Ning, Yang; Witten, Daniela M; Liu, Han
2016-12-01
In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.
Hitt, O.; Hutchins, M.
2016-12-01
UK river waters face considerable future pressures, primarily from population growth and climate change. In understanding controls on river water quality, experimental studies have successfully identified responses to single or paired stressors under controlled conditions. Generalised Linear Model (GLM) approaches are commonly used to quantify stressor-response relationships. To explore a wider variety of stressors, physics-based models are used. Our objective is to evaluate how five different types of stressor influence the severity of river eutrophication and its impact on Dissolved Oxygen (DO), an integrated measure of river ecological health. This is done by applying a physics-based river quality model for 4 years at daily time step to a 92 km stretch in the 3445 km2 Thames (UK) catchment. To understand the impact of model structural uncertainty we present results from two alternative formulations of the biological response. Sensitivity analysis carried out using the QUESTOR model (QUality Evaluation and Simulation TOol for River systems) considered gradients of various stressors: river flow, water temperature, urbanisation (abstractions and sewage/industrial effluents), phosphate concentrations in effluents and tributaries, and riparian tree shading (modifying the light input). Scalar modifiers applied to the 2009-12 time-series inputs define the gradients. The model has been run for each combination of the values of these 5 variables. Results are analysed using graphical methods in order to identify variation in the type of relationship between different pairs of stressors on the system response. The method allows all outputs from each combination of stressors to be displayed in one graphic, thus showing the results of hundreds of model runs simultaneously. This approach can be carried out for all stressor pairs, and many locations/determinands. Supporting statistical analysis (GLM) reinforces the findings from the graphical analysis. Analysis suggests that
Probabilistic Graphical Models on Multi-Core CPUs using Java 8
Masegosa, Andres R.; Martinez, Ana M.; Borchani, Hanen
2016-01-01
In this paper, we discuss software design issues related to the development of parallel computational intelligence algorithms on multi-core CPUs, using the new Java 8 functional programming features. In particular, we focus on probabilistic graphical models (PGMs) and present the parallelisation of a collection of algorithms that deal with inference and learning of PGMs from data. Namely, maximum likelihood estimation, importance sampling, and greedy search for solving combinatorial optimisat...
Object tracking using active appearance models
DEFF Research Database (Denmark)
Stegmann, Mikkel Bille
2001-01-01
This paper demonstrates that (near) real-time object tracking can be accomplished by the deformable template model; the Active Appearance Model (AAM) using only low-cost consumer electronics such as a PC and a web-camera. Successful object tracking of perspective, rotational and translational transformations was carried out using a training set of five images. The tracker was automatically initialised by a described multi-scale initialisation method and achieved a performance in the range of 7-10 frames per second.
Inventory of data bases, graphics packages, and models in Department of Energy laboratories
International Nuclear Information System (INIS)
Shriner, C.R.; Peck, L.J.
1978-11-01
A central inventory of energy-related environmental bibliographic and numeric data bases, graphics packages, integrated hardware/software systems, and models was established at Oak Ridge National Laboratory in an effort to make these resources at Department of Energy (DOE) laboratories better known and available to researchers and managers. This inventory will also serve to identify and avoid duplication among laboratories. The data were collected at each DOE laboratory, then sent to ORNL and merged into a single file. This document contains the data from the merged file. The data descriptions are organized under major data types: data bases, graphics packages, integrated hardware/software systems, and models. The data include descriptions of subject content, documentation, and contact persons. Also provided are computer data such as media on which the item is available, size of the item, computer on which the item executes, minimum hardware configuration necessary to execute the item, software language(s) and/or data base management system utilized, and character set used. For the models, additional data are provided to define the model more accurately. These data include a general statement of algorithms, computational methods, and theories used by the model; organizations currently using the model; the general application area of the model; sources of data utilized by the model; model validation methods, sensitivity analysis, and procedures; and general model classification. Data in this inventory will be available for on-line data retrieval on the DOE/RECON system
The IRMIS object model and services API
International Nuclear Information System (INIS)
Saunders, C.; Dohan, D.A.; Arnold, N.D.
2005-01-01
The relational model developed for the Integrated Relational Model of Installed Systems (IRMIS) toolkit has been successfully used to capture the Advanced Photon Source (APS) control system software (EPICS process variables and their definitions). The relational tables are populated by a crawler script that parses each Input/Output Controller (IOC) start-up file when an IOC reboot is detected. User interaction is provided by a Java Swing application that acts as a desktop for viewing the process variable information. Mapping between the display objects and the relational tables was carried out with the Hibernate Object Relational Modeling (ORM) framework. Work is well underway at the APS to extend the relational modeling to include control system hardware. For this work, due in part to the complex user interaction required, the primary application development environment has shifted from the relational database view to the object oriented (Java) perspective. With this approach, the business logic is executed in Java rather than in SQL stored procedures. This paper describes the object model used to represent control system software, hardware, and interconnects in IRMIS. We also describe the services API used to encapsulate the required behaviors for creating and maintaining the complex data. In addition to the core schema and object model, many important concepts in IRMIS are captured by the services API. IRMIS is an ambitious collaborative effort for defining and developing a relational database and associated applications to comprehensively document the large and complex EPICS-based control systems of today's accelerators. The documentation effort includes process variables, control system hardware, and interconnections. The approach could also be used to document all components of the accelerator, including mechanical, vacuum, power supplies, etc. One key aspect of IRMIS is that it is a documentation framework, not a design and development tool. We do not
Critically Important Object Security System Element Model
Directory of Open Access Journals (Sweden)
I. V. Khomyackov
2012-03-01
Full Text Available A stochastic model of a critically important object security system element has been developed. The model includes a mathematical description of the security system element properties and of external influences. The state evolution of the security system element is described by a semi-Markov process with a finite number of states, the semi-Markov matrix and the initial distribution of semi-Markov process state probabilities. External influences are specified by the intensity of a Poisson flow.
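A minimal simulation sketch of a finite-state semi-Markov element of this kind; the embedded transition matrix and holding-time distributions below are invented for illustration, and the paper's Poisson external-influence component is not modelled:

```python
import numpy as np

def simulate_semi_markov(p, holding, start, horizon, rng):
    """Simulate a finite-state semi-Markov process: state jumps follow
    the embedded transition matrix p, while the time spent in each state
    is drawn from a state-dependent holding-time distribution (this is
    what distinguishes a semi-Markov process from a plain Markov chain)."""
    t, state = 0.0, start
    path = [(0.0, start)]
    while t < horizon:
        t += holding[state](rng)                # random holding time
        state = rng.choice(len(p), p=p[state])  # jump via embedded chain
        path.append((t, state))
    return path

rng = np.random.default_rng(2)
p = np.array([[0.0, 1.0],       # invented embedded jump chain:
              [0.5, 0.5]])      # state 0 = operational, state 1 = degraded
holding = {0: lambda r: r.exponential(2.0),
           1: lambda r: r.uniform(0.5, 1.5)}
path = simulate_semi_markov(p, holding, start=0, horizon=50.0, rng=rng)
print(len(path))
```

State probabilities over time, estimated from many such runs, would play the role of the analytical distribution described in the abstract.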
Object Oriented Modelling and Dynamical Simulation
DEFF Research Database (Denmark)
Wagner, Falko Jens; Poulsen, Mikael Zebbelin
1998-01-01
This report, with appendix, describes the work done in a master's project at DTU. The goal of the project was to develop a concept for simulation of dynamical systems based on object-oriented methods. The result was a library of C++ classes, for use both when building component-based models and when...
Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David
1987-01-01
A capability was developed for rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets by implementing computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities provide a new and valuable tool for understanding complex scientific data, and a new application of parallel computing via the MPP. A prototype system with these capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display, to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.
Probabilistic object and viewpoint models for active object recognition
CSIR Research Space (South Africa)
Govender, N
2013-09-01
Full Text Available across views to be integrated in a principled manner, and permitting a principled approach to data acquisition. Existing approaches however mostly rely on probabilistic models which make simplifying assumptions such as that features may be treated...
Zachrisson, Anders
2013-01-01
The question of what we mean by the term outer object has its roots in the epistemological foundation of psychoanalysis. From the very beginning, Freud's view was Kantian, and psychoanalysis has kept that stance, as it seems. The author reviews the internal/external issue in Freud's thinking and in the central object relations theories (Klein, Winnicott, and Bion). Against this background he proposes a simple model to differentiate the concept of object along one central dimension: internal object, external object, and actual person. The main arguments are: (1) there is no direct, unmediated perception of the actual person--the experience of the other is always affected by the perceiver's subjectivity; (2) in intense transference reactions and projections, the perception of the person is dominated by the qualities of an inner object--and the other person "becomes" an external object for the perceiver; (3) when this distortion is less dominating, the other person to a higher degree remains a separate other--a person in his or her own right. Clinical material illustrates these phenomena, and a graphical picture of the model is presented. Finally, with the model as background, the author comments on a selection of phenomena and concepts such as unobjectionable transference, "the third position," mourning and loneliness. The way that the internal colours and distorts the external is of course a central preoccupation of psychoanalysis generally. (Spillius et al., 2011, p. 326)
A Module for Graphical Display of Model Results with the CBP Toolbox
Energy Technology Data Exchange (ETDEWEB)
Smith, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-04-21
This report describes work performed by the Savannah River National Laboratory (SRNL) in fiscal year 2014 to add enhanced graphical capabilities to display model results in the Cementitious Barriers Project (CBP) Toolbox. Because Version 2.0 of the CBP Toolbox has just been released, the graphing enhancements described in this report have not yet been integrated into a new version of the Toolbox. Instead they have been tested using a standalone GoldSim model and, while they are substantially complete, may undergo further refinement before full implementation. Nevertheless, this report is issued to document the FY14 development efforts which will provide a basis for further development of the CBP Toolbox.
Sanchez, Julio
2003-01-01
Part I - Graphics Fundamentals. PC GRAPHICS OVERVIEW: History and Evolution; Short History of PC Video; PS/2 Video Systems; SuperVGA; Graphics Coprocessors and Accelerators; Graphics Applications; State-of-the-Art in PC Graphics; 3D Application Programming Interfaces. POLYGONAL MODELING: Vector and Raster Data; Coordinate Systems; Modeling with Polygons. IMAGE TRANSFORMATIONS: Matrix-based Representations; Matrix Arithmetic; 3D Transformations. PROGRAMMING MATRIX TRANSFORMATIONS: Numeric Data in Matrix Form; Array Processing. PROJECTIONS AND RENDERING: Perspective; The Rendering Pipeline. LIGHTING AND SHADING: Lightin
Bondarenko, Irina; Raghunathan, Trivellore
2016-07-30
Multiple imputation has become a popular approach for analyzing incomplete data. Many software packages are available to multiply impute the missing values and to analyze the resulting completed data sets. However, diagnostic tools to check the validity of the imputations are limited, and the majority of the currently available methods need considerable knowledge of the imputation model. In many practical settings, however, the imputer and the analyst may be different individuals or from different organizations, and the analyst model may or may not be congenial to the model used by the imputer. This article develops and evaluates a set of graphical and numerical diagnostic tools for two practical purposes: (i) for an analyst to determine whether the imputations are reasonable under his/her model assumptions without actually knowing the imputation model assumptions; and (ii) for an imputer to fine-tune the imputation model by checking the key characteristics of the observed and imputed values. The tools are based on the numerical and graphical comparisons of the distributions of the observed and imputed values conditional on the propensity of response. The methodology is illustrated using simulated data sets created under a variety of scenarios. The examples focus on continuous and binary variables, but the principles can be used to extend methods for other types of variables. Copyright © 2016 John Wiley & Sons, Ltd.
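The core diagnostic idea, comparing observed and imputed values conditional on the estimated response propensity, can be sketched as follows. The data-generating model, the logistic propensity, and the simple regression imputation are all assumptions for demonstration, not the paper's procedure:

```python
import math
import random
import statistics

# Simulate data with missingness that depends on a covariate (MAR),
# impute by regression among respondents, then compare observed and
# imputed means within a propensity stratum.
random.seed(0)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, 1) for xi in x]
prop = [1 / (1 + math.exp(-xi)) for xi in x]        # response propensity
observed = [random.random() < p for p in prop]

# crude imputation model: regress y on x among respondents
pairs = [(xi, yi) for xi, yi, o in zip(x, y, observed) if o]
mx = statistics.mean(xi for xi, _ in pairs)
my = statistics.mean(yi for _, yi in pairs)
b = (sum((xi - mx) * (yi - my) for xi, yi in pairs)
     / sum((xi - mx) ** 2 for xi, _ in pairs))
a = my - b * mx
y_complete = [yi if o else a + b * xi for xi, yi, o in zip(x, y, observed)]

def stratum_gap(lo, hi):
    """Gap between observed and imputed means within a propensity stratum."""
    obs = [yi for yi, o, p in zip(y_complete, observed, prop) if o and lo <= p < hi]
    imp = [yi for yi, o, p in zip(y_complete, observed, prop) if not o and lo <= p < hi]
    return abs(statistics.mean(obs) - statistics.mean(imp))

gap = stratum_gap(0.4, 0.6)  # small gap suggests reasonable imputations
```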
Modeling business objects with XML schema
Daum, Berthold
2003-01-01
XML Schema is the new language standard from the W3C and the new foundation for defining data in Web-based systems. There is a wealth of information available about schemas but very little understanding of how to use this highly formal specification for creating documents. Grasping the power of schemas means going back to the basics of documents themselves and the semantic rules, or grammars, that define them. Written for schema designers, system architects, programmers, and document authors, Modeling Business Objects with XML Schema guides you through understanding schemas, from the basic concepts, type systems, type derivation, inheritance, and namespace handling, through advanced concepts in schema design. It reviews basic XML syntax and the Schema recommendation in detail; builds a knowledge-base model (about jazz music) step by step that is used throughout the book; and discusses schema design in large environments, best-practice design patterns, and Schema's relation to object-oriented concepts.
National Research Council Canada - National Science Library
San
2001-01-01
... (OR) modeling and analysis. However, designing and implementing DES can be a time-consuming and error-prone task. This thesis designed, implemented and evaluated a tool, the Event Graph Graphical Design Tool (EGGDT...
Dynamic object-oriented geospatial modeling
Directory of Open Access Journals (Sweden)
Tomáš Richta
2010-02-01
Full Text Available Published literature about moving objects (MO) simplifies the problem to the representation and storage of moving points, moving lines, or moving regions. The main insufficiency of this approach is the lack of modeling of MO inner structure and dynamics – the autonomy of the moving agent. This paper describes the basics of an object-oriented geospatial methodology for modeling complex systems consisting of agents that move within a spatial environment. The main idea is that during the agent's movement, different kinds of connections with other moving or stationary objects are established or disposed of, based on the satisfaction or non-fulfilment of spatial constraints. The methodology is constructed with regard to the following two main conditions: (1) the inner behavior of agents should be representable by any formalism, e.g. a Petri net, a finite state machine, etc.; and (2) the spatial characteristics of the environment should be supplied by any information system that is able to store a defined set of spatial types and support a defined set of spatial operations. Finally, the methodology is demonstrated on a simple simulation model of a tram transportation system.
Robust Measurement via A Fused Latent and Graphical Item Response Theory Model.
Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Ying, Zhiliang
2018-03-12
Item response theory (IRT) plays an important role in psychological and educational measurement. Unlike the classical testing theory, IRT models aggregate the item level information, yielding more accurate measurements. Most IRT models assume local independence, an assumption not likely to be satisfied in practice, especially when the number of items is large. Results in the literature and simulation studies in this paper reveal that misspecifying the local independence assumption may result in inaccurate measurements and differential item functioning. To provide more robust measurements, we propose an integrated approach by adding a graphical component to a multidimensional IRT model that can offset the effect of unknown local dependence. The new model contains a confirmatory latent variable component, which measures the targeted latent traits, and a graphical component, which captures the local dependence. An efficient proximal algorithm is proposed for the parameter estimation and structure learning of the local dependence. This approach can substantially improve the measurement, given no prior information on the local dependence structure. The model can be applied to measure both a unidimensional latent trait and multidimensional latent traits.
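As background for the model being extended, the classical two-parameter logistic (2PL) item response function can be sketched as follows. This is the standard textbook form, not the paper's fused latent-and-graphical model, and the parameter values are arbitrary illustrations:

```python
import math

# 2PL item response function: probability of a correct answer given
# ability theta, item discrimination a, and item difficulty b.
def p_correct(theta, a, b):
    """P(correct | theta) under the two-parameter logistic model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# success probability grows with ability; at theta == b it equals 0.5
p_low, p_mid, p_high = (p_correct(t, a=1.2, b=0.0) for t in (-2.0, 0.0, 2.0))
```

Local independence assumes responses are independent given theta; the paper's graphical component relaxes exactly that assumption.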
Directory of Open Access Journals (Sweden)
Prof. Patty K. Wongpakdee
2013-06-01
Full Text Available “Resurfacing Graphics” deals with the subject of unconventional design, with the purpose of engaging the viewer to experience graphics beyond paper's passive surface. Unconventional designs serve to reinvigorate people whose senses are dulled by the typical printed graphics that bombard them each day. Today's cutting-edge designers, illustrators and artists utilize graphics in a unique manner that allows for tactile interaction. Such works serve as valuable teaching models and encourage students to do the following: (1) investigate the trans-disciplines of art and technology; (2) appreciate that this approach can have a positive effect on the environment; (3) examine and research other approaches to design communications; and (4) utilize new mediums to stretch the boundaries of artistic endeavor. This paper examines how visual communicators are “Resurfacing Graphics” by using atypical surfaces and materials such as textile, wood, ceramics and even water. Such non-traditional transmissions of visual language serve to demonstrate students' overreliance on paper as an outdated medium. With this exposure, students can become forward-thinking, eco-friendly, creative leaders by expanding their creative breadth and continuing the perpetual exploration for new ways to make their mark.
Moving object detection using keypoints reference model
Directory of Open Access Journals (Sweden)
Wan Zaki Wan Mimi Diyana
2011-01-01
Full Text Available Abstract This article presents a new method for background subtraction (BGS) and object detection in real-time video applications, using a combination of frame differencing and a scale-invariant feature detector. The method takes the benefits of background modelling and of the invariant feature detector to improve accuracy in various environments. The proposed method consists of three main modules, namely the modelling, matching and subtraction modules. A comparison of the proposed method with a popular Gaussian mixture model showed that correct classification can be increased up to 98%, with reduced false-negative and false-positive rates. Besides that, the proposed method has shown great potential to overcome the drawbacks of traditional BGS in handling challenges like shadow effects and lighting fluctuation.
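The frame-differencing core of background subtraction can be sketched on tiny synthetic "frames" (nested lists standing in for real video); the threshold value is an arbitrary assumption:

```python
# Frame differencing: a pixel is foreground when it departs from the
# background model by more than a threshold.
THRESH = 30

def diff_mask(background, frame):
    """Binary foreground mask: 1 where the frame differs from the model."""
    return [[1 if abs(f - b) > THRESH else 0 for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

background = [[10] * 5 for _ in range(5)]
frame = [row[:] for row in background]
frame[2][2] = 200          # a bright "moving object" pixel
mask = diff_mask(background, frame)
foreground_pixels = sum(map(sum, mask))
```

The paper's matching module would then filter such masks using scale-invariant keypoints to reject shadow and lighting artifacts.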
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
Bilingual Object Naming: A Connectionist Model.
Fang, Shin-Yi; Zinszer, Benjamin D; Malt, Barbara C; Li, Ping
2016-01-01
Patterns of object naming often differ between languages, but bilingual speakers develop convergent naming patterns in their two languages that are distinct from those of monolingual speakers of each language. This convergence appears to reflect interactions between lexical representations for the two languages. In this study, we developed a self-organizing connectionist model to simulate semantic convergence in the bilingual lexicon and investigate the mechanisms underlying this semantic convergence. We examined the similarity of patterns in the simulated data to empirical data from past research, and we identified how semantic convergence was manifested in the simulated bilingual lexical knowledge. Furthermore, we created impaired models in which components of the network were removed so as to examine the importance of the relevant components on bilingual object naming. Our results demonstrate that connections between two languages' lexicons can be established through the simultaneous activations of related words in the two languages. These connections between languages allow the outputs of their lexicons to become more similar, that is, to converge. Our model provides a basis for future computational studies of how various input variables may affect bilingual naming patterns.
Anquez, Jérémie; Boubekeur, Tamy; Bibin, Lazar; Angelini, Elsa; Bloch, Isabelle
2009-01-01
Potential sanitary effects related to electromagnetic fields exposure raise public concerns, especially for fetuses during pregnancy. Human fetus exposure can only be assessed through simulated dosimetry studies, performed on anthropomorphic models of pregnant women. In this paper, we propose a new methodology to generate a set of detailed utero-fetal unit (UFU) 3D models during the first and third trimesters of pregnancy, based on segmented 3D ultrasound and MRI data. UFU models are built using recent geometry processing methods derived from mesh-based computer graphics techniques and embedded in a synthetic woman body. Nine pregnant woman models have been generated using this approach and validated by obstetricians, for anatomical accuracy and representativeness.
Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang
2011-01-01
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
Comfort constrains graphic workspace: test results of a 3D forearm model.
Schillings, J J; Thomassen, A J; Meulenbroek, R G
2000-01-01
Human movement performance is subject to many physical and psychological constraints. Analyses of these constraints may not only improve our understanding of the performance aspects that subjects need to keep under continuous control, but may also shed light on the possible origins of specific behavioral preferences that people display in motor tasks. The goal of the present paper is to make an empirical contribution here. In a recent simulation study, we reported effects of pen-grip and forearm-posture constraints on the spatial characteristics of the pen tip's workspace in drawing. The effects concerned changes in the location, size, and orientation of the reachable part of the writing plane, as well as variations in the computed degree of comfort in the hand and finger postures required to reach the various parts of this area. The present study is aimed at empirically evaluating to what extent these effects influence subjects' graphic behavior in a simple, free line-drawing task. The task involved the production of small back-and-forth drawing movements in various directions, to be chosen randomly under three forearm-posture and five pen-grip conditions. The observed variations in the subjects' choice of starting positions showed a high level of agreement with those of the simulated graphic-area locations, showing that biomechanically defined comfort of starting postures is indeed a determinant of the selection of starting points. Furthermore, between-condition rotations in the frequency distributions of the realized stroke directions corresponded to the simulation results, which again confirms the importance of comfort in directional preferences. It is concluded that postural rather than spatial constraints primarily affect subjects' preferences for starting positions and stroke directions in graphic motor performance. The relevance of the present modelling approach and its results for the broader field of complex motor behavior, including the manipulation of
Ontological model for representation of learning objectives
Ng, Lai
2005-01-01
Learning objects have been used to provide personalized learning experiences. In particular, sequenced learning objects are recommended according to unique individual learning objectives. The opportunity for personalization by learning objectives is not fully exploited due to limited and duplicated efforts in creating learning objectives and connecting them with learning objects. Additionally, current standardization efforts do not offer sufficient support of automatic discovery of learning o...
Numerical modeling of the motion of deformable ellipsoidal objects in slow viscous flows
Jiang, Dazhi
2007-03-01
An algorithm for modeling the strain and rotation of deformable ellipsoidal objects in viscous flows based on Eshelby's (1957. Proceedings of the Royal Society of London A241, 376-396) theory is presented and is implemented in a fully graphic mathematics application (Mathcad ®, http://www.mathsoft.com). The algorithm resolves all singular cases encountered in modeling large finite deformations. The orientation of ellipsoidal objects is specified in terms of polar coordinate angles which are easily converted to the trend and plunge angles of the three principal axes rather than the Euler angles. With the Mathcad worksheets presented in the supplementary data associated with this paper, one can model the strain and rotation paths of individual deformable objects and the development of preferred orientation and shape fabrics for a population of deformable objects in any homogeneous viscous flow. The shape and preferred orientation fabrics for a population of deformable objects can be presented in both a three-dimensional form and a two-dimensional form, allowing easy comparison between field data and model predictions. The full graphic interface of Mathcad ® makes using the worksheets as easy as using a spreadsheet. The modeler can interact fully with the computation and customize the type and format of the output data to best fit the purpose of the investigation and to facilitate the comparison of model predictions with geological observations.
Path generation algorithm for UML graphic modeling of aerospace test software
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao
2018-03-01
Traditionally, aerospace software test engineers rely on their own experience and on communication with the software developers to describe the software under test and to write test cases by hand, which is time-consuming, inefficient, and prone to gaps. With the high-reliability MBT tool developed by our company, a single modeling pass can automatically generate test-case documents, which is efficient and accurate. Accurately describing a process with a UML model depends on generating the paths that can be reached. Existing path-generation algorithms are either too simple, unable to combine branch paths and loops into complete paths, or so exhaustive that they produce meaningless path arrangements that are superfluous for aerospace software testing. Drawing on our aerospace project experience, we developed a path-generation algorithm tailored to UML graphical descriptions of aerospace test software.
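One way to combine branch and loop paths without exhaustive enumeration is a depth-first search that bounds how often each edge may be traversed, so a loop body is covered both with and without repetition. The activity graph and the per-edge bound below are illustrative assumptions, not the paper's algorithm:

```python
# Bounded-DFS path enumeration over a small UML-activity-style graph.
# "b" -> "a" forms a loop; each edge may be used at most `limit` times.
GRAPH = {"start": ["a"], "a": ["b", "end"], "b": ["a"], "end": []}

def paths(node, goal, edge_count=None, prefix=None, limit=2):
    edge_count = dict(edge_count or {})      # copy so siblings don't interfere
    prefix = (prefix or []) + [node]
    if node == goal:
        yield prefix
        return
    for nxt in GRAPH[node]:
        edge = (node, nxt)
        if edge_count.get(edge, 0) < limit:
            edge_count[edge] = edge_count.get(edge, 0) + 1
            yield from paths(nxt, goal, edge_count, prefix, limit)
            edge_count[edge] -= 1            # backtrack

all_paths = list(paths("start", "end"))
```

With `limit=2` this yields the loop-free path plus paths taking the loop once and twice, each usable as the skeleton of a test case.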
Chęciński, Jakub; Frankowski, Marek
2016-10-01
We present a tool for fully automated generation of both simulation configuration files (MIF) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows fast, error-resistant and easy creation of MIF files, without the programming skills usually required to write them by hand. With MAGE we provide OOMMF extensions that complement it with magnetoresistance and spin-transfer-torque calculations, as well as selection of local magnetization data for output. Our software allows the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronized excitation application. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI for the automated creation of Matlab scripts suitable for analyzing such data with Fourier and wavelet transforms as well as user-defined operations.
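Programmatic MIF generation for a parameter sweep can be sketched as template filling. The template below uses real OOMMF specifier names (`Oxs_UniformExchange`, `Oxs_Demag`) and Tcl-style `set`, but the exact file layout is an assumption, not MAGE's output format:

```python
# Fill a MIF template once per value of a saturation-magnetisation sweep.
MIF_TEMPLATE = """# MIF 2.1
set Ms {ms}
Specify Oxs_UniformExchange {{ A 13e-12 }}
Specify Oxs_Demag {{}}
"""

def sweep_mifs(ms_values):
    """Return one MIF text per saturation-magnetisation value, keyed by filename."""
    return {f"sim_ms{ms:.0f}.mif": MIF_TEMPLATE.format(ms=ms)
            for ms in ms_values}

mifs = sweep_mifs([8.0e5, 8.6e5])
```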
A scale-free structure prior for graphical models with applications in functional genomics.
Directory of Open Access Journals (Sweden)
Paul Sheridan
Full Text Available The problem of reconstructing large-scale gene regulatory networks from gene expression data has garnered considerable attention in bioinformatics over the past decade, with the graphical modeling paradigm having emerged as a popular framework for inference. Analysis in a full Bayesian setting is contingent upon the assignment of a so-called structure prior, a probability distribution on networks encoding a priori biological knowledge either in the form of supplemental data or high-level topological features. A key topological consideration is that a wide range of cellular networks are approximately scale-free, meaning that the fraction P(k) of nodes in a network with degree k is roughly described by a power law with exponent between 2 and 3. The standard practice, however, is to utilize a random structure prior, which favors networks with binomially distributed degree distributions. In this paper, we introduce a scale-free structure prior for graphical models based on the formula for the probability of a network under a simple scale-free network model. Unlike the random structure prior, its scale-free counterpart requires a node labeling as a parameter. In order to use this prior for large-scale network inference, we design a novel Metropolis-Hastings sampler for graphical models that includes a node labeling as a state-space variable. In a simulation study, we demonstrate that the scale-free structure prior outperforms the random structure prior at recovering scale-free networks while retaining the ability to recover random networks. We then estimate a gene association network from gene expression data taken from a breast cancer tumor study, showing that the scale-free structure prior recovers hubs, including the previously unknown hub SLC39A6, which is a zinc transporter that has been implicated in the spread of breast cancer to the lymph nodes. Our analysis of the breast cancer expression data underscores the value of the scale
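The flavor of a scale-free structure prior can be sketched as a degree-based network score. The formula and the exponent below are simplifying assumptions (a value in the 2-3 range discussed in the abstract), not the paper's exact prior, which also involves a node labeling:

```python
import math

# Score a network, up to a normalising constant, by a power law over
# node degrees: log P(G) ~ -gamma * sum_i log(deg_i + 1).
GAMMA = 2.5   # assumed exponent

def log_prior(edges, n_nodes):
    """Unnormalised log-prior favouring hub-dominated degree sequences."""
    deg = [0] * n_nodes
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return -GAMMA * sum(math.log(d + 1) for d in deg)

# with equally many edges, a hub-centred star scores higher than a chain
star = [(0, j) for j in range(1, 5)]
chain = [(0, 1), (1, 2), (2, 3), (3, 4)]
```

A Metropolis-Hastings sampler would add such a term to the data likelihood when proposing edge additions and deletions.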
A Knowledge Modeling Method for Computer Graphics Design & Production Based on Ontology
Directory of Open Access Journals (Sweden)
Chen Tong
2017-01-01
Full Text Available As one of the most critical stages of the CG (Computer Graphics) industry, CG design & production needs the support of professional knowledge and practical experience from multiple disciplines. With its strengths in knowledge sharing, integration and reuse, knowledge modeling can greatly increase efficiency, reduce cost and avoid repeated errors in CG design & production. However, knowledge modeling of CG design & production differs greatly from that of other fields. On the one hand, it is similar to physical product design, involving a great deal of tacit knowledge such as modeling skills, reasoning knowledge and so on. On the other hand, as with film, CG design & production needs a lot of unstructured descriptive information. The heterogeneity between physical products and film makes knowledge modeling more complicated. Thus a systematic knowledge modeling method based on ontology is proposed in this paper to aid CG design & production. CG animation knowledge is captured and organized from the viewpoint of three aspects: requirements, design and production. The knowledge is categorized into static and dynamic knowledge, and an ontology is adopted to construct a hierarchical model to organize it, so as to offer uniform communication semantics for designers from different fields. Based on the animation script, a CG design task model is proposed to drive the organization and management of the different knowledge involved in CG design & production. Finally, we apply this method to the knowledge modeling of naked-eye animation design and production to illustrate its effectiveness.
Possibility of object recognition using Altera's model based design approach
International Nuclear Information System (INIS)
Tickle, A J; Harvey, P K; Smith, J S; Wu, F
2009-01-01
Object recognition is an image processing task of finding a given object in a selected image or video sequence. Object recognition can be divided into two areas; one of these is decision-theoretic and deals with patterns described by quantitative descriptors such as length, area, shape and texture. With the Graphical User Interface Circuitry (GUIC) methodology employed here being relatively new for object recognition systems, the aim of this work is to identify whether the developed circuitry can detect certain shapes or strings within the target image. A much smaller reference image supplies the preset data for identification; tests are conducted for both binary and greyscale images, and the additional mathematical morphology used to highlight the area within the target image where the object(s) are located is also presented. This provides proof that basic recognition methods are valid and would allow progression to developing decision-theoretic and learning-based approaches using GUICs for use in multidisciplinary tasks.
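The reference-image matching described above can be shown in miniature: slide a small binary reference over a binary target and report the locations of exact matches. This is a pure-Python sketch of the idea, not the FPGA/GUIC implementation:

```python
# Binary template matching: find every offset where the reference
# pattern matches the target exactly.
TARGET = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0]]
REF = [[1, 1],
       [1, 0]]

def find_matches(target, ref):
    """Return (row, col) offsets where ref matches target exactly."""
    th, tw = len(target), len(target[0])
    rh, rw = len(ref), len(ref[0])
    hits = []
    for r in range(th - rh + 1):
        for c in range(tw - rw + 1):
            if all(target[r + i][c + j] == ref[i][j]
                   for i in range(rh) for j in range(rw)):
                hits.append((r, c))
    return hits

matches = find_matches(TARGET, REF)
```

Morphological dilation of the hit positions would then highlight the surrounding region, as the paper does for located objects.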
Developing natural resource models using the object modeling system: feasibility and challenges
Directory of Open Access Journals (Sweden)
L. R. Ahuja
2005-01-01
Full Text Available Current challenges in natural resource management have created demand for integrated, flexible, and easily parameterized hydrologic models. Most existing models are monolithic rather than modular, so modifications (e.g., changes in process representation) require considerable time, effort, and expense. In this paper, the feasibility and challenges of using the Object Modeling System (OMS) for natural resource model development are explored. The OMS is a Java-based modeling framework that facilitates simulation model development, evaluation, and deployment. In general, the OMS consists of a library of science, control, and database modules and a means to assemble the selected modules into an application-specific modeling package. The framework is supported by data dictionary, data retrieval, GIS, graphical visualization, and statistical analysis utility modules. Specific features of the OMS that are discussed include: (1) how to reduce duplication of effort in natural resource modeling; (2) how to make natural resource models easier to build, apply, and evaluate; (3) how to facilitate long-term maintainability of existing and new natural resource models; and (4) how to improve the quality of natural resource model code and ensure the credibility of model implementations. Examples of integrating a simple water balance model and a large monolithic model into the OMS are presented.
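A "science module" of the kind a framework like OMS assembles can be sketched as a component with explicit inputs and outputs. The daily bucket model and its parameter names below are illustrative assumptions, not the paper's water balance model (and OMS modules are Java, sketched here in Python for brevity):

```python
# Toy water-balance component: a soil "bucket" with fixed capacity;
# inflow is precipitation, outflow is evapotranspiration plus overflow runoff.
class WaterBalance:
    def __init__(self, capacity_mm):
        self.capacity = capacity_mm
        self.storage = 0.0

    def step(self, precip_mm, et_mm):
        """One daily time step; returns runoff generated that day (mm)."""
        self.storage = max(0.0, self.storage + precip_mm - et_mm)
        runoff = max(0.0, self.storage - self.capacity)
        self.storage -= runoff
        return runoff

wb = WaterBalance(capacity_mm=50.0)
runoffs = [wb.step(p, 2.0) for p in [10, 30, 40, 0]]
```

Because state and interface are self-contained, a framework could swap this component for a finer-grained process representation without touching the rest of the model, which is the modularity argument made above.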
Mathematical structures for computer graphics
Janke, Steven J
2014-01-01
A comprehensive exploration of the mathematics behind the modeling and rendering of computer graphics scenes. Mathematical Structures for Computer Graphics presents an accessible and intuitive approach to the mathematical ideas and techniques necessary for two- and three-dimensional computer graphics. Focusing on the significant mathematical results, the book establishes key algorithms used to build complex graphics scenes. Written for readers with various levels of mathematical background, the book develops a solid foundation for graphics techniques and fills in relevant grap…
Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models
Parke, F. I.
1981-01-01
Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature, and velocity vector) at many points in the fluid. Visualization of the spatial variation in the value of these parameters is important to comprehend and check the data generated, to identify the regions of interest in the flow, and to communicate information about the flow to others effectively. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.
Glossiness of Colored Papers based on Computer Graphics Model and Its Measuring Method
Aida, Teizo
In the case of colored papers, the color of the surface strongly affects the gloss of the paper. A new glossiness measure for such colored papers is suggested in this paper. First, using achromatic and chromatic Munsell colored chips, the author obtained an experimental equation which represents the relation between lightness V (or V and saturation C) and psychological glossiness Gph of these chips. Then, the author defined a new glossiness G for colored papers, based on the above-mentioned experimental equation for Gph and the Cook-Torrance reflection model, which is widely used in the field of computer graphics. This new glossiness is shown to be nearly proportional to the psychological glossiness Gph. The measuring system for the new glossiness G is furthermore described. The measuring time for one specimen is within 1 minute.
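The Cook-Torrance reflection model underlying the proposed glossiness can be sketched as a single specular term. The roughness and Fresnel constants below are illustrative assumptions, not the paper's fitted values, and the choice of a Beckmann distribution with Schlick's Fresnel approximation is one common variant of the model.

```python
import math

def cook_torrance_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h,
                           roughness=0.3, f0=0.04):
    """Cook-Torrance specular term D*G*F / (4 (N.L)(N.V)) with a Beckmann
    microfacet distribution, the Cook-Torrance geometric attenuation factor,
    and Schlick's Fresnel approximation. Inputs are cosines of the angles
    between surface normal N, light L, view V, and half-vector H."""
    # Beckmann microfacet distribution D
    m2 = roughness * roughness
    c2 = n_dot_h * n_dot_h
    d = math.exp((c2 - 1.0) / (m2 * c2)) / (math.pi * m2 * c2 * c2)
    # geometric attenuation G (masking/shadowing)
    g = min(1.0,
            2.0 * n_dot_h * n_dot_v / v_dot_h,
            2.0 * n_dot_h * n_dot_l / v_dot_h)
    # Schlick Fresnel F
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    return d * g * f / (4.0 * n_dot_l * n_dot_v)

# specular reflectance at normal incidence (all cosines = 1)
spec = cook_torrance_specular(1.0, 1.0, 1.0, 1.0)
```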
uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications
Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.
2015-01-01
In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs called hosts, including: Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a unique piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987
Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model
Putnam, Williama
2011-01-01
The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude, and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.
Balfer, Jenny; Bajorath, Jürgen
2014-09-22
Supervised machine learning models are widely used in chemoinformatics, especially for the prediction of new active compounds or targets of known actives. Bayesian classification methods are among the most popular machine learning approaches for the prediction of activity from chemical structure. Much work has focused on predicting structure-activity relationships (SARs) on the basis of experimental training data. By contrast, only a few efforts have thus far been made to rationalize the performance of Bayesian or other supervised machine learning models and better understand why they might succeed or fail. In this study, we introduce an intuitive approach for the visualization and graphical interpretation of naïve Bayesian classification models. Parameters derived during supervised learning are visualized and interactively analyzed to gain insights into model performance and identify features that determine predictions. The methodology is introduced in detail and applied to assess Bayesian modeling efforts and predictions on compound data sets of varying structural complexity. Different classification models and features determining their performance are characterized in detail. A prototypic implementation of the approach is provided.
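The kind of parameter-level interpretation described above can be sketched for a Bernoulli naive Bayes model by reading off per-feature log-odds contributions: each feature's term in the class-1-versus-class-0 log-odds shows which features drive a prediction. The data, features, and smoothing here are toy assumptions, not the study's compound sets or fingerprints.

```python
import math

def train_bernoulli_nb(X, y, alpha=1.0):
    """Fit per-class Bernoulli feature probabilities with Laplace smoothing.
    Returns {class: (prior, [P(feature_j = 1 | class)])}."""
    params = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        probs = [(sum(r[j] for r in rows) + alpha) / (n + 2 * alpha)
                 for j in range(len(X[0]))]
        params[c] = (n / len(X), probs)   # prior adds a constant offset only
    return params

def feature_contributions(params, x):
    """Per-feature log-odds contribution towards class 1 vs class 0,
    the quantity one would visualize to see what determines a prediction."""
    (_, p0), (_, p1) = params[0], params[1]
    contrib = []
    for j, xj in enumerate(x):
        a = p1[j] if xj else 1.0 - p1[j]
        b = p0[j] if xj else 1.0 - p0[j]
        contrib.append(math.log(a / b))
    return contrib

X = [[1, 0], [1, 1], [0, 1], [0, 0]]   # toy binary fingerprints
y = [1, 1, 0, 0]                       # toy activity labels
params = train_bernoulli_nb(X, y)
contrib = feature_contributions(params, [1, 0])
# feature 0 being "on" pushes towards class 1; feature 1 is uninformative
```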
Developing a multiscale, multi-resolution agent-based brain tumor model by graphics processing units
Directory of Open Access Journals (Sweden)
Zhang Le
2011-12-01
Full Text Available Abstract Multiscale agent-based modeling (MABM) has been widely used to simulate Glioblastoma Multiforme (GBM) and its progression. At the intracellular level, the MABM approach employs a system of ordinary differential equations to describe quantitatively specific intracellular molecular pathways that determine phenotypic switches among cells (e.g., from migration to proliferation and vice versa). At the intercellular level, MABM describes cell-cell interactions by a discrete module. At the tissue level, partial differential equations are employed to model the diffusion of chemoattractants, which are the input factors of the intracellular molecular pathway. Moreover, multiscale analysis makes it possible to explore the molecules that play important roles in determining the cellular phenotypic switches that in turn drive the whole GBM expansion. However, owing to limited computational resources, MABM is currently a theoretical biological model that uses relatively coarse grids to simulate a few cancer cells in a small slice of brain cancer tissue. In order to improve this theoretical model to simulate and predict actual GBM cancer progression in real time, a graphics processing unit (GPU)-based parallel computing algorithm was developed and combined with the multi-resolution design to speed up the MABM. The simulated results demonstrated that the GPU-based, multi-resolution and multiscale approach can accelerate the previous MABM around 30-fold with relatively fine grids in a large extracellular matrix. Therefore, the new model has great potential for simulating and predicting real-time GBM progression, if real experimental data are incorporated.
Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models
Directory of Open Access Journals (Sweden)
Marcello Benedetti
2017-11-01
Full Text Available Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.
Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models
Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro
2017-10-01
Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.
Inference in Graphical Gaussian Models with Edge and Vertex Symmetries with the gRc Package for R
DEFF Research Database (Denmark)
Højsgaard, Søren; Lauritzen, Steffen L
2007-01-01
In this paper we present the R package gRc for statistical inference in graphical Gaussian models in which symmetry restrictions have been imposed on the concentration or partial correlation matrix. The models are represented by coloured graphs where parameters associated with edges or vertices...
Repositioning the knee joint in human body FE models using a graphics-based technique.
Jani, Dhaval; Chawla, Anoop; Mukherjee, Sudipto; Goyal, Rahul; Vusirikala, Nataraju; Jayaraman, Suresh
2012-01-01
Human body finite element models (FE-HBMs) are available in standard occupant or pedestrian postures. There is a need to have FE-HBMs in the same posture as a crash victim or to be configured in varying postures. Developing FE models for all possible positions is not practically viable. The current work aims at obtaining a posture-specific human lower extremity model by reconfiguring an existing one. A graphics-based technique was developed to reposition the lower extremity of an FE-HBM by specifying the flexion-extension angle. Elements of the model were segregated into rigid (bones) and deformable components (soft tissues). The bones were rotated about the flexion-extension axis followed by rotation about the longitudinal axis to capture the twisting of the tibia. The desired knee joint movement was thus achieved. Geometric heuristics were then used to reposition the skin. A mapping defined over the space between bones and the skin was used to regenerate the soft tissues. Mesh smoothing was then done to augment mesh quality. The developed method permits control over the kinematics of the joint and maintains the initial mesh quality of the model. For some critical areas (in the joint vicinity) where element distortion is large, mesh smoothing is done to improve mesh quality. A method to reposition the knee joint of a human body FE model was developed. Repositioning of a model from 9 degrees of flexion to 90 degrees of flexion in just a few seconds, without subjective intervention, was demonstrated. Because the mesh quality of the repositioned model was maintained to a predefined level (typically to the level of a well-made model in the initial configuration), the model was suitable for subsequent simulations.
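The rigid-bone rotation step can be sketched with a Rodrigues rotation of node coordinates about a specified axis. This is an illustrative sketch of rotating bone nodes about the flexion-extension axis only; the soft-tissue remapping and mesh-smoothing stages of the actual method are omitted.

```python
import math

def rotate_about_axis(points, axis, angle_deg, origin):
    """Rodrigues rotation of 3D node coordinates by angle_deg about an
    arbitrary axis through a given origin (e.g., the joint centre)."""
    t = math.radians(angle_deg)
    ct, st = math.cos(t), math.sin(t)
    norm = math.sqrt(sum(a * a for a in axis))
    kx, ky, kz = (a / norm for a in axis)          # unit rotation axis k
    out = []
    for p in points:
        vx, vy, vz = (p[i] - origin[i] for i in range(3))
        # cross product k x v
        cx = ky * vz - kz * vy
        cy = kz * vx - kx * vz
        cz = kx * vy - ky * vx
        dot = kx * vx + ky * vy + kz * vz
        # v' = v cos(t) + (k x v) sin(t) + k (k.v)(1 - cos(t))
        rx = vx * ct + cx * st + kx * dot * (1 - ct)
        ry = vy * ct + cy * st + ky * dot * (1 - ct)
        rz = vz * ct + cz * st + kz * dot * (1 - ct)
        out.append((rx + origin[0], ry + origin[1], rz + origin[2]))
    return out

# rotate one tibia node 90 degrees about a z-aligned flexion-extension axis
new = rotate_about_axis([(1.0, 0.0, 0.0)], (0, 0, 1), 90.0, (0, 0, 0))
```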
A fast mass spring model solver for high-resolution elastic objects
Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian
2017-03-01
Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages, using the mean value coordinates method, to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual reality and physical fidelity, which has great potential for applications in computer animation.
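The Cholesky-to-conjugate-gradient swap at the heart of the solver can be sketched as follows. Dense Python lists are used for clarity; this is an illustrative sketch, whereas a real implementation would exploit the sparse spring topology and the GPU parallelism described above.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive definite system
    A x = b, the iterative solver substituted for Cholesky decomposition."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                          # residual b - A x, with x = 0
    p = r[:]                          # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:              # squared residual small enough
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# a tiny SPD system standing in for the (much larger) spring system matrix
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)          # exact solution is (1/11, 7/11)
```

Unlike a factorization, each iteration needs only matrix-vector products, which is what makes the method amenable to the GPU parallelization the paper proposes.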
Energy balance and the Malthusian parameter, m, of grazing small rodents : A graphic model.
Stenseth, Nils Chr
1978-01-01
A graphic model based on cost and gain functions is developed for predicting the relative magnitude of the Malthusian parameter, m, for different phenotypes. The analysis is mainly restricted to grazing small rodents. The cost function is derived by depicting the probability of death due to predation, parasitism, etc. as a function of time spent outside the nest. The gain function is derived by comparing the energy obtained by digestion with the energy used (or needed) for maintenance metabolism, both when outside and inside the nest. The model is applied for predicting the relative magnitude of the Malthusian parameter of small versus large phenotypes of grazing rodents. Of these, the smaller phenotypes are concluded to have the larger Malthusian parameter. This may not hold true for hunters (granivores and predators). These conclusions are used for reinterpreting the often observed geographical size trend in warm-blooded vertebrates (Bergmann's rule). The model is further applied to the determination of the relative magnitude of the Malthusian parameter for the aggressive and docile strategies hypothesized in Chitty's theory for fluctuating populations. Of these, the aggressive strategy is concluded to have the lowest Malthusian parameter. Although not verifying Chitty's theory, these results support the earlier hypothesis that the aggressive strategy may under certain situations have lower survival. Based on the present model, nothing can be said about whether or not a polymorphic population as hypothesized by Chitty will exhibit oscillations.
Structural and Functional Model of Organization of Geometric and Graphic Training of the Students
Poluyanov, Valery B.; Pyankova, Zhanna A.; Chukalkina, Marina I.; Smolina, Ekaterina S.
2016-01-01
The topicality of the investigated problem is stipulated by the social need for training competitive engineers with a high level of graphical literacy; especially geometric and graphic training of students and its projected results in a competence-based approach; individual characteristics and interests of the students, as well as methodological…
Meznarich, R. A.; Shava, R. C.; Lightner, S. L.
2009-01-01
Engineering design graphics courses taught in colleges or universities should provide and equip students preparing for employment with the basic occupational graphics skill competences required by engineering and technology disciplines. Academic institutions should introduce and include topics that cover the newer and more efficient graphics…
Directory of Open Access Journals (Sweden)
Andres eOrtiz
2015-11-01
Full Text Available Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in the brain activation patterns, along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables to factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as Sparse Gaussian Graphical Models, allow revealing conditional independence between regions by estimating the covariance between two variables given the rest as constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment) and AD subjects. Sparse computation fits perfectly here, as brain regions usually only interact with a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Grey Matter) densities of different regions reveals different structural covariation patterns between the different groups. Thus, the different connectivity patterns for Controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to leverage the classification accuracy. The results obtained in this work show the usefulness
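The conditional-independence idea behind the precision matrix can be illustrated with a toy example: for a Markov chain X → Y → Z, the covariance matrix shows X and Z as correlated, but the inverse covariance has a zero in the corresponding entry, encoding that X and Z are independent given Y. This is a minimal exact-arithmetic sketch of that idea, not the sparsity-penalized SICE estimation or the FDG-PET pipeline above.

```python
from fractions import Fraction

def invert3(m):
    """Invert a 3x3 matrix via the adjugate formula; exact with integers."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[Fraction(x, 1) / det for x in row] for row in adj]

# Covariance of a chain X -> Y -> Z with unit innovations:
# X and Z are marginally correlated (cov[0][2] = 1) ...
cov = [[1, 1, 1],
       [1, 2, 2],
       [1, 2, 3]]
precision = invert3(cov)
# ... but precision[0][2] == 0: the zero entry encodes X independent of Z
# given Y, which the marginal covariance does not reveal.
```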
Kim, Jane Paik; Roberts, Laura Weiss
Empirical ethics inquiry works from the notion that stakeholder perspectives are necessary for gauging the ethical acceptability of human studies and assuring that research aligns with societal expectations. Although common, studies involving different populations often entail comparisons of trends that problematize the interpretation of results. Using graphical model selection - a technique aimed at transcending limitations of conventional methods - this report presents data on the ethics of clinical research with two objectives: (1) to display the patterns of views held by ill and healthy individuals in clinical research as a test of the study's original hypothesis and (2) to introduce graphical model selection as a key analytic tool for ethics research. In this IRB-approved, NIH-funded project, data were collected from 60 mentally ill and 43 physically ill clinical research protocol volunteers, 47 healthy protocol-consented participants, and 29 healthy individuals without research protocol experience. Respondents were queried on the ethical acceptability of research involving people with mental and physical illness (i.e., cancer, HIV, depression, schizophrenia, and post-traumatic stress disorder) and non-illness related sources of vulnerability (e.g., age, class, gender, ethnicity). Using a statistical algorithm, we selected graphical models to display interrelationships among responses to questions. Both mentally and physically ill protocol volunteers revealed a high degree of connectivity among ethically-salient perspectives. Healthy participants, irrespective of research protocol experience, revealed patterns of views that were not highly connected. Between ill and healthy protocol participants, the pattern of views is vastly different. Experience with illness was tied to dense connectivity, whereas healthy individuals expressed views with sparse connections. In offering a nuanced perspective on the interrelation of ethically relevant responses, graphical
Accelerating a hydrological uncertainty ensemble model using graphics processing units (GPUs)
Tristram, D.; Hughes, D.; Bradshaw, K.
2014-01-01
The practical application of hydrological uncertainty models that are designed to generate multiple ensembles can be severely restricted by the available computer processing power and thus, the time taken to generate the results. CPU clusters can help in this regard, but are often costly to use continuously and maintain, causing scientists to look elsewhere for speed improvements. The use of powerful graphics processing units (GPUs) for application acceleration has become a recent trend, owing to their low cost per FLOP, and their highly parallel and throughput-oriented architecture, which makes them ideal for many scientific applications. However, programming these devices efficiently is non-trivial, seemingly making their use impractical for many researchers. In this study, we investigate whether redesigning the CPU code of an adapted Pitman rainfall-runoff uncertainty model is necessary to obtain a satisfactory speedup on GPU devices. A twelvefold speedup over a multithreaded CPU implementation was achieved by using a modern GPU with minimal changes to the model code. This success leads us to believe that redesigning code for the GPU is not always necessary to obtain a worthwhile speedup.
Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter
2017-02-01
It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
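The offline-train / online-classify pattern described above can be sketched with a toy single-component version: sample states of nature, push them through a forward model to generate expected measurement distributions, then classify new observations by posterior probability. The state names, forward model, and noise level are illustrative assumptions, not the paper's cement-evaluation setup, and the full hierarchical Gaussian-mixture machinery is deliberately collapsed to one dimension.

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def forward_model(state):
    """Pretend physics: a 'good' bond yields high echo amplitude, 'poor' low."""
    return {"good": 1.0, "poor": 0.3}[state]

def train(n=1000, noise=0.1, seed=0):
    """Offline stage: generate expected measurement distributions per state."""
    rng = random.Random(seed)
    stats = {}
    for state, prior in [("good", 0.5), ("poor", 0.5)]:
        samples = [forward_model(state) + rng.gauss(0, noise) for _ in range(n)]
        mu = sum(samples) / n
        var = sum((s - mu) ** 2 for s in samples) / n
        stats[state] = (prior, mu, math.sqrt(var))
    return stats

def classify(stats, x):
    """Online stage: posterior over states given a new observed measurement."""
    post = {s: p * gaussian_pdf(x, mu, sd) for s, (p, mu, sd) in stats.items()}
    z = sum(post.values())
    return {s: v / z for s, v in post.items()}

stats = train()
post = classify(stats, 0.95)   # a measurement near the "good" amplitude
```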
Thompson, John
2009-01-01
Graphic storytelling is a medium that allows students to make and share stories, while developing their art communication skills. American comics today are more varied in genre, approach, and audience than ever before. When considering the impact of Japanese manga on the youth, graphic storytelling emerges as a powerful player in pop culture. In…
Directory of Open Access Journals (Sweden)
Kateryna P. Osadcha
2017-12-01
Full Text Available The article is devoted to some aspects of the formation of future bachelors' graphic competence in computer sciences while teaching the fundamentals of working with three-dimensional modelling means. The analysis, classification and systematization of three-dimensional modelling means are given. The aim of the research is to investigate the set of instruments and the classification of three-dimensional modelling means, and to correlate the skills being formed with those required at the labour market, in order to use them further in the process of forming graphic competence during the training of future bachelors in computer sciences. The peculiarities of the process of forming future bachelors' graphic competence in computer sciences are outlined by revealing, analyzing and systematizing three-dimensional modelling means and types of three-dimensional graphics at the present stage of the development of information technologies. The result of the research is a choice of software for three-dimensional modelling to be used in the process of training future bachelors in computer sciences.
Regularized estimation of large-scale gene association networks using graphical Gaussian models.
Krämer, Nicole; Schäfer, Juliane; Boulesteix, Anne-Laure
2009-11-24
Graphical Gaussian models are popular tools for the estimation of (undirected) gene association networks from microarray data. A key issue when the number of variables greatly exceeds the number of samples is the estimation of the matrix of partial correlations. Since the (Moore-Penrose) inverse of the sample covariance matrix leads to poor estimates in this scenario, standard methods are inappropriate and adequate regularization techniques are needed. Popular approaches include biased estimates of the covariance matrix and high-dimensional regression schemes, such as the Lasso and Partial Least Squares. In this article, we investigate a general framework for combining regularized regression methods with the estimation of Graphical Gaussian models. This framework includes various existing methods as well as two new approaches based on ridge regression and adaptive lasso, respectively. These methods are extensively compared both qualitatively and quantitatively within a simulation study and through an application to six diverse real data sets. In addition, all proposed algorithms are implemented in the R package "parcor", available from the R repository CRAN. In our simulation studies, the investigated non-sparse regression methods, i.e. Ridge Regression and Partial Least Squares, exhibit rather conservative behavior when combined with (local) false discovery rate multiple testing in order to decide whether or not an edge is present in the network. For networks with higher densities, the difference in performance of the methods decreases. For sparse networks, we confirm the Lasso's well known tendency towards selecting too many edges, whereas the two-stage adaptive Lasso is an interesting alternative that provides sparser solutions. In our simulations, both sparse and non-sparse methods are able to reconstruct networks with cluster structures. On six real data sets, we also clearly distinguish the results obtained using the non-sparse methods and those obtained
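One regularization route mentioned above, a biased (shrinkage) estimate of the covariance matrix, can be sketched on a toy 3-gene example: shrink the covariance towards the identity so it remains well-conditioned, invert it, and read partial correlations off the precision matrix. The shrinkage weight and data are illustrative assumptions; this is not the ridge/adaptive-lasso framework of the paper, for which the authors' R package "parcor" should be used.

```python
def partial_correlations(cov, lam=0.1):
    """Shrink a 3x3 covariance towards the identity, invert it via the
    adjugate, and convert the precision matrix W to partial correlations
    r_pq = -W[p][q] / sqrt(W[p][p] * W[q][q])."""
    m = [[cov[p][q] + (lam if p == q else 0.0) for q in range(3)]
         for p in range(3)]
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    w = [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
         [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
         [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]
    return [[1.0 if p == q else -w[p][q] / (w[p][p] * w[q][q]) ** 0.5
             for q in range(3)] for p in range(3)]

# genes 1 and 2 co-regulated; gene 3 unrelated to both
cov = [[1.0, 0.8, 0.0],
       [0.8, 1.0, 0.0],
       [0.0, 0.0, 1.0]]
pc = partial_correlations(cov)
# pc[0][1] stays strongly positive (an edge); pc[0][2] is zero (no edge)
```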
Regularized estimation of large-scale gene association networks using graphical Gaussian models
Directory of Open Access Journals (Sweden)
Schäfer Juliane
2009-11-01
Full Text Available Abstract Background Graphical Gaussian models are popular tools for the estimation of (undirected) gene association networks from microarray data. A key issue when the number of variables greatly exceeds the number of samples is the estimation of the matrix of partial correlations. Since the (Moore-Penrose) inverse of the sample covariance matrix leads to poor estimates in this scenario, standard methods are inappropriate and adequate regularization techniques are needed. Popular approaches include biased estimates of the covariance matrix and high-dimensional regression schemes, such as the Lasso and Partial Least Squares. Results In this article, we investigate a general framework for combining regularized regression methods with the estimation of Graphical Gaussian models. This framework includes various existing methods as well as two new approaches based on ridge regression and adaptive lasso, respectively. These methods are extensively compared both qualitatively and quantitatively within a simulation study and through an application to six diverse real data sets. In addition, all proposed algorithms are implemented in the R package "parcor", available from the R repository CRAN. Conclusion In our simulation studies, the investigated non-sparse regression methods, i.e. Ridge Regression and Partial Least Squares, exhibit rather conservative behavior when combined with (local) false discovery rate multiple testing in order to decide whether or not an edge is present in the network. For networks with higher densities, the difference in performance of the methods decreases. For sparse networks, we confirm the Lasso's well known tendency towards selecting too many edges, whereas the two-stage adaptive Lasso is an interesting alternative that provides sparser solutions. In our simulations, both sparse and non-sparse methods are able to reconstruct networks with cluster structures. On six real data sets, we also clearly distinguish the results
GRAPHICAL MODELLING OF THE OBJECTS – A BASIC ELEMENT IN TEACHING TECHNICAL DRAWING
Directory of Open Access Journals (Sweden)
CLINCIU Ramona
2015-06-01
Full Text Available The paper presents applications developed using the AutoCAD and 3D Studio MAX programs. The applications aim to develop the spatial abilities of the students and are frequently used in teaching technical drawing, both for understanding the representation of the orthogonal projections of parts and for constructing their axonometric projections.
Directory of Open Access Journals (Sweden)
Antonio Luis Ampliato Briones
2014-10-01
Full Text Available This paper primarily reflects on the need to create graphical codes for producing images intended to communicate architecture. Each step of the drawing needs to be a deliberate process in which the proposed code highlights the relationship between architectural theory and graphic action. Our aim is not to draw the result of the architectural process but the design structure of the actual process; to draw as we design; to draw as we build. This analysis of the work of the Late Gothic architect Hernan Ruiz the Elder, from Cordoba, addresses two aspects: the historical and architectural investigation, and the graphical project for communication purposes.
Directory of Open Access Journals (Sweden)
Jinping Sun
2017-01-01
Full Text Available The multiple hypothesis tracker (MHT) is currently the preferred method for addressing the data association problem in multitarget tracking (MTT) applications. MHT seeks the most likely global hypothesis by enumerating all possible associations over time, which is equivalent to computing the maximum a posteriori (MAP) estimate over the report data. Despite being a well-studied method, MHT remains challenging mostly because of the computational complexity of data association. In this paper, we describe an efficient method for solving the data association problem using graphical model approaches. The proposed method uses the graph representation to model the global hypothesis formation and subsequently applies an efficient message passing algorithm to obtain the MAP solution. Specifically, the graph representation of the data association problem is formulated as a maximum weight independent set problem (MWISP), which translates the best global hypothesis formation into finding the maximum weight independent set on the graph. Then, a max-product belief propagation (MPBP) inference algorithm is applied to seek the most likely global hypotheses, avoiding a brute-force hypothesis enumeration procedure. The simulation results show that the proposed MPBP-MHT method can achieve better tracking performance than other algorithms in challenging tracking situations.
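The maximum weight independent set formulation can be illustrated with a toy solver. The paper uses max-product belief propagation; the greedy selection below is a simple (and possibly suboptimal) stand-in, with invented hypothesis weights and conflicts.

```python
def greedy_mwis(weights, conflicts):
    """Greedy maximum-weight independent set over association hypotheses:
    repeatedly accept the heaviest hypothesis compatible with those
    already chosen. A heuristic stand-in for the paper's max-product
    belief propagation solver; it can be suboptimal."""
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    chosen = []
    for i in order:
        if all((i, j) not in conflicts and (j, i) not in conflicts
               for j in chosen):
            chosen.append(i)
    return sorted(chosen)

# Three invented hypotheses; pairs 0/1 and 1/2 each share a measurement,
# so they conflict and cannot both enter the global hypothesis.
best = greedy_mwis([0.9, 1.0, 0.8], {(0, 1), (1, 2)})
```

On this instance the greedy rule keeps only hypothesis 1, whereas the exact MWIS is {0, 2}, which is precisely why the paper resorts to a principled inference algorithm instead.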
Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS
Wheeler, Mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.
2009-01-01
Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with multiple requests submitted in sequence, to obtain useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT model runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.
Modelling object typicality in description logics
CSIR Research Space (South Africa)
Britz, K
2009-12-01
Full Text Available The authors present a semantic model of typicality of concept members in description logics (DLs) that accords well with a binary, globalist cognitive model of class membership and typicality. The authors define a general preferential semantic...
A focused information criterion for graphical models in fMRI connectivity with high-dimensional data
Pircalabelu, E.; Claeskens, G.; Jahfari, S.; Waldorp, L.J.
2015-01-01
Connectivity in the brain is the most promising approach to explain human behavior. Here we develop a focused information criterion for graphical models to determine brain connectivity tailored to specific research questions. All efforts are concentrated on high-dimensional settings where the number
GRAPHIC ADVERTISING, SPECIALIZED COMMUNICATIONS MODEL THROUGH SYMBOLS, WORDS, IMAGES
Directory of Open Access Journals (Sweden)
ADRONACHI Maria
2011-06-01
Full Text Available The aim of the paper is to identify the graphic advertising components: symbol, text, colour; to illustrate how they cooperate in order to create the advertising message; and to analyze the correlation product – advertising – consumer.
Medical-biological laboratory as an object of modeling
Directory of Open Access Journals (Sweden)
Ольга Валентиновна Игумнова
2011-06-01
Full Text Available Medico-biological laboratories in Russian institutes of higher medical education do not effectively support the educational process. The search for universal criteria and requirements for modeling a virtual medico-biological laboratory is therefore relevant to medical education. The purpose of the article is to develop a conceptual model of a medico-biological experiment and principal approaches to realizing the model in a virtual medico-biological laboratory.
Desautel, Richard
1993-01-01
The objectives of this research include supporting the Aerothermodynamics Branch's research by developing graphical visualization tools for both the branch's adaptive grid code and flow field ray tracing code. The completed research for the reporting period includes development of a graphical user interface (GUI) and its implementation into the NAS Flowfield Analysis Software Tool kit (FAST), for both the adaptive grid code (SAGE) and the flow field ray tracing code (CISS).
Directory of Open Access Journals (Sweden)
Saito Shigeru
2007-01-01
Full Text Available Hepatocellular carcinoma (HCC) in a liver with advanced-stage chronic hepatitis C (CHC) is induced by hepatitis C virus, which chronically infects about 170 million people worldwide. To elucidate the associations between gene groups in hepatocellular carcinogenesis, we analyzed the profiles of the genes characteristically expressed in the CHC and HCC cell stages by a statistical method for inferring the network between gene systems based on the graphical Gaussian model. A systematic evaluation of the inferred network in terms of the biological knowledge revealed that the inferred network was strongly involved in the known gene-gene interactions with high significance, and that the clusters characterized by different cancer-related responses were associated with those of the gene groups related to metabolic pathways and morphological events. Although some relationships in the network remain to be interpreted, the analyses revealed a snapshot of the orchestrated expression of cancer-related groups and some pathways related to metabolism and morphological events in hepatocellular carcinogenesis, and thus provide possible clues on the disease mechanism and insights that address the gap between molecular and clinical assessments.
Analysis of impact of general-purpose graphics processor units in supersonic flow modeling
Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.
2017-06-01
Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of the use of GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared. Performance measurements show that the numerical schemes developed achieve a 20-50× speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
Liu, Fang; Luehr, Nathan; Kulik, Heather J; Martínez, Todd J
2015-07-14
The conductor-like polarization model (C-PCM) with switching/Gaussian smooth discretization is a widely used implicit solvation model in chemical simulations. However, its application in quantum mechanical calculations of large-scale biomolecular systems can be limited by the computational expense of both the gas phase electronic structure and the solvation interaction. We have previously used graphical processing units (GPUs) to accelerate the first of these steps. Here, we extend the use of GPUs to accelerate electronic structure calculations including C-PCM solvation. Implementation on the GPU leads to significant acceleration of the generation of the required integrals for C-PCM. We further propose two strategies to improve the solution of the required linear equations: a dynamic convergence threshold and a randomized block-Jacobi preconditioner. These strategies are not specific to GPUs and are expected to be beneficial for both CPU and GPU implementations. We benchmark the performance of the new implementation using over 20 small proteins in solvent environment. Using a single GPU, our method evaluates the C-PCM related integrals and their derivatives more than 10× faster than a conventional CPU-based implementation. Our improvements to the linear solver provide a further 3× acceleration. The overall calculations including C-PCM solvation require, typically, 20-40% more effort than that for their gas phase counterparts for a moderate basis set and molecule surface discretization level. The relative cost of the C-PCM solvation correction decreases as the basis sets and/or cavity radii increase. Therefore, description of solvation with this model should be routine. We also discuss applications to the study of the conformational landscape of an amyloid fibril.
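The block-Jacobi preconditioning idea can be sketched generically: invert small diagonal blocks of the matrix and use them as an approximate inverse for the linear solve. This is not the paper's randomized GPU implementation; the fixed block partitioning and function name are illustrative.

```python
import numpy as np

def block_jacobi_inverse(A, block_size):
    """Approximate A^-1 by inverting the diagonal blocks of A -- a
    generic sketch of block-Jacobi preconditioning (fixed partitioning;
    the paper's randomized variant is not reproduced here)."""
    n = A.shape[0]
    M_inv = np.zeros_like(A)
    for s in range(0, n, block_size):
        e = min(s + block_size, n)
        M_inv[s:e, s:e] = np.linalg.inv(A[s:e, s:e])
    return M_inv
```

Applying `M_inv` inside an iterative solver clusters the preconditioned system's eigenvalues, which is what reduces the iteration count; when A is exactly block diagonal the preconditioner is an exact inverse.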
DEFF Research Database (Denmark)
Breiting, Søren
2002-01-01
An introduction to 'graphic review' as a method for carrying understanding forward from one teaching session to the next in teacher education and primary school.
Glassner, Andrew S
1993-01-01
"The Graphics Gems Series" was started in 1990 by Andrew Glassner. The vision and purpose of the Series was - and still is - to provide tips, techniques, and algorithms for graphics programmers. All of the gems are written by programmers who work in the field and are motivated by a common desire to share interesting ideas and tools with their colleagues. Each volume provides a new set of innovative solutions to a variety of programming problems.
DEFF Research Database (Denmark)
Bergstrøm-Nielsen, Carl
1992-01-01
Textbook to be used along with training in the practice of graphic notation. Describes method; exercises; bibliography; collection of examples. If you can read Danish, please refer to that edition, which is by far more up to date.
International Nuclear Information System (INIS)
Horikoshi, Toru; Nagaseki, Yoshishige; Omata, Tomohiro; Ohashi, Yasuhiro; Asari, Yasuhiro; Nukui, Hideaki; Hirai, Tatsuo
1997-01-01
To aid in the determination of a tentative target for stereotactic thalamotomy for Parkinson disease, instead of pneumoencephalography, we developed a 3-D graphic system of the thalamus, including the target nuclei. This system is based on the Schaltenbrand and Bailey atlas, and consists of seven coronal contours of the thalamus and substructures. Even though the graph can be magnified or reduced to adjust to the parameters: intercommissural distance and width of the thalamus, there were still significant errors in cases with ventricular dilatation. To correct for these errors we introduced a new variable, the width of the third ventricle, into the calculations. In this report we evaluate the accuracy of the system in three ways. First, each graphic image was compared to the coincident coronal MR images of 13 normal subjects. Second, the graphic images were compared to coincident slices of two cadaver thalami. Furthermore, the location of electro-coagulation scars on horizontal MR images of seven patients who underwent stereotactic thalamotomy without using the system was also compared to retrospectively drawn graphic images. The mean errors of the graphics of normal subjects were significantly reduced in the medial margin, while there was no error reduction in the upper and lateral margins. The contour of the thalamus was obliquely distorted in the cadavers with ventricular dilatation. The operative scars were located at the infero-lateral portion of the VL nucleus adjacent to the Vim nucleus of the graphic images in five patients, and in the neighboring VL nucleus in two. These results suggest that the present graphic system may be a useful tool for determining the target in stereotactic thalamotomy, as well as in gamma thalamotomy, after obtaining the effective correction for the distortion caused by ventricular dilatation. (author)
Image-Based Multiresolution Implicit Object Modeling
Directory of Open Access Journals (Sweden)
Sarti Augusto
2002-01-01
Full Text Available We discuss two image-based 3D modeling methods based on a multiresolution evolution of a volumetric function's level set. In the former method, the role of the level set implosion is to fuse ("sew" and "stitch") several partial reconstructions (depth maps) into a closed model. In the latter, the level set's implosion is steered directly by the texture mismatch between views. Both solutions share the characteristic of operating in an adaptive multiresolution fashion, in order to boost computational efficiency and robustness.
Numerical modeling of the motion of rigid ellipsoidal objects in slow viscous flows: A new approach
Jiang, Dazhi
2007-02-01
A simple algorithm for modeling the rotation of rigid ellipsoidal objects in viscous flows based on Jeffery's (1922, Proceedings of the Royal Society of London A102, 161-179) theory is presented and is implemented in the fully graphical mathematics application Mathcad® (http://www.mathsoft.com). The orientation of ellipsoidal objects is specified in terms of polar coordinate angles that can be easily converted to the trend and plunge angles of the three principal axes rather than the Euler angles. With the Mathcad worksheets presented in the supplementary data associated with this paper, modeling the rotation paths of individual rigid objects, the development of inclusion trail geometry within syn-kinematic porphyroblasts, and the development of preferred orientation and shape fabrics for a population of rigid objects becomes as easy a task as using a spreadsheet. The shape and preferred orientation fabrics for a population of rigid objects can be presented in both a three-dimensional form and a two-dimensional form, allowing easy comparison between field data and model predictions. The modeler can customize the type and format of the output to best fit the purpose of the investigation and to facilitate the comparison of model predictions with geological observations. Application examples are presented for various types of modeling involving rigid objects.
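Jeffery's theory, on which the algorithm is based, can be illustrated by integrating the in-plane orientation equation for a spheroid in simple shear and checking the result against the analytic tumbling period. This 2D sketch is far simpler than the paper's full 3D Mathcad worksheets; function names and the step count are assumptions.

```python
import math

def jeffery_period(r, gamma):
    """Jeffery's analytic tumbling period for a spheroid of aspect
    ratio r rotating in simple shear of rate gamma."""
    return 2 * math.pi * (r + 1 / r) / gamma

def integrate_orientation(phi0, r, gamma, t, steps=200000):
    """Forward-Euler integration of the in-plane Jeffery equation
    dphi/dt = gamma (r^2 cos^2 phi + sin^2 phi) / (r^2 + 1).
    A minimal 2D sketch of the full 3D theory used in the paper."""
    dt = t / steps
    phi = phi0
    for _ in range(steps):
        phi += dt * gamma * (r**2 * math.cos(phi)**2
                             + math.sin(phi)**2) / (r**2 + 1)
    return phi
```

After one analytic period the integrated orientation should return to its start (advanced by a full 2π turn), which is the spreadsheet-style check the worksheets make easy.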
Model attraction in medical image object recognition
Tascini, Guido; Zingaretti, Primo
1995-04-01
This paper presents a new approach to image recognition based on a general attraction principle. Cognitive recognition is governed by a 'focus on attention' process that concentrates only on the subset of visual data of task-relevant type. Our model-based approach combines it with another process, 'focus on attraction', which concentrates on the transformations of visual data having relevance for the matching. The recognition process is characterized by an intentional evolution of the visual data. This chain of image transformations is viewed as driven by an attraction field that attempts to reduce the distance between the image-point and the model-point in the feature space. The field sources are determined during a learning phase, by supplying the system with a training set. The paper describes a medical interpretation case in the feature space, concerning human skin lesions. The samples of the training set, supplied by the dermatologists, allow the system to learn models of lesions in terms of features such as hue factor, asymmetry factor, and asperity factor. The comparison of the visual data with the model derives the trend of image transformations, allowing a better definition of the given image and its classification. The algorithms are implemented in C language on a PC equipped with Matrox Image Series IM-1280 acquisition and processing boards. The work is now in progress.
Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.
2009-01-01
The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.
TU-D-209-03: Alignment of the Patient Graphic Model Using Fluoroscopic Images for Skin Dose Mapping
International Nuclear Information System (INIS)
Oines, A; Oines, A; Kilian-Meneghin, J; Karthikeyan, B; Rudin, S; Bednarek, D
2016-01-01
Purpose: The Dose Tracking System (DTS) was developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. A color map on a 3D graphic of the patient represents the cumulative dose distribution on the skin. Automated image correlation algorithms are described which use the fluoroscopic procedure images to align and scale the patient graphic for more accurate dose mapping. Methods: Currently, the DTS employs manual patient graphic selection and alignment. To improve the accuracy of dose mapping and automate the software, various methods are explored to extract information about the beam location and patient morphology from the procedure images. To match patient anatomy with a reference projection image, preprocessing is first used, including edge enhancement, edge detection, and contour detection. Template matching algorithms from OpenCV are then employed to find the location of the beam. Once a match is found, the reference graphic is scaled and rotated to fit the patient, using image registration correlation functions in Matlab. The algorithm runs correlation functions for all points and maps all correlation confidences to a surface map. The highest point of correlation is used for alignment and scaling. The transformation data is saved for later model scaling. Results: Anatomic recognition is used to find matching features between model and image, and image registration correlation provides for alignment and scaling at any rotation angle with less than one-second runtime, and at noise levels in excess of 150% of those found in normal procedures. Conclusion: The algorithm provides the necessary scaling and alignment tools to improve the accuracy of dose distribution mapping on the patient graphic with the DTS. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
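The template-matching step can be illustrated with a brute-force normalized cross-correlation search. This is a minimal stand-in for OpenCV's matchTemplate, run here on synthetic data rather than fluoroscopic images; the function name is invented.

```python
import numpy as np

def best_match(image, template):
    """Locate a template in an image by exhaustive normalized
    cross-correlation -- a minimal sketch of the template-matching
    step the abstract describes for finding the beam location."""
    H, W = image.shape
    h, w = template.shape
    t = template - template.mean()
    best, pos = -np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = image[y:y + h, x:x + w]
            p = patch - patch.mean()
            denom = np.sqrt((p**2).sum() * (t**2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, pos = score, (y, x)
    return pos
```

For an exact sub-image the correlation peaks at 1.0 at the true offset; real implementations add pyramid search and rotation handling, which the DTS needs for arbitrary gantry angles.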
An object oriented implementation of the Yeadon human inertia model.
Dembia, Christopher; Moore, Jason K; Hubbard, Mont
2014-01-01
We present an open source software implementation of a popular mathematical method developed by M.R. Yeadon for calculating the body and segment inertia parameters of a human body. The software is written in a high level open source language and provides three interfaces for manipulating the data and the model: a Python API, a command-line user interface, and a graphical user interface. Thus the software can fit into various data processing pipelines and requires only simple geometrical measures as input.
Object interaction competence model v. 2.0
DEFF Research Database (Denmark)
Bennedsen, Jens; Schulte, C.
2013-01-01
Teaching and learning object oriented programming has to take into account the specific object oriented characteristics of program execution, namely the interaction of objects during runtime. Prior to the research reported in this article, we have developed a competence model for object interaction...
Javan, Ramin; Zeman, Merissa N
2018-02-01
In the context of medical three-dimensional (3D) printing, in addition to 3D reconstruction from cross-sectional imaging, graphic design plays a role in developing and/or enhancing 3D-printed models. A custom prototype modular 3D model of the liver was graphically designed depicting segmental anatomy of the parenchyma containing color-coded hepatic vasculature and biliary tree. Subsequently, 3D printing was performed using transparent resin for the surface of the liver and polyamide material to develop hollow internal structures that allow for passage of catheters and wires. A number of concepts were incorporated into the model. A representative mass with surrounding feeding arterial supply was embedded to demonstrate tumor embolization. A straight narrow hollow tract connecting the mass to the surface of the liver, displaying the path of a biopsy device's needle, and the concept of needle "throw" length was designed. A connection between the middle hepatic and right portal veins was created to demonstrate transjugular intrahepatic portosystemic shunt (TIPS) placement. A hollow amorphous structure representing an abscess was created to allow the demonstration of drainage catheter placement with the formation of pigtail tip. Percutaneous biliary drain and cholecystostomy tube placement were also represented. The skills of graphic designers may be utilized in creating highly customized 3D-printed models. A model was developed for the demonstration and simulation of multiple hepatobiliary interventions, for training purposes, patient counseling and consenting, and as a prototype for future development of a functioning interventional phantom.
Probabilistic graphical models to deal with age estimation of living persons.
Sironi, Emanuele; Gallidabino, Matteo; Weyermann, Céline; Taroni, Franco
2016-03-01
Due to the rise of criminal, civil and administrative judicial situations involving people lacking valid identity documents, age estimation of living persons has become an important operational procedure for numerous forensic and medicolegal services worldwide. The chronological age of a given person is generally estimated from the observed degree of maturity of some selected physical attributes by means of statistical methods. However, their application in the forensic framework suffers from some conceptual and practical drawbacks, as recently claimed in the specialised literature. The aim of this paper is therefore to offer an alternative solution for overcoming these limits, by reiterating the utility of a probabilistic Bayesian approach for age estimation. This approach allows one to deal in a transparent way with the uncertainty surrounding the age estimation process and to produce all the relevant information in the form of posterior probability distribution about the chronological age of the person under investigation. Furthermore, this probability distribution can also be used for evaluating in a coherent way the possibility that the examined individual is younger or older than a given legal age threshold having a particular legal interest. The main novelty introduced by this work is the development of a probabilistic graphical model, i.e. a Bayesian network, for dealing with the problem at hand. The use of this kind of probabilistic tool can significantly facilitate the application of the proposed methodology: examples are presented based on data related to the ossification status of the medial clavicular epiphysis. The reliability and the advantages of this probabilistic tool are presented and discussed.
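The Bayesian reasoning the paper advocates can be illustrated with a minimal discrete update: combine a prior over age classes with the probability of the observed maturity stage, then query a legal age threshold. All numbers below are invented for illustration, not taken from the clavicular-epiphysis data.

```python
# Posterior over age classes given an observed maturity stage,
# via a direct application of Bayes' rule. The stage-given-age
# probabilities are illustrative placeholders.
ages = [16, 17, 18, 19, 20]
prior = {a: 0.2 for a in ages}  # uniform prior over age classes
p_stage_given_age = {16: 0.05, 17: 0.15, 18: 0.45, 19: 0.70, 20: 0.90}

unnorm = {a: prior[a] * p_stage_given_age[a] for a in ages}
z = sum(unnorm.values())
posterior = {a: w / z for a, w in unnorm.items()}

# Probability the person is at least 18 -- the legal-threshold query
# the paper evaluates from the posterior distribution.
p_adult = sum(p for a, p in posterior.items() if a >= 18)
```

A full Bayesian network would chain several such conditional tables (one per physical attribute), but each threshold query reduces to exactly this kind of sum over the posterior.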
The effectiveness of an interactive 3-dimensional computer graphics model for medical education.
Battulga, Bayanmunkh; Konishi, Takeshi; Tamura, Yoko; Moriguchi, Hiroki
2012-07-09
Medical students often have difficulty achieving a conceptual understanding of 3-dimensional (3D) anatomy, such as bone alignment, muscles, and complex movements, from 2-dimensional (2D) images. To this end, animated and interactive 3-dimensional computer graphics (3DCG) can provide better visual information to users. In medical fields, research on the advantages of 3DCG in medical education is relatively new. To determine the educational effectiveness of interactive 3DCG. We divided 100 participants (27 men, mean (SD) age 17.9 (0.6) years, and 73 women, mean (SD) age 18.1 (1.1) years) from the Health Sciences University of Mongolia (HSUM) into 3DCG (n = 50) and textbook-only (control) (n = 50) groups. The control group used a textbook and 2D images, while the 3DCG group was trained to use the interactive 3DCG shoulder model in addition to a textbook. We conducted a questionnaire survey via an encrypted satellite network between HSUM and Tokushima University. The questionnaire was scored on a 5-point Likert scale from strongly disagree (score 1) to strongly agree (score 5). Interactive 3DCG was effective in undergraduate medical education. Specifically, there was a significant difference in mean (SD) scores between the 3DCG and control groups in their response to questionnaire items regarding content (4.26 (0.69) vs 3.85 (0.68), P = .001) and teaching methods (4.33 (0.65) vs 3.74 (0.79), P < .001), but no significant difference in the Web category. Participants also provided meaningful comments on the advantages of interactive 3DCG. Interactive 3DCG materials have positive effects on medical education when properly integrated into conventional education. In particular, our results suggest that interactive 3DCG is more efficient than textbooks alone in medical education and can motivate students to understand complex anatomical structures.
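The reported group comparison for the content score can be re-derived as a Welch t statistic from the published means, SDs, and group sizes. This is a plausibility check only, since the abstract does not state which test produced P = .001.

```python
import math

# Content-score summary statistics as reported in the abstract:
# 3DCG group 4.26 (SD 0.69), control 3.85 (SD 0.68), n = 50 each.
m1, s1, n1 = 4.26, 0.69, 50
m2, s2, n2 = 3.85, 0.68, 50

# Welch t statistic (unequal-variance two-sample comparison).
t = (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)
```

The statistic comes out near 3, which with roughly 98 degrees of freedom is comfortably significant and consistent with the reported P value's order of magnitude.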
A tri-objective, dynamic weapon assignment model for surface ...
African Journals Online (AJOL)
In this paper, a tri-objective, dynamic weapon assignment model is proposed by modelling the weapon assignment problem as a multi-objective variation of the celebrated vehicle routing problem with time windows. A multi-objective, evolutionary metaheuristic for solving the vehicle routing problem with time windows is ...
Model-Based Multi-Objective Reinforcement Learning
Wiering, Marco; Withagen, Maikel; Drugan, Madalina
2014-01-01
This paper describes a novel multi-objective reinforcement learning algorithm. The proposed algorithm first learns a model of the multi-objective sequential decision making problem, after which this learned model is used by a multi-objective dynamic programming method to compute Pareto optimal
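The Pareto-optimality notion underlying the dynamic programming step can be made concrete with a small helper that filters dominated value vectors (maximization convention; the data are illustrative, not from the paper).

```python
def dominates(q, p):
    """q dominates p (maximization): at least as good on every
    objective and strictly better on at least one."""
    return (all(a >= b for a, b in zip(q, p))
            and any(a > b for a, b in zip(q, p)))

def pareto_front(points):
    """Keep only the non-dominated value vectors -- the Pareto set
    a multi-objective dynamic programming method enumerates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (1, 1) is dominated by (2, 2); the other three trade off objectives.
front = pareto_front([(1, 3), (2, 2), (3, 1), (1, 1)])
```

In model-based multi-objective RL each state's set of achievable value vectors is pruned to such a front at every backup, instead of keeping a single scalar value.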
Kirk, David
1994-01-01
This sequel to Graphics Gems (Academic Press, 1990), and Graphics Gems II (Academic Press, 1991) is a practical collection of computer graphics programming tools and techniques. Graphics Gems III contains a larger percentage of gems related to modeling and rendering, particularly lighting and shading. This new edition also covers image processing, numerical and programming techniques, modeling and transformations, 2D and 3D geometry and algorithms, ray tracing and radiosity, rendering, and more clever new tools and tricks for graphics programming. Volume III also includes a
Julianto, E. A.; Suntoro, W. A.; Dewi, W. S.; Partoyo
2018-03-01
Climate change has been reported to exacerbate land resources degradation including soil fertility decline. The appropriate validity use on soil fertility evaluation could reduce the risk of climate change effect on plant cultivation. This study aims to assess the validity of a Soil Fertility Evaluation Model using a graphical approach. The models evaluated were the Indonesian Soil Research Center (PPT) version model, the FAO Unesco version model, and the Kyuma version model. Each model was then correlated with rice production (dry grain weight/GKP). The goodness of fit of each model can be tested to evaluate the quality and validity of a model, as well as the regression coefficient (R2). This research used the Eviews 9 programme by a graphical approach. The results obtained three curves, namely actual, fitted, and residual curves. If the actual and fitted curves are widely apart or irregular, this means that the quality of the model is not good, or there are many other factors that are still not included in the model (large residual) and conversely. Indeed, if the actual and fitted curves show exactly the same shape, it means that all factors have already been included in the model. Modification of the standard soil fertility evaluation models can improve the quality and validity of a model.
Improved parameter estimation for hydrological models using weighted object functions
Stein, A.; Zaadnoordijk, W.J.
1999-01-01
This paper discusses the sensitivity of calibration of hydrological model parameters to different objective functions. Several functions are defined with weights depending upon the hydrological background. These are compared with an objective function based upon kriging. Calibration is applied to
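A weighted objective function of the kind discussed can be sketched as a weighted sum of squared errors, where the weights would encode the hydrological background; the function form and weights here are illustrative, not the paper's definitions.

```python
def weighted_objective(simulated, observed, weights):
    """Weighted sum of squared errors: each residual contributes in
    proportion to its weight, so hydrologically important observations
    can dominate the calibration. Illustrative, not the paper's form."""
    return sum(w * (s - o) ** 2
               for s, o, w in zip(simulated, observed, weights))

# Down-weighting the second observation halves its influence.
value = weighted_objective([1.0, 2.0], [1.0, 3.0], [1.0, 0.5])
```

Calibration then searches the parameter space for the simulation that minimizes this value; changing the weights changes which parameter set wins, which is exactly the sensitivity the paper examines.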
Field Model: An Object-Oriented Data Model for Fields
Moran, Patrick J.
2001-01-01
We present an extensible, object-oriented data model designed for field data entitled Field Model (FM). FM objects can represent a wide variety of fields, including fields of arbitrary dimension and node type. FM can also handle time-series data. FM achieves generality through carefully selected topological primitives and through an implementation that leverages the potential of templated C++. FM supports fields where the node values are paired with any cell type. Thus FM can represent data where the field nodes are paired with the vertices ("vertex-centered" data), fields where the nodes are paired with the D-dimensional cells in R(sup D) (often called "cell-centered" data), as well as fields where nodes are paired with edges or other cell types. FM is designed to effectively handle very large data sets; in particular FM employs a demand-driven evaluation strategy that works especially well with large field data. Finally, the interfaces developed for FM have the potential to effectively abstract field data based on adaptive meshes. We present initial results with a triangular adaptive grid in R(sup 2) and discuss how the same design abstractions would work equally well with other adaptive-grid variations, including meshes in R(sup 3).
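FM's demand-driven evaluation strategy can be illustrated with a toy lazy field that computes node values only when sampled and caches the result; the class and method names are invented for illustration, not FM's templated C++ API.

```python
class LazyField:
    """Toy demand-driven field: a node's value is computed on first
    access and cached, so untouched regions of a large field cost
    nothing. Illustrative only -- not FM's actual interface."""

    def __init__(self, evaluate):
        self._evaluate = evaluate  # node -> value function
        self._cache = {}

    def value(self, node):
        if node not in self._cache:
            self._cache[node] = self._evaluate(node)
        return self._cache[node]

calls = []
field = LazyField(lambda node: calls.append(node) or node * 2.0)
first, second = field.value(3), field.value(3)
```

Only the first access triggers evaluation; this is the property that makes demand-driven designs attractive for very large or adaptively refined meshes.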
Graphic displays on PCs of gaseous diffusion models of radionuclide releases to the atmosphere
International Nuclear Information System (INIS)
Campo Ortega, E. del
1993-01-01
The well-known MESOI program has been modified and improved to adapt it to a PC/AT with VGA colour monitor. Far from losing any of its powerful characteristics to calculate the transport, diffusion, deposition and decay of gaseous radioactive effluents discharged to the atmosphere, it has been enhanced to allow graphic viewing of concentrations, wind speed and direction and puff locations in colour, all on a background map of the site. The background covers a 75 x 75 km square and has a graphic grid density of 421 x 421 pixels. This means that effluent concentration is represented approximately every 170 metres in the cloud-covered area. Among the modifications and enhancements made, the following are of particular interest: 1. A new subroutine called NUBE has been added, which calculates the distribution of the activity concentration of the effluent in a grid of 421 x 421 pixels. 2. Several subroutines have been added to obtain graphic displays and printouts of the cloud, wind field and puff locations. 3. Graphic display of the geographic plane of the area surrounding the effluent release point. 4. Off-line preparation of meteorological and topographical data files necessary for program execution. (author)
Directory of Open Access Journals (Sweden)
A. V. Krasnyuk
2008-03-01
Full Text Available The three-dimensional design possibilities of the AutoCAD system for performing graphic tasks are presented in the article. On the basis of the studies conducted, the features of applying a computer-aided design system are noted, and methods are offered that considerably decrease the number of errors made in drawings.
Directory of Open Access Journals (Sweden)
Brook Weld Muller
2014-12-01
Full Text Available This essay describes strategic approaches to graphic representation that are associated with critical environmental engagement and that build on the idea of works of architecture as stitches in the ecological fabric of the city. It focuses on the building up of partial or fragmented graphics in order to describe inclusive, open-ended possibilities for making architecture that marries rich experience and responsive performance. An aphoristic approach to crafting drawings involves complex layering, conscious absence and the embracing of tension. A self-critical attitude toward the generation of imagery characterized by the notion of ‘loose precision’ may lead to more transformative and environmentally responsive architectures.
2012-01-01
English Graphic is a book of essays on the subject of illustration, with the focus entirely on English artists using graphic media: drawings, prints and watercolours. As editor, I built on a schedule Tom drew up. It contains essays drawn from a variety of sources: the Great Works column, reviews, catalogue essays, and previously unpublished material. The historical span of the book is broad – from the Winchester Psalter Hellmouth to Harry Beck’s London Underground Map and Dom Sylvester Houéda...
Mai, Juliane; Cuntz, Matthias; Shafii, Mahyar; Zink, Matthias; Schäfer, David; Thober, Stephan; Samaniego, Luis; Tolson, Bryan
2016-04-01
Hydrologic models are traditionally calibrated against observed streamflow. Recent studies have shown, however, that only a few global model parameters are constrained using this kind of integral signal. They can be identified using prior screening techniques. Since different objectives might constrain different parameters, it is advisable to use multiple sources of information to calibrate those models. One common approach is to combine these multiple objectives (MO) into one single objective (SO) function and allow the use of an SO optimization algorithm. Another strategy is to consider the different objectives separately and apply a MO Pareto optimization algorithm. In this study, two major research questions will be addressed: 1) How do multi-objective calibrations compare with corresponding single-objective calibrations? 2) How much do calibration results deteriorate when the number of calibrated parameters is reduced by a prior screening technique? The hydrologic model employed in this study is a distributed hydrologic model (mHM) with 52 model parameters, i.e. transfer coefficients. The model uses grid cells as a primary hydrologic unit, and accounts for processes like snow accumulation and melting, soil moisture dynamics, infiltration, surface runoff, evapotranspiration, subsurface storage and discharge generation. The model is applied in three distinct catchments over Europe. The SO calibrations are performed using the Dynamically Dimensioned Search (DDS) algorithm with a fixed budget while the MO calibrations are achieved using the Pareto Dynamically Dimensioned Search (PA-DDS) algorithm allowing for the same budget. The two objectives used here are the Nash Sutcliffe Efficiency (NSE) of the simulated streamflow and the NSE of the logarithmic transformation. It is shown that the SO DDS results are located close to the edges of the Pareto fronts of the PA-DDS. The MO calibrations are hence preferable due to their supply of multiple equivalent solutions from which the
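The two objectives used in the study can be sketched as follows; the streamflow values are invented, and the small eps guard in the log transform is an implementation choice, not a detail taken from the paper.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def log_nse(obs, sim, eps=1e-6):
    """NSE of log-transformed flows; emphasizes low-flow periods."""
    return nse(np.log(np.asarray(obs, float) + eps),
               np.log(np.asarray(sim, float) + eps))

# Hypothetical daily streamflow (m^3/s): the two objectives can rank
# competing simulations differently, which motivates MO calibration.
obs = [10.0, 8.0, 5.0, 1.0, 0.5, 0.4, 2.0, 6.0]
sim = [9.0, 8.5, 5.5, 1.5, 0.9, 0.8, 2.5, 5.5]
print(round(nse(obs, sim), 3), round(log_nse(obs, sim), 3))
```

Because plain NSE is dominated by peak flows while log-NSE weights the recession limbs, a simulation can score well on one and poorly on the other, yielding a genuine Pareto trade-off.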
Ortiz, Luis E.
2015-01-01
Potential games, originally introduced in the early 1990's by Lloyd Shapley, the 2012 Nobel Laureate in Economics, and his colleague Dov Monderer, are a very important class of models in game theory. They have special properties such as the existence of Nash equilibria in pure strategies. This note introduces graphical versions of potential games. Special cases of graphical potential games have already found applicability in many areas of science and engineering beyond economics, including ar...
Implementation of object-oriented GIS data model with topological relations between spatial objects
Chen, Youliang; Wang, Zhaoru; Chen, Zhicheng
2013-03-01
Traditional GIS (Geographical Information System) data models focus on describing the data organizational structure and constraints from a single aspect, and capture little of the hierarchy and connotation of data objects. These data models do not match humans' natural conception of geographical spatial data, and they fail to fully consider spatial relationships and the operations defined on the relationships between geographic objects. Object-oriented technology mimics the human way of thinking as much as possible; it emerged to overcome the shortcomings of traditional software design methods and to improve the stability and reusability of software systems. Based on existing GIS data models, this paper introduces the basic idea of an object-oriented GIS data model, combining object-oriented methods with the vectorial expression of spatial entities, and describes the definition and implementation of the object-oriented GIS data model. Furthermore, given the importance of topological relationships among spatial relationships, topological operations between spatial objects are defined in the paper. The topological operations are defined directly in geometry classes as methods; they include Touch, Disjoint, Cross, Contains, Overlaps, Intersects, and so on.
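The named predicates can be illustrated for the simplest case, axis-aligned rectangles; a real GIS geometry class would implement the same method names for arbitrary geometries, so this is only a minimal sketch.

```python
# Topological predicates for axis-aligned rectangles given as
# (xmin, ymin, xmax, ymax) tuples.

def disjoint(a, b):
    return a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1]

def intersects(a, b):
    return not disjoint(a, b)

def interiors_intersect(a, b):
    return a[2] > b[0] and b[2] > a[0] and a[3] > b[1] and b[3] > a[1]

def touch(a, b):
    # Boundaries meet but interiors do not overlap.
    return intersects(a, b) and not interiors_intersect(a, b)

def contains(a, b):
    return (a[0] <= b[0] and a[1] <= b[1] and
            a[2] >= b[2] and a[3] >= b[3])

def overlaps(a, b):
    # Interiors intersect but neither rectangle contains the other.
    return (interiors_intersect(a, b)
            and not contains(a, b) and not contains(b, a))

a = (0, 0, 4, 4)
b = (4, 0, 6, 4)   # shares the edge x = 4 with a
c = (2, 2, 6, 6)   # partially overlaps a
print(touch(a, b), overlaps(a, c), contains(a, (1, 1, 2, 2)))
# → True True True
```

Defining the predicates as methods on geometry classes, as the paper does, keeps each cell type responsible for its own relationship tests.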
Lu, T W; O'Connor, J J
1996-01-01
A computer graphics-based model of the knee ligaments in the sagittal plane was developed for the simulation and visualization of the shape changes and fibre recruitment process of the ligaments during motion under unloaded and loaded conditions. The cruciate and collateral ligaments were modelled as ordered arrays of fibres which link attachment areas on the tibia and femur. Fibres slacken and tighten as the ligament attachment areas on the bones rotate and translate relative to each other. A four-bar linkage, composed of the femur, tibia and selected isometric fibres of the two cruciates, was used to determine the motion of the femur relative to the tibia during passive (unloaded) movement. Fibres were assumed to slacken in a Euler buckling mode when the distances between their attachments are less than chosen reference lengths. The ligament shape changes and buckling patterns are demonstrated with computer graphics. When the tibia is translated anteriorly or posteriorly relative to the femur by muscle forces and external loads, some ligament fibres tighten and are recruited progressively to transmit increasing shear forces. The shape changes and fibre recruitment patterns predicted by the model compare well qualitatively with experimental results reported in the literature. The computer graphics approach provides insight into the micro behaviour of the knee ligaments. It may help to explain ligament injury mechanisms and provide useful information to guide the design of ligament replacements.
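The slack/taut fibre rule can be sketched as follows; the attachment coordinates, reference lengths, and stiffness are invented for illustration and are not the paper's values.

```python
import math

# A ligament as an ordered array of fibres: a fibre carries tension
# only when the distance between its attachments exceeds its reference
# length (slack fibres are assumed to buckle and carry no load).

def fibre_force(tibia_pt, femur_pt, ref_len, k=100.0):
    d = math.dist(tibia_pt, femur_pt)
    return k * (d - ref_len) if d > ref_len else 0.0  # slack -> no load

# Each fibre: (tibial attachment, femoral attachment, reference length)
fibres = [((0.0, 0.0), (-0.5, 3.0), 3.0),
          ((1.0, 0.0), (0.5, 3.0), 3.1),
          ((2.0, 0.0), (1.5, 3.0), 3.2)]

def total_force(shift):
    # Anterior translation of the tibia lengthens the fibre spans and
    # progressively recruits slack fibres into load-bearing.
    return sum(fibre_force((x + shift, y), femur, ref)
               for (x, y), femur, ref in fibres)

print(total_force(0.0) < total_force(0.5) < total_force(1.0))  # → True
```

The monotone rise in total force as the tibia translates mirrors the progressive fibre recruitment the model demonstrates graphically.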
Perception in statistical graphics
VanderPlas, Susan Ruth
There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.
Proposal of fuzzy object oriented model in extended JAVA
Pereira, Wilmer
2006-01-01
Imperfections in knowledge should be considered when modeling complex problems. One solution is to develop a model that reduces the complexity; another option is to represent the imperfections: uncertainty, vagueness and incompleteness in the knowledge base. This paper proposes to extend the classical object oriented architecture in order to allow modeling of problems with intrinsic imperfections. The aim is to use the JAVA object oriented architecture to carry out this objective. In conseq...
HEP graphics and visualization
AUTHOR|(CDS)2067145; CERN. Geneva
1992-03-24
The lectures will give an overview of the use of graphics in high-energy physics, i.e. for detector design, event representation and interactive analysis in 2D and 3D. An introduction to graphics packages (GKS, PHIGS, etc.) will be given, including discussion of the basic concepts of graphics programming. Emphasis is put on new ideas about graphical representation of events. Non-linear visualisation techniques, to improve the ease of understanding, will be described in detail. Physiological aspects, which play a role when using colours and when drawing mathematical objects like points and lines, are discussed. An analysis will be made of the power of graphics to represent very complex data in 2 and 3 dimensions, and the advantages of different representations will be compared.New techniques based on graphics are emerging today, such as multimedia or real-life pictures. Some are used in other domains of scientific research, as will be described and an overview of possible applications in our field will be give...
DEFF Research Database (Denmark)
Bergstrøm-Nielsen, Carl
2010-01-01
Graphic notation is taught to music therapy students at Aalborg University in both simple and elaborate forms. This is a method of depicting music visually, and notations may serve as memory aids, as aids for analysis and reflection, and for communication purposes such as supervision or within...
Viewpoints: a framework for object oriented database modelling and distribution
Directory of Open Access Journals (Sweden)
Fouzia Benchikha
2006-01-01
Full Text Available The viewpoint concept has received widespread attention recently. Its integration into a data model improves the flexibility of the conventional object-oriented data model and allows one to improve the modelling power of objects. The viewpoint paradigm can be used as a means of providing multiple descriptions of an object and as a means of mastering the complexity of current database systems enabling them to be developed in a distributed manner. The contribution of this paper is twofold: to define an object data model integrating viewpoints in databases and to present a federated database system integrating multiple sources following a local-as-extended-view approach.
Daston, Lorraine
2010-01-01
Objectivity has a history, and it is full of surprises. In Objectivity, Lorraine Daston and Peter Galison chart the emergence of objectivity in the mid-nineteenth-century sciences--and show how the concept differs from its alternatives, truth-to-nature and trained judgment. This is a story of lofty epistemic ideals fused with workaday practices in the making of scientific images. From the eighteenth through the early twenty-first centuries, the images that reveal the deepest commitments of the empirical sciences--from anatomy to crystallography--are those featured in scientific atlases, the compendia that teach practitioners what is worth looking at and how to look at it. Galison and Daston use atlas images to uncover a hidden history of scientific objectivity and its rivals. Whether an atlas maker idealizes an image to capture the essentials in the name of truth-to-nature or refuses to erase even the most incidental detail in the name of objectivity or highlights patterns in the name of trained judgment is a...
Grechkin, Maxim; Fazel, Maryam; Witten, Daniela; Lee, Su-In
2014-01-01
Graphical models provide a rich framework for summarizing the dependencies among variables. The graphical lasso approach attempts to learn the structure of a Gaussian graphical model (GGM) by maximizing the log likelihood of the data, subject to an l1 penalty on the elements of the inverse covariance matrix. Most algorithms for solving the graphical lasso problem do not scale to a very large number of variables. Furthermore, the learned network structure is hard to interpret. To overcome these challenges, we propose a novel GGM structure learning method that exploits the fact that for many real-world problems we have prior knowledge that certain edges are unlikely to be present. For example, in gene regulatory networks, a pair of genes that does not participate together in any of the cellular processes, typically referred to as pathways, is less likely to be connected. In computer vision applications in which each variable corresponds to a pixel, each variable is likely to be connected to the nearby variables. In this paper, we propose the pathway graphical lasso, which learns the structure of a GGM subject to pathway-based constraints. In order to solve this problem, we decompose the network into smaller parts, and use a message-passing algorithm in order to communicate among the subnetworks. Our algorithm has orders of magnitude improvement in run time compared to the state-of-the-art optimization methods for the graphical lasso problem that were modified to handle pathway-based constraints. PMID:26167394
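The penalized likelihood that the graphical lasso optimizes can be written out concretely. The sketch below (plain NumPy, synthetic data) only evaluates the objective rather than solving it, and it is not the paper's pathway-constrained algorithm.

```python
import numpy as np

# Graphical lasso objective: estimate a sparse precision (inverse
# covariance) matrix Theta by minimizing, over positive-definite Theta,
#   -log det(Theta) + trace(S @ Theta) + alpha * ||Theta||_1(off-diag)
# where S is the empirical covariance matrix.

def graphical_lasso_objective(theta, S, alpha):
    sign, logdet = np.linalg.slogdet(theta)
    assert sign > 0, "theta must be positive definite"
    off_diag_l1 = np.abs(theta).sum() - np.abs(np.diag(theta)).sum()
    return -logdet + np.trace(S @ theta) + alpha * off_diag_l1

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
S = np.cov(X, rowvar=False)

dense = np.linalg.inv(S)   # unpenalized maximum-likelihood estimate
# With alpha = 0 the dense MLE is optimal; with alpha > 0 its nonzero
# off-diagonal entries are penalized, favoring sparser candidates.
print(graphical_lasso_objective(dense, S, 0.0) <=
      graphical_lasso_objective(np.eye(4), S, 0.0))   # → True
print(graphical_lasso_objective(dense, S, 1.0) >
      graphical_lasso_objective(dense, S, 0.0))       # → True
```

A pathway constraint, as in the paper, amounts to forcing the disallowed off-diagonal entries of Theta to zero before this minimization, which is what lets the problem decompose into subnetworks.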
A General Polygon-based Deformable Model for Object Recognition
DEFF Research Database (Denmark)
Jensen, Rune Fisker; Carstensen, Jens Michael
1999-01-01
We propose a general scheme for object localization and recognition based on a deformable model. The model combines shape and image properties by warping an arbitrary prototype intensity template according to the deformation in shape. The shape deformations are constrained by a probabilistic distribution, which, combined with a match of the warped intensity template and the image, forms the final criterion used for localization and recognition of a given object. The chosen representation gives the model the ability to model an almost arbitrary object. Besides the actual model, a full general scheme...
Mathematical modeling of potentially hazardous nuclear objects with time shifts
International Nuclear Information System (INIS)
Gharakhanlou, J.; Kazachkov, I.V.
2012-01-01
Aggregate models of potentially hazardous objects with time shifts are used for mathematical modeling and computer simulation. The effects of time delays on time forecasts are analyzed. The influence of shift arguments on the nonlinear differential equations is discussed. Computer simulation has established the behavior of potentially hazardous nuclear objects.
Object Oriented Toolbox for Modelling and Simulation of Dynamic Systems
DEFF Research Database (Denmark)
Thomsen, Per Grove; Poulsen, Mikael Zebbelin; Wagner, Falko Jens
1999-01-01
Design and implementation of a simulation toolbox based on object-oriented modelling techniques. Experimental implementation in C++ using the Godess ODE-solution platform.
Protein Nano-Object Integrator (ProNOI) for generating atomic style objects for molecular modeling
Directory of Open Access Journals (Sweden)
Smith Nicholas
2012-12-01
Full Text Available Abstract Background With the progress of nanotechnology, one frequently has to model biological macromolecules together with nano-objects. However, the atomic structures of the nano-objects are typically not available, or the objects are solid-state entities. Because of that, researchers have to investigate such nano systems by generating models of the nano-objects in a manner that existing software can carry out the simulations. In addition, the software should allow generating composite objects with complex shapes by combining basic geometrical figures and embedding biological macromolecules within the system. Results Here we report the Protein Nano-Object Integrator (ProNOI), which allows for generating atomic-style geometrical objects with user-desired shapes and dimensions. An unlimited number of objects can be created and combined with biological macromolecules in a Protein Data Bank (PDB) format file. Once the objects are generated, users can use sliders to manipulate their shape, dimensions and absolute position. In addition, the software offers the option to charge the objects with either a specified surface or volumetric charge density and to model them with user-desired dielectric constants. According to the user's preference, the biological macromolecule atoms can be assigned charges and radii according to four different force fields: Amber, Charmm, OPLS and PARSE. The biological macromolecules and the atomic-style objects are exported as a position, charge and radius (PQR) file, or, if a default dielectric constant distribution is not selected, as a position, charge, radius and epsilon (PQRE) file. As an illustration of the capabilities of ProNOI, we created a composite object in the shape of a robot, aptly named the Clemson Robot, whose parts are charged with various volumetric charge densities and which holds the barnase-barstar protein complex in its hand. Conclusions The Protein Nano-Object Integrator (ProNOI) is a convenient tool for
Chen, Yuanhao; Zhu, Long Leo; Yuille, Alan; Zhang, Hongjiang
2009-10-01
We present a method to learn probabilistic object models (POMs) with minimal supervision, which exploit different visual cues and perform tasks such as classification, segmentation, and recognition. We formulate this as a structure induction and learning task and our strategy is to learn and combine elementary POMs that make use of complementary image cues. We describe a novel structure induction procedure, which uses knowledge propagation to enable POMs to provide information to other POMs and "teach them" (which greatly reduces the amount of supervision required for training and speeds up the inference). In particular, we learn a POM-IP defined on Interest Points using weak supervision [1], [2] and use this to train a POM-mask, defined on regional features, which yields a combined POM that performs segmentation/localization. This combined model can be used to train POM-edgelets, defined on edgelets, which gives a full POM with improved performance on classification. We give detailed experimental analysis on large data sets for classification and segmentation with comparison to other methods. Inference takes five seconds while learning takes approximately four hours. In addition, we show that the full POM is invariant to scale and rotation of the object (for learning and inference) and can learn hybrid object classes (i.e., when there are several objects and the identity of the object in each image is unknown). Finally, we show that POMs can be used to match between different objects of the same category, and hence, enable object recognition.
Conceptual Modeling of Events as Information Objects and Change Agents
DEFF Research Database (Denmark)
Bækgaard, Lars
Traditionally, semantic data models have not supported the modeling of behavior. We present an event modeling approach that can be used to extend semantic data models like the entity-relationship model and the functional data model. We model an event as a two-sided phenomenon that is seen as a totality of an information object and a change agent. When an event is modeled as an information object, it is comparable to an entity that exists only at a specific point in time; it has attributes and can be used for querying and specification of constraints. When an event is modeled as a change agent, it is comparable to an executable transaction schema. Finally, we briefly compare our approach to object-oriented approaches based on encapsulated objects.
Topological models and frameworks for 3D spatial objects
Zlatanova, Siyka; Rahman, Alias Abdul; Shi, Wenzhong
2004-05-01
Topology is one of the mechanisms to describe relationships between spatial objects. Thus, it is the basis for many spatial operations. Models utilizing the topological properties of spatial objects are usually called topological models, and are considered by many researchers as the best suited for complex spatial analysis (i.e., the shortest path search). A number of topological models for two-dimensional and 2.5D spatial objects have been implemented (or are under consideration) by GIS and DBMS vendors. However, when we move to one more dimension (i.e., three-dimensions), the complexity of the relationships increases, and this requires new approaches, rules and representations. This paper aims to give an overview of the 3D topological models presented in the literature, and to discuss generic issues related to 3D modeling. The paper also considers models in object-oriented (OO) environments. Finally, future trends for research and development in this area are highlighted.
A Deep-Structured Conditional Random Field Model for Object Silhouette Tracking.
Directory of Open Access Journals (Sweden)
Mohammad Javad Shafiee
Full Text Available In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that can change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering.
Scattering center models of backscattering waves by dielectric spheroid objects.
Guo, Kun-Yi; Han, Xiao-Zhe; Sheng, Xin-Qing
2018-02-19
Scattering center models provide a simple and effective way of describing the complex electromagnetic scattering phenomena of targets and have been successfully applied in radar applications. However, the existing models are limited to conducting objects. Numerical results show that scattering centers of dielectric objects are far more complex than conducting objects and most of them are distributed beyond the object. For the lossless and low-loss media, the major scattering contributions to total fields are surface waves and multiple internal reflections rather than the direct reflection. Concise scattering center models for backscattering from dielectric spheroid objects are proposed in this work, which can characterize the backscattered waves by scattering centers with sparse and physical parameters. Good agreement has been demonstrated between the high resolution range profiles simulated by this model with those obtained by Mie series and the full wave numerical method.
A Bayesian alternative for multi-objective ecohydrological model specification
Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori
2018-01-01
Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multiobjective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior
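The idea that prior distributions on error parameters act like the ad-hoc weights of classical multi-objective calibration can be sketched with a simple two-likelihood model; all numbers and the Gaussian error assumption are illustrative, not the study's HYMOD/BGM setup.

```python
import math

# A joint Gaussian log likelihood for two observation types,
# streamflow Q and LAI, each with its own error standard deviation.
# Priors on sigma_q and sigma_lai then play the role of the ad-hoc
# weights used in classical multi-objective calibration.

def gauss_loglik(obs, sim, sigma):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (o - s) ** 2 / (2 * sigma ** 2)
               for o, s in zip(obs, sim))

def log_posterior(q_obs, q_sim, lai_obs, lai_sim,
                  sigma_q, sigma_lai, log_prior):
    return (gauss_loglik(q_obs, q_sim, sigma_q)
            + gauss_loglik(lai_obs, lai_sim, sigma_lai)
            + log_prior(sigma_q, sigma_lai))

q_obs, q_sim = [5.0, 4.0, 3.0], [5.5, 4.5, 3.5]

# The log-likelihood drop caused by a fixed misfit grows as sigma_q
# shrinks, i.e. a tight prior on sigma_q up-weights the streamflow
# objective relative to LAI:
def drop(sigma):
    return gauss_loglik(q_obs, q_obs, sigma) - gauss_loglik(q_obs, q_sim, sigma)

print(drop(0.5) > drop(2.0))  # → True
```

An MCMC sampler over the model and error parameters then explores this posterior, replacing the explicit Pareto search with probabilistically weighted objectives.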
Modelling of elastic heat conductors via objective rate equations
Morro, Angelo
2018-01-01
A thermoelastic solid is modelled by letting the heat flux be given by a rate equation. Like any constitutive property, the rate equation has to be objective and consistent with thermodynamics. Accordingly, firstly a theorem is given that characterizes objective time derivatives. This allows the known objective time derivatives to be viewed as particular elements of the set so specified. Next the thermodynamic consistency is established for the constitutive models involving objective time derivatives within appropriate sets. It emerges that the thermodynamic consistency holds provided the stress contains additively terms quadratic in the heat flux vector in a form that is related to the derivative adopted for the rate of the heat flux.
Phellan, Renzo; Falcão, Alexandre X; Udupa, Jayaram K
2016-01-01
Statistical object shape models (SOSMs), known as probabilistic atlases, are popular in medical image segmentation. They register an image into the atlas coordinate system, such that a desired object can be delineated from the constraints of its shape model. While this strategy facilitates segmenting objects with even weak-boundary contrast, it tends to require more models per object to cope with possible registration errors. Fuzzy object shape models (FOSMs) gain substantial speed by avoiding image registration and placing more relaxed model constraints with optimum object search. However, they tend to require stronger object boundary contrast for effective delineation. In this work, the authors show that optimum object search, the essential underpinning of FOSMs, can improve segmentation efficacy of SOSMs with fewer models per object. For the sake of efficiency, the authors use three atlases per object (SOSM-3) as baseline for segmentation based on the best match with posterior probability maps. A novel strategy for SOSM with a single atlas and optimum object search (SOSM-S) is presented. When registering an image to the atlas system, one should expect that the object's boundary falls within the uncertainty region of the model-region wherein voxels show probabilities greater than 0 and less than 1 to be in the object. Since registration may fail, SOSM-S translates the atlas locally and, at each location, delineates and scores a candidate object in the uncertainty region. Segmentation is defined by the candidate with the highest score. The presented FOSM also uses a single model per object, but model construction uses only shape translations, building a fuzzy object model with larger uncertainty region. Optimum object search requires estimation of the object's location and/or optimization algorithms to speed-up segmentation. The authors evaluate SOSM-3, SOSM-S, and FOSM on 75 CT-images of the thorax and 35 MR T1-weighted images of the brain, with nine objects of
Directory of Open Access Journals (Sweden)
Filippo Trentini
2015-03-01
Full Text Available Background. In public health, one debated issue relates to the consequences of improper self-management in health care. Theoretical models proposed in health communication theory highlight how components such as general literacy and specific knowledge of the disease can be very important for effective action in the healthcare system. Methods. This paper investigates the consistency of the Health Empowerment Model by means of both a graphical models approach, which is a “data-driven” method, and a Structural Equation Modeling (SEM) approach, which is instead “theory-driven”, showing the different information patterns that can be revealed in a health care research context. The analyzed dataset provides data on the relationship between the Health Empowerment Model constructs and the behavioral and health status of 263 chronic low back pain (cLBP) patients. We used the graphical models approach to evaluate the dependence structure in a “blind” way, thus learning the structure from the data. Results. The estimated dependence structure confirms the link design assumed directly by the researchers in the SEM approach, thus validating the hypotheses which generated the Health Empowerment Model constructs. Conclusions. This model comparison helps in avoiding confirmation bias. For Structural Equation Modeling, we used the SPSS AMOS 21 software. Graphical modeling algorithms were implemented in the R software environment.
Sound Synthesis of Objects Swinging through Air Using Physical Models
Directory of Open Access Journals (Sweden)
Rod Selfridge
2017-11-01
Full Text Available A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.
Computer graphics and research projects
International Nuclear Information System (INIS)
Ingtrakul, P.
1994-01-01
This report was prepared as an account of scientific visualization tools and application tools for scientists and engineers. It provides a set of tools to create pictures and to interact with them in natural ways. It applies many techniques of computer graphics and computer animation through a number of full-color presentations, such as computer-animated commercials, 3D computer graphics, dynamic and environmental simulations, scientific modeling and visualization, physically based modelling, and behavioral, skeletal, dynamics, and particle animation. It also looks in depth at the original hardware and the limitations of existing PC graphics adapters concerning system performance, especially with graphics-intensive application programs and user interfaces
DEGAS : a temporal active data model based on object autonomy
J.F.P. van den Akker; A.P.J.M. Siebes (Arno)
1996-01-01
This report defines DEGAS, an advanced active data model that is novel in two ways. The first innovation is object autonomy, an extreme form of distributed control. In comparison to more traditional approaches, autonomous objects also encapsulate rule definitions to make them active. The
Directory of Open Access Journals (Sweden)
A.N. Khomchenko
2016-08-01
Full Text Available The paper considers the problem of bicubic interpolation on a finite element of the serendipity family. Using cognitive-graphical analysis, the rigid model of Ergatoudis, Irons and Zenkevich (1968) is compared with alternative models obtained by the methods of direct geometric design, weighted averaging of the basis polynomials, and systematic generation of bases (an advanced Taylor procedure). Emphasis is placed on the phenomenon of "gravitational repulsion" (the Zenkevich paradox). The causes of inadequate physical spectra of nodal loads on serendipity elements of higher orders are investigated. Soft modeling allows building many serendipity elements of bicubic interpolation without even needing to know the exact form of the rigid model. Different interpretations of the integral characteristics of the basis polynomials are offered: geometrical, physical, and probabilistic. A soft model in the theory of interpolation of functions of two variables means a model amenable to change through the choice of basis. Such changes are excluded in the family of Lagrangian finite elements of higher orders (hard modeling). Standard models of the serendipity family (Zenkevich) were also rigid. It was found that the "responsibility" for the rigidity of a serendipity model rests with ruled surfaces (of zero Gaussian curvature), conoids, which predominate in the basis set. Cognitive portraits of the zero lines of standard serendipity surfaces suggest that, in order to "soften" a serendipity model, the conoids should be replaced by surfaces of alternating Gaussian curvature. The article shows alternative (soft) bases of serendipity models. The work is devoted to solving scientific and technological problems aimed at the creation, dissemination and use of cognitive computer graphics in teaching and learning. The results are of interest to students of the specialties "Computer Science and Information Technologies", "System Analysis", and "Software Engineering", as well as
The Game Object Model and expansive learning: Creation ...
African Journals Online (AJOL)
representation as research instrument of the Game Object Model (GOM) are explored from a Cultural Historical Activity Theory perspective. The aim of the paper is to develop insights into the design, integration, evaluation and use of video games ...
Remedial action graphics management system
International Nuclear Information System (INIS)
Madson, M.E.
1987-01-01
The objective of the Graphics Management System is to provide a visual display of the status of Grand Junction vicinity properties as a cost-effective management tool. Capabilities of the system are listed and a series of sample displays is presented
A PDP model of the simultaneous perception of multiple objects
Henderson, Cynthia M.; McClelland, James L.
2011-06-01
Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.
Trends in Continuity and Interpolation for Computer Graphics.
Gonzalez Garcia, Francisco
2015-01-01
In every computer graphics oriented application today, it is common practice to texture 3D models as a way to obtain realistic materials. As part of this process, mesh texturing, deformation, and visualization are all key parts of the computer graphics field. This PhD dissertation was completed in the context of these three important and related fields in computer graphics. It presents techniques that improve on existing state-of-the-art approaches related to continuity and interpolation in texture space (texturing), object space (deformation), and screen space (rendering).
Connell, Ellery
2011-01-01
Helping graphic designers expand their 2D skills into the 3D space. The trend in graphic design is towards 3D, with the demand for motion graphics, animation, photorealism, and interactivity rapidly increasing. And with the meteoric rise of iPads, smartphones, and other interactive devices, the design landscape is changing faster than ever. 2D digital artists who need a quick and efficient way to join this brave new world will want 3D for Graphic Designers. Readers get hands-on basic training in working in the 3D space, including product design, industrial design and visualization, modeling, ani
Model-Based Software Testing for Object-Oriented Software
Biju, Soly Mathew
2008-01-01
Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…
Archive Design Based on Planets Inspired Logical Object Model
DEFF Research Database (Denmark)
Zierau, Eld; Johansen, Anders
2008-01-01
We describe a proposal for a logical data model based on preliminary work in the Planets project. In OAIS terms, the main areas discussed are related to the introduction of a logical data model for representing the past, present and future versions of the digital object associated with the Archival...
GOOSE Version 1.4: A powerful object-oriented simulation environment for developing reactor models
International Nuclear Information System (INIS)
Nypaver, D.J.; March-Leuba, C.; Abdalla, M.A.; Guimaraes, L.
1992-01-01
A prototype software package for a fully interactive Generalized Object-Oriented Simulation Environment (GOOSE) is being developed at Oak Ridge National Laboratory. Dynamic models are easily constructed and tested; fully interactive capabilities allow the user to alter model parameters and complexity without recompilation. This environment provides access to powerful tools such as numerical integration packages, graphical displays, and online help. In GOOSE, portability has been achieved by creating the environment in Objective-C, which is supported by a variety of platforms including UNIX and DOS. GOOSE Version 1.4 introduces new enhancements such as the capability of creating ''initial,'' ''dynamic,'' and ''digital'' methods. The object-oriented approach to simulation used in GOOSE combines the concept of modularity with the additional features of allowing precompilation, optimization, testing, and validation of individual modules. Once a library of classes has been defined and compiled, models can be built and modified without recompilation. GOOSE Version 1.4 is primarily command-line driven
Constructing Multidatabase Collections Using Extended ODMG Object Model
Directory of Open Access Journals (Sweden)
Adrian Skehill Mark Roantree
1999-11-01
Full Text Available Collections are an important feature in database systems. They provide us with the ability to group objects of interest together, and then to manipulate them in the required fashion. The OASIS project is focused on the construction of a multidatabase prototype which uses the ODMG model and a canonical model. As part of this work we have extended the base model to provide a more powerful collection mechanism, and to permit the construction of a federated collection, a collection of heterogeneous objects taken from distributed data sources
The Aalborg Model and management by objectives and resources
DEFF Research Database (Denmark)
Qvist, Palle; Spliid, Claus Monrad
2010-01-01
it is observed that the allocation of resources to the students in the Aalborg Model differs from the allocation in a more conventional model often used in HEIs. Students in the Aalborg Model are supported with resources which make a difference. This article focuses on the introduction of project management...... Model is successful has never been subject to a scientific study. An educational program in an HEI (Higher Education Institution) can be seen and understood as a system managed by objectives (MBO) within a given resource frame and based on an "agreement" between the student and the study board....... The student must achieve the objectives decided by the study board, and that achievement is then documented with an exam. The study board supports the student with resources which help them to fulfill the objectives. When the resources are divided into human, material and methodological resources...
Modeling and Querying Moving Objects with Social Relationships
Directory of Open Access Journals (Sweden)
Hengcai Zhang
2016-07-01
Full Text Available Current moving-object database (MOD) systems focus on the management of movement data, but pay less attention to modelling social relationships between moving objects and spatial-temporal trajectories in an integrated manner. This paper combines moving-object databases and social network systems and presents a novel data model called Geo-Social-Moving (GSM) that enables the unified management of trajectories, the underlying geographical space, and social relationships for masses of moving objects. A set of user-defined data types and corresponding operators is also proposed to facilitate geo-social queries on moving objects. An implementation framework for the GSM model is proposed, and a prototype system based on native Neo4J is then developed with two real-world data sets from location-based social network systems. Compared with solutions based on traditional extended relational database management systems, characterized by time-consuming table join operations, the proposed GSM model, characterized by graph traversal, is argued to be more powerful in representing masses of moving objects with social relationships, and more efficient and stable for geo-social querying.
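The query style described above, graph traversal over social links combined with a spatial predicate on trajectories instead of relational joins, can be illustrated with a minimal in-memory sketch. The data layout, names, and query below are hypothetical stand-ins, not the paper's GSM types or Neo4J schema:

```python
from math import hypot

# Hypothetical in-memory stand-in for a geo-social model: social edges as an
# adjacency list, trajectories as time-stamped (x, y) samples per object.
social = {"alice": {"bob", "carol"}, "bob": {"alice"}, "carol": {"alice"}}
trajectories = {
    "alice": [(0, 0.0, 0.0), (1, 1.0, 1.0)],
    "bob":   [(0, 5.0, 5.0), (1, 5.1, 5.0)],
    "carol": [(0, 0.5, 0.2), (1, 0.6, 0.3)],
}

def friends_near(user, point, radius):
    """Geo-social query: which friends of `user` ever pass within
    `radius` of `point`? Graph traversal replaces a relational join."""
    px, py = point
    result = set()
    for friend in social.get(user, ()):          # 1-hop social traversal
        for _t, x, y in trajectories.get(friend, ()):
            if hypot(x - px, y - py) <= radius:  # spatial predicate
                result.add(friend)
                break
    return result

print(friends_near("alice", (0.0, 0.0), 1.0))
```

In a graph database the 1-hop traversal is an index-free neighbourhood lookup, which is the efficiency argument the abstract makes against join-based relational solutions.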
Chang, C.; Li, M.; Yeh, G.
2010-12-01
The BIOGEOCHEM numerical model (Yeh and Fang, 2002; Fang et al., 2003) was developed in FORTRAN for simulating reaction-based geochemical and biochemical processes with mixed equilibrium and kinetic reactions in batch systems. A complete suite of reactions, including aqueous complexation, adsorption/desorption, ion exchange, redox, precipitation/dissolution, acid-base reactions, and microbially mediated reactions, is embodied in this unique modeling tool. Any reaction can be treated as a fast/equilibrium or slow/kinetic reaction. An equilibrium reaction is modeled with an implicit finite rate governed by a mass action equilibrium equation or by a user-specified algebraic equation. A kinetic reaction is modeled with an explicit finite rate with an elementary rate, microbially mediated enzymatic kinetics, or a user-specified rate equation. None of the existing models has encompassed this wide array of scopes. To flatten the input/output learning curve for the unique features of BIOGEOCHEM, an interactive graphical user interface was developed with Microsoft Visual Studio and .Net tools. Several robust, user-friendly features, such as pop-up help windows, typo warning messages, and on-screen input hints, were implemented. All input data can be viewed in real time and are automatically made to conform to the input file format of BIOGEOCHEM. A post-processor for graphic visualization of simulated results was also embedded for immediate demonstration. By following the data input windows step by step, error-free BIOGEOCHEM input files can be created even by users with little prior experience in FORTRAN. With this user-friendly interface, the time and effort needed to conduct simulations with BIOGEOCHEM can be greatly reduced.
Managing and learning with multiple models: Objectives and optimization algorithms
Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.
2011-01-01
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management actions in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives: a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved.
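The weighted mixture of management and learning objectives can be sketched as a single scalar score. The benefit measure, the use of information gain (entropy reduction over the belief in competing models) as the learning term, and the weight below are illustrative assumptions, not the paper's exact formulation:

```python
from math import log2

def entropy(weights):
    """Shannon entropy (bits) of the belief over competing system models."""
    return -sum(w * log2(w) for w in weights if w > 0)

def mixed_objective(benefit, prior, posterior, alpha):
    """alpha = 1 -> pure management objective; alpha = 0 -> pure learning.
    Learning is scored as entropy reduction (information gain)."""
    info_gain = entropy(prior) - entropy(posterior)
    return alpha * benefit + (1 - alpha) * info_gain

prior = [0.5, 0.5]       # two equally plausible models of system function
posterior = [0.9, 0.1]   # belief after a hypothetical monitored action
print(mixed_objective(benefit=1.2, prior=prior, posterior=posterior, alpha=0.5))
```

Sweeping `alpha` from 0 to 1 recovers the paper's three objective types as special cases of one family.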
Scale Problems in Geometric-Kinematic Modelling of Geological Objects
Siehl, Agemar; Thomsen, Andreas
To reveal, to render and to handle complex geological objects and their history of structural development, appropriate geometric models have to be designed. Geological maps, sections, sketches of strain and stress patterns are such well-known analogous two-dimensional models. Normally, the set of observations and measurements supporting them is small in relation to the complexity of the real objects they derive from. Therefore, modelling needs guidance by additional expert knowledge to bridge empty spaces which are not supported by data. Generating digital models of geological objects has some substantial advantages compared to conventional methods, especially if they are supported by an efficient database management system. Consistent 3D models of some complexity can be created, and experiments with time-dependent geological geometries may help to restore coherent sequences of paleogeological states. In order to cope with the problems arising from the combined usage of 3D-geometry models of different scale and resolution within an information system on subsurface geology, geometrical objects need to be annotated with information on the context, within which the geometry model has been established and within which it is valid, and methods supporting storage and retrieval as well as manipulation of geometry at different scales must also take into account and handle such context information to achieve meaningful results. An example is given of a detailed structural study of an open pit lignite mine in the Lower Rhine Basin.
Identifying Objective and Subjective Words via Topic Modeling.
Wang, Hanqi; Wu, Fei; Lu, Weiming; Yang, Yi; Li, Xi; Li, Xuelong; Zhuang, Yueting
2018-03-01
It is observed that distinct words in a given document have either strong or weak ability in delivering facts (i.e., the objective sense) or expressing opinions (i.e., the subjective sense), depending on the topics they associate with. Motivated by the intuitive assumption that different words have varying degrees of discriminative power in delivering the objective sense or the subjective sense with respect to their assigned topics, a model named identified objective-subjective latent Dirichlet allocation (osLDA) is proposed in this paper. In the osLDA model, the simple Pólya urn model adopted in traditional topic models is modified by incorporating it into a probabilistic generative process in which a novel "Bag-of-Discriminative-Words" (BoDW) representation for the documents is obtained; each document has two different BoDW representations with regard to the objective and subjective senses, respectively, which are employed in joint objective and subjective classification instead of the traditional Bag-of-Topics representation. The experiments reported on documents and images demonstrate that: 1) the BoDW representation is more predictive than the traditional ones; 2) osLDA boosts the performance of topic modeling via the joint discovery of latent topics and the different objective and subjective power hidden in every word; and 3) osLDA has lower computational complexity than supervised LDA, especially under an increasing number of topics.
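The end product of the model, a two-bag document representation split by each word's objective versus subjective discriminative power, can be illustrated with a toy sketch. The per-word scores and threshold here are hand-picked assumptions; osLDA learns such discriminative power jointly with the topics:

```python
# Hypothetical per-word subjectivity scores (0 = purely objective,
# 1 = purely subjective); unknown words default to neutral (0.5).
subjectivity = {"terrible": 0.9, "great": 0.8,
                "boiling": 0.2, "100": 0.05, "celsius": 0.05}

def two_bag_representation(tokens, threshold=0.5):
    """Split a token list into BoDW-style objective and subjective bags."""
    objective = [t for t in tokens if subjectivity.get(t, 0.5) <= threshold]
    subjective = [t for t in tokens if subjectivity.get(t, 0.5) > threshold]
    return objective, subjective

doc = ["water", "is", "boiling", "at", "100", "celsius", "which", "is", "terrible"]
obj_bag, subj_bag = two_bag_representation(doc)
```

A downstream classifier then consumes the two bags separately, which is what the abstract means by joint objective and subjective classification.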
Marvel, Skylar W; To, Kimberly; Grimm, Fabian A; Wright, Fred A; Rusyn, Ivan; Reif, David M
2018-03-05
Drawing integrated conclusions from diverse source data requires synthesis across multiple types of information. The ToxPi (Toxicological Prioritization Index) is an analytical framework developed to enable the integration of multiple sources of evidence by transforming data into integrated, visual profiles. Methodological improvements have advanced ToxPi and expanded its applicability, necessitating a new, consolidated software platform to provide this functionality while preserving flexibility for future updates. We detail the implementation of a new graphical user interface for ToxPi that provides interactive visualization, analysis, reporting, and portability. The interface is deployed as a stand-alone, platform-independent Java application, with a modular design to accommodate the inclusion of future analytics. The new ToxPi interface introduces several features, from flexible data import formats (including legacy formats that permit backward compatibility) to similarity-based clustering to options for high-resolution graphical output. We present the new ToxPi interface for dynamic exploration, visualization, and sharing of integrated data models. The ToxPi interface is freely available as a single compressed download that includes the main Java executable, all libraries, example data files, and a complete user manual from http://toxpi.org.
Turbulence and Self-Organization Modeling Astrophysical Objects
Marov, Mikhail Ya
2013-01-01
This book focuses on the development of continuum models of natural turbulent media. It provides a theoretical approach to the solutions of different problems related to the formation, structure and evolution of astrophysical and geophysical objects. A stochastic modeling approach is used in the mathematical treatment of these problems, which reflects self-organization processes in open dissipative systems. The authors also consider examples of ordering for various objects in space throughout their evolutionary processes. This volume is aimed at graduate students and researchers in the fields of mechanics, astrophysics, geophysics, planetary and space science.
An Empirical Study of Efficiency and Accuracy of Probabilistic Graphical Models
DEFF Research Database (Denmark)
Nielsen, Jens Dalgaard; Jaeger, Manfred
2006-01-01
In this paper we compare Naïve Bayes (NB) models, general Bayes Net (BN) models and Probabilistic Decision Graph (PDG) models w.r.t. accuracy and efficiency. As the basis for our analysis we use graphs of size vs. likelihood that show the theoretical capabilities of the models. We also measure...
Object-Oriented Approach to Modeling Units of Pneumatic Systems
Directory of Open Access Journals (Sweden)
Yu. V. Kyurdzhiev
2014-01-01
Full Text Available The article shows the relevance of object-oriented programming approaches to modeling pneumatic units (PU). Based on an analysis of the calculation schemes of pneumatic system aggregates, two basic objects were highlighted: a flow cavity and a material point. Basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy with the flows of mass. Cavity-point interaction: force interaction, exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions. The models and the interactions of elements are implemented with object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base class. These classes implement the models of a flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs a tact of integration, i.e. calculation of the method's coefficients. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional class list. An iterator loop cycle initiates the integration tact of all the objects in the list. Every fourth iteration makes a transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the authors' method features easy enhancement, code reuse, and high reliability
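The integration scheme described above, a list of model objects each performing one Runge-Kutta "tact" per pass, with the step committed after the fourth tact, can be sketched as follows. The element class and its dynamics are illustrative placeholders, not the paper's cavity or material-point models:

```python
# Sketch of the per-object staged RK4 scheme: a driver walks a single list
# of model objects and triggers the four classic RK4 stage evaluations
# ("tacts") on each, then commits the step for all objects at once.

class Decay:
    """Toy element with dy/dt = -k*y, standing in for a PU element model."""
    def __init__(self, y0, k):
        self.y, self.k = y0, k
        self.stages = []
    def deriv(self, y):
        return -self.k * y
    def tact(self, h, stage):
        if stage == 0:                       # k1 at the current state
            self.stages = [self.deriv(self.y)]
        else:                                # k2, k3 at half step; k4 at full step
            factor = h / 2 if stage < 3 else h
            self.stages.append(self.deriv(self.y + factor * self.stages[-1]))
    def commit(self, h):
        k1, k2, k3, k4 = self.stages         # classic RK4 combination
        self.y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

objects = [Decay(1.0, 1.0), Decay(2.0, 0.5)]  # unidirectional list of elements
h, steps = 0.01, 100
for _ in range(steps):
    for stage in range(4):                   # one tact per object, per stage
        for obj in objects:
            obj.tact(h, stage)
    for obj in objects:                      # fourth tact done: advance the step
        obj.commit(h)
```

Keeping the stage loop outside the object loop is what lets coupled objects exchange stage values consistently; the toy elements here are uncoupled only for brevity.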
Choosing an optimal model for failure data analysis by graphical approach
International Nuclear Information System (INIS)
Zhang, Tieling; Dwight, Richard
2013-01-01
Many models involving combinations of multiple Weibull distributions, modifications of the Weibull distribution, extensions of its modified forms, etc. have been developed to model a given set of failure data. The application of these models to a given data set can be based on plotting the data on Weibull probability paper (WPP). Two or more models may be appropriate for one typical shape of the fitted plot, whereas a specific model may fit several different shapes of plot. Hence a problem arises: how to choose an optimal model for a given data set, and how to model the data. The motivation of this paper is to address this issue. The paper summarizes the characteristics of Weibull-related models with more than three parameters, including sectional models involving two or three Weibull distributions, the competing risk model, and the mixed Weibull model. The models discussed here are appropriate for data whose plots on WPP are concave, convex, S-shaped or inversely S-shaped. A method for model selection is then proposed, based on the shapes of the fitted plots, and the main procedure for parameter estimation of the models is described accordingly. In addition, the range of data plots on WPP is clearly highlighted from the practical point of view. This is important to note, as mathematical analysis of a model that neglects the applicable range of the model plot will incur discrepancies or large errors in model selection and parameter estimates
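The WPP transform underlying this shape-based selection can be sketched briefly: data from a single Weibull distribution fall on a straight line whose slope estimates the shape parameter, and curvature (concave, convex, S-shaped) is the signal for the multi-Weibull models above. The median-rank plotting position used here is one common convention, not necessarily the paper's:

```python
from math import log

def wpp_points(failure_times):
    """Return (ln t, ln(-ln(1 - F))) pairs using Bernard's median ranks."""
    times = sorted(failure_times)
    n = len(times)
    pts = []
    for i, t in enumerate(times, start=1):
        f = (i - 0.3) / (n + 0.4)            # median-rank plotting position
        pts.append((log(t), log(-log(1.0 - f))))
    return pts

def fitted_slope(pts):
    """Least-squares slope of the WPP plot ~ the Weibull shape parameter."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den
```

For data that genuinely follow one Weibull distribution the recovered slope approaches the shape parameter; marked curvature on WPP instead suggests a sectional, competing-risk or mixed model.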
Object Oriented Business Process Modelling in RFID Applied Computing Environments
Zhao, Xiaohui; Liu, Chengfei; Lin, Tao
As a tracking technology, Radio Frequency Identification (RFID) is now widely applied to enhance the context awareness of enterprise information systems. Such awareness provides great opportunities to facilitate business process automation and thereby improve operation efficiency and accuracy. With the aim to incorporate business logics into RFID-enabled applications, this book chapter addresses how RFID technologies impact current business process management and the characteristics of object-oriented business process modelling. This chapter first discusses the rationality and advantages of applying object-oriented process modelling in RFID applications, then addresses the requirements and guidelines for RFID data management and process modelling. Two typical solutions are introduced to further illustrate the modelling and incorporation of business logics/business processes into RFID edge systems. To demonstrate the applicability of these two approaches, a detailed case study is conducted within a distribution centre scenario.
C++, object-oriented programming, and astronomical data models
Farris, A.
1992-01-01
Contemporary astronomy is characterized by increasingly complex instruments and observational techniques, higher data collection rates, and large data archives, placing severe stress on software analysis systems. The object-oriented paradigm represents a significant new approach to software design and implementation that holds great promise for dealing with this increased complexity. The basic concepts of this approach will be characterized in contrast to more traditional procedure-oriented approaches. The fundamental features of object-oriented programming will be discussed from a C++ programming language perspective, using examples familiar to astronomers. This discussion will focus on objects, classes and their relevance to the data type system; the principle of information hiding; and the use of inheritance to implement generalization/specialization relationships. Drawing on the object-oriented approach, features of a new database model to support astronomical data analysis will be presented.
Generalized Sparselet Models for Real-Time Multiclass Object Recognition.
Song, Hyun Oh; Girshick, Ross; Zickler, Stefan; Geyer, Christopher; Felzenszwalb, Pedro; Darrell, Trevor
2015-05-01
The problem of real-time multiclass object recognition is of great practical importance. In this paper, we describe a framework that simultaneously utilizes shared representation, reconstruction sparsity, and parallelism to enable real-time multiclass object detection with deformable part models at 5 Hz on a laptop computer with almost no decrease in task performance. Our framework is trained in the standard structured output prediction formulation and is generically applicable for speeding up object recognition systems where the computational bottleneck is in multiclass, multi-convolutional inference. We experimentally demonstrate the efficiency and task performance of our method on PASCAL VOC, a subset of ImageNet, and the Caltech101 and Caltech256 datasets.
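The shared-representation idea behind sparselets can be shown in a few lines: many class-specific filters are approximated as sparse linear combinations of a small shared dictionary, so the expensive responses are computed once and reused across all classes. The sizes and random data below are illustrative, not the paper's trained models:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 64))      # 20 shared "sparselet" basis filters
A = np.zeros((100, 20))                # sparse activations for 100 class filters
for row in A:                          # each filter uses only 3 basis elements
    row[rng.choice(20, size=3, replace=False)] = rng.standard_normal(3)

x = rng.standard_normal(64)            # one feature-window vector

# Exact (slow) path: apply all 100 reconstructed filters directly.
filters = A @ D                        # (100, 64) reconstructed filters
slow = filters @ x

# Sparselet (fast) path: 20 shared responses, then cheap sparse combination.
shared = D @ x                         # computed once, reused by every class
fast = A @ shared

assert np.allclose(slow, fast)         # identical scores, far fewer dot products
```

The speedup comes from replacing 100 dense dot products with 20 shared ones plus sparse combinations, which is the "multiclass, multi-convolutional inference" bottleneck the abstract targets.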
Comparison of Three Approximate Kinematic Models for Space Object Tracking
2013-07-01
the filtering when the overall position error is above 400 m. As more measurements are fused, the most accurate KPS model does achieve significantly
Model for setting priority construction project objectives aligned with ...
African Journals Online (AJOL)
A comprehensive model based on priority project objectives aligned with monetary incentives, and agreed upon by built environment stakeholders was developed. A web survey was adopted to send out a questionnaire to nationwide participants, including contractors, quantity surveyors, project managers, architects, and ...
A Proto-Object-Based Computational Model for Visual Saliency
Yanulevskaya, V.; Uijlings, J.; Geusebroek, J.-M.; Sebe, N.; Smeulders, A.
2013-01-01
State-of-the-art bottom-up saliency models often assign high saliency values at or near high-contrast edges, whereas people tend to look within the regions delineated by those edges, namely the objects. To resolve this inconsistency, in this work we estimate saliency at the level of coherent image
Object Oriented Toolbox for Modelling and Simulation of Dynamical Systems
DEFF Research Database (Denmark)
Poulsen, Mikael Zebbelin; Wagner, Falko Jens; Thomsen, Per Grove
1998-01-01
This paper presents the results of an ongoing project, dealing with design and implementation of a simulation toolbox based on object oriented modelling techniques. The paper describes an experimental implementation of parts of such a toolbox in C++, and discusses the experiences drawn from...
Development of a cultural heritage object BIM model
Braila, Natalya; Vakhrusheva, Svetlana; Martynenko, Elena; Kisel, Tatyana
2017-10-01
BIM technology was originally aimed at the design and construction industry, but its application to the study and operation of architectural heritage can substantially change this kind of activity and raise it to a new qualitative level. The article considers the question of effectively introducing BIM technologies for solving administrative questions in the operation and development of architectural monuments. The creation of an information model of a cultural heritage building that includes a full complex of information on the object is proposed: historical and archival, legal, technical, administrative, etc. One component of the model will be a 3D model of the cultural heritage object with color marking of elements by degree of wear and priority of repair. Such a model will allow the technical condition of the building as a whole to be assessed visually and will give a general idea of the scale of the necessary repair and construction actions, which improves the quality of operation of the object and also simplifies and accelerates the processing of information when a memorial building needs to be assessed as a subject for investment.
Srinivasan, H
1984-09-01
Graphic representations of the spectrum concept of leprosy are examined in some detail as models for this disease. This reveals that this concept is somewhat inadequate and that the spectrum metaphor may itself be inappropriate because, by its very linearity of logic, it may not be able to depict the nonlinear behavior of leprosy properly. The assumptions underlying this concept and their logical consequences, brought out by the graphic representations, include an invariable relation between CMI and BI, identity of one type of leprosy with one specific level of CMI, a fixed sequence of types, and the consequent impossibility of skipping the sequence. However, our experience with leprosy does not bear out these assumptions. Further, development and progress of leprosy from a normal (nonleprous) state cannot be represented in these models. A search for alternative conceptual models therefore appears reasonable and even necessary. The catastrophe theory (a branch of topology in mathematics) describes a number of models for explaining how continuous causes could produce sudden or discontinuous changes. Of the various catastrophe theory models available, the relatively simple "cusp" model appears capable of application to leprosy. This model, as applied here, requires two control factors (identified tentatively as the amount of dead bacilli and the amount of living bacilli or their indicators) and one pattern of behavior, identified as progress towards limited or extensive disease. This model suggests under what conditions leprosy will change from one type to another and whether that will happen gradually or suddenly. It also suggests that for certain values of control factors the disease may manifest in one of two forms of borderline leprosy, and that lesions very similar to start with can progress to quite different states under similar conditions of change. The behavior of leprosy agrees more or less with that suggested by this model. The cusp model thus seems to: a
Development of the Object-Oriented Dynamic Simulation Models Using Visual C++ Freeware
Directory of Open Access Journals (Sweden)
Alexander I. Kozynchenko
2016-01-01
The paper mostly focuses on the methodological and programming aspects of developing a versatile desktop framework to provide the available basis for the high-performance simulation of dynamical models of different kinds and for diverse applications. So the paper gives some basic structure for creating a dynamical simulation model in C++ which is built on the Win32 platform with an interactive multiwindow interface and uses the lightweight Visual C++ Express as a free integrated development environment. The resultant simulation framework could be a more acceptable alternative to other solutions developed on the basis of commercial tools like Borland C++ or Visual C++ Professional, not to mention the domain specific languages and more specialized ready-made software such as Matlab, Simulink, and Modelica. This approach seems to be justified in the case of complex research object-oriented dynamical models having nonstandard structure, relationships, algorithms, and solvers, as it allows developing solutions of high flexibility. The essence of the model framework is shown using a case study of simulation of moving charged particles in the electrostatic field. The simulation model possesses the necessary visualization and control features such as an interactive input, real time graphical and text output, start, stop, and rate control.
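The case study in this abstract (charged particles advanced through an electrostatic field with interactive rate control) can be illustrated with a compact dynamics loop. The sketch below is a Python analogue, not the paper's C++/Win32 code; the class names, the uniform field, and all parameter values are invented for illustration.

```python
# Minimal sketch (not the paper's framework): a charged particle in a uniform
# electrostatic field, advanced with semi-implicit Euler steps.

class Particle:
    def __init__(self, q, m, pos, vel):
        self.q, self.m = q, m
        self.pos, self.vel = list(pos), list(vel)

    def step(self, e_field, dt):
        # Electrostatic force only: F = qE (no magnetic term)
        for i in range(3):
            a = self.q * e_field[i] / self.m
            self.vel[i] += a * dt            # update velocity first...
            self.pos[i] += self.vel[i] * dt  # ...then position (semi-implicit)

def simulate(particle, e_field, dt, n_steps):
    trajectory = [tuple(particle.pos)]
    for _ in range(n_steps):
        particle.step(e_field, dt)
        trajectory.append(tuple(particle.pos))
    return trajectory

p = Particle(q=1.0, m=2.0, pos=(0.0, 0.0, 0.0), vel=(0.0, 0.0, 0.0))
path = simulate(p, e_field=(4.0, 0.0, 0.0), dt=0.1, n_steps=10)
```

In an interactive framework of the kind described, the `simulate` loop would be driven by the GUI's rate control rather than a fixed `n_steps`.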
An ODP computational model of a cooperative binding object
Logé, Christophe; Najm, Elie; Chen, Ken
1997-12-01
A next generation of systems that should appear will have to manage simultaneously several geographically distributed users. These systems belong to the class of computer-supported cooperative work systems (CSCW). The development of such complex systems requires rigorous development methods and flexible open architectures. Open distributed processing (ODP) is a standardization effort that aims at providing such architectures. ODP features appropriate abstraction levels and a clear articulation between requirements, programming and infrastructure support. ODP advocates the use of formal methods for the specification of systems and components. The computational model, an object-based model, one of the abstraction levels identified within ODP, plays a central role in the global architecture. In this model, basic objects can be composed with communication and distribution abstractions (called binding objects) to form a computational specification of distributed systems, or applications. Computational specifications can then be mapped (in a mechanism akin to compilation) onto an engineering solution. We use an ODP-inspired method to computationally specify a cooperative system. We start from a general purpose component that we progressively refine into a collection of basic and binding objects. We focus on two issues of a co-authoring application, namely, dynamic reconfiguration and multiview synchronization. We discuss solutions for these issues and formalize them using the MT-LOTOS specification language that is currently studied in the ISO standardization formal description techniques group.
A model of proto-object based saliency
Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph
2013-01-01
Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object-based, bottom-up attention. The model rivals the performance of state-of-the-art non-biologically plausible feature-based algorithms (and outperforms biologically plausible feature-based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601
Concept graphics: a language for medical knowledge.
Preiss, B; Kaltenbach, M; Zanazaka, J; Echave, V
1992-01-01
This paper makes a case for Concept Graphics, a novel form of medical knowledge representation. Concept Graphics are assemblies of icons each of which has a precise meaning. Individual icons are metaphors of the object or process they represent. Concept Graphics are analogs of pathological situations, their symptoms, signs and other relevant components necessary for a diagnosis. We propose three principal areas of application for Concept Graphics: medical education, medical records management and research based on medical records. Our earlier work in medical education showed a clear advantage in using Concept Graphics in parallel with text of equivalent information content, over text alone. A Concept Graphics based intelligent tutoring system (ITS) is being developed. In the area of medical records management we are developing a system for the rapid identification of relevant records based on rapid visual screening. The Concept Graphics based system can reveal properties common to specific groups of records. As such the graphics are a research tool.
Nuclear reactors; graphical symbols
International Nuclear Information System (INIS)
1987-11-01
This standard contains graphical symbols that indicate the type of nuclear reactor and are used in the design of graphical and technical presentations. Distinguishing features of nuclear reactors are laid down as graphical symbols. (orig.) [de]
Anacleto, Osvaldo; Queen, Catriona; Albers, Casper J.
Traffic flow data are routinely collected for many networks worldwide. These invariably large data sets can be used as part of a traffic management system, for which good traffic flow forecasting models are crucial. The linear multiregression dynamic model (LMDM) has been shown to be promising for
Kuchinke, Wolfgang; Ohmann, Christian; Verheij, Robert A; van Veen, Evert-Ben; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C
2014-12-01
To develop a model describing core concepts and principles of data flow, data privacy and confidentiality, in a simple and flexible way, using concise process descriptions and a diagrammatic notation applied to research workflow processes. The model should help to generate robust data privacy frameworks for research done with patient data. Based on an exploration of EU legal requirements for data protection and privacy, data access policies, and existing privacy frameworks of research projects, basic concepts and common processes were extracted, described and incorporated into a model with a formal graphical representation and a standardised notation. The Unified Modelling Language (UML) notation was enriched with workflow and custom symbols to enable the representation of extended data flow requirements, data privacy and data security requirements, and privacy enhancing techniques (PET), and to allow privacy threat analysis for research scenarios. Our model is built upon the concept of three privacy zones (Care Zone, Non-care Zone and Research Zone) containing databases and data transformation operators, such as data linkers and privacy filters. Using these model components, a risk gradient for moving data from a zone of high risk for patient identification to a zone of low risk can be described. The model was applied to the analysis of data flows in several general clinical research use cases and two research scenarios from the TRANSFoRm project (e.g., finding patients for clinical research and linkage of databases). The model was validated by representing research done with the NIVEL Primary Care Database in the Netherlands. The model allows analysis of data privacy and confidentiality issues for research with patient data in a structured way and provides a framework to specify a privacy compliant data flow, to communicate privacy requirements and to identify weak points for an adequate implementation of data privacy. Copyright © 2014 Elsevier Ireland Ltd. All rights
Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young
2017-03-01
Three-dimensional (3D) computed tomography imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery based on traditional radiographs, but they also had full sets of 3D CT imaging done both before and after their surgery: these 3D CT images form the basis for this study. From the 3D CT images, each patient generated 3 sets of 3D-printed bone models: 2 copies of the preoperative condition, and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. Arcs of rotation of the 3 sets of 3D-printed bone models were then compared. Arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized accordingly to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). Three-dimensional-printed bone models suggest that this approach could improve range of motion of the forearm in actual surgical practice. Level IV-therapeutic study.
Arthur, Evan J; Brooks, Charles L
2016-04-15
Two fundamental challenges of simulating biologically relevant systems are the rapid calculation of the energy of solvation and the trajectory length of a given simulation. The Generalized Born model with a Simple sWitching function (GBSW) addresses these issues by using an efficient approximation of Poisson-Boltzmann (PB) theory to calculate each solute atom's free energy of solvation, the gradient of this potential, and the subsequent forces of solvation without the need for explicit solvent molecules. This study presents a parallel refactoring of the original GBSW algorithm and its implementation on newly available, low cost graphics chips with thousands of processing cores. Depending on the system size and nonbonded force cutoffs, the new GBSW algorithm offers speed increases of between one and two orders of magnitude over previous implementations while maintaining similar levels of accuracy. We find that much of the algorithm scales linearly with an increase of system size, which makes this water model cost effective for solvating large systems. Additionally, we utilize our GPU-accelerated GBSW model to fold the model system chignolin, and in doing so we demonstrate that these speed enhancements now make accessible folding studies of peptides and potentially small proteins. © 2016 Wiley Periodicals, Inc.
Energy Technology Data Exchange (ETDEWEB)
Christiansen, J. H.
2000-06-15
The Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations. The DIAS infrastructure makes it feasible to build and manipulate complex simulation scenarios in which many thousands of objects can interact via dozens to hundreds of concurrent dynamic processes. The flexibility and extensibility of the DIAS software infrastructure stem mainly from (1) the abstraction of object behaviors, (2) the encapsulation and formalization of model functionality, and (3) the mutability of domain object contents. DIAS simulation objects are inherently capable of highly flexible and heterogeneous spatial realizations. Geospatial graphical representation of DIAS simulation objects is addressed via the GeoViewer, an object-based GIS toolkit application developed at ANL. DIAS simulation capabilities have been extended by inclusion of societal process models generated by the Framework for Addressing Cooperative Extended Transactions (FACET), another object-based framework developed at Argonne National Laboratory. By using FACET models to implement societal behaviors of individuals and organizations within larger DIAS-based natural systems simulations, it has become possible to conveniently address a broad range of issues involving interaction and feedback among natural and societal processes. Example DIAS application areas discussed in this paper include a dynamic virtual oceanic environment, detailed simulation of clinical, physiological, and logistical aspects of health care delivery, and studies of agricultural sustainability of urban centers under environmental stress in ancient Mesopotamia.
Probabilistic Graphical Models for the Analysis and Synthesis of Musical Audio
Hoffmann, Matthew Douglas
Content-based Music Information Retrieval (MIR) systems seek to automatically extract meaningful information from musical audio signals. This thesis applies new and existing generative probabilistic models to several content-based MIR tasks: timbral similarity estimation, semantic annotation and retrieval, and latent source discovery and separation. In order to estimate how similar two songs sound to one another, we employ a Hierarchical Dirichlet Process (HDP) mixture model to discover a shared representation of the distribution of timbres in each song. Comparing songs under this shared representation yields better query-by-example retrieval quality and scalability than previous approaches. To predict what tags are likely to apply to a song (e.g., "rap," "happy," or "driving music"), we develop the Codeword Bernoulli Average (CBA) model, a simple and fast mixture-of-experts model. Despite its simplicity, CBA performs at least as well as state-of-the-art approaches at automatically annotating songs and at finding the songs in a database to which a given tag most applies. Finally, we address the problem of latent source discovery and separation by developing two Bayesian nonparametric models, the Shift-Invariant HDP and Gamma Process NMF. These models allow us to discover what sounds (e.g. bass drums, guitar chords, etc.) are present in a song or set of songs and to isolate or suppress individual sources. These models' ability to decide how many latent sources are necessary to model the data is particularly valuable in this application, since it is impossible to guess a priori how many sounds will appear in a given song or set of songs. Once they have been fit to data, probabilistic models can also be used to drive the synthesis of new musical audio, both for creative purposes and to qualitatively diagnose what information a model does and does not capture. We also adapt the SIHDP model to create new versions of input audio with arbitrary sample sets, for example, to create
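The Codeword Bernoulli Average idea mentioned above lends itself to a very small sketch: a song is summarized by counts of vector-quantized codewords, and the probability that a tag applies is the count-weighted average of per-codeword Bernoulli parameters. The code below is an illustrative reconstruction, not the authors' implementation; the codeword names and parameter values are invented.

```python
# Illustrative CBA-style scoring: p(tag | song) is the codeword-count-weighted
# average of per-codeword tag probabilities.

def cba_tag_probability(codeword_counts, tag_params):
    """codeword_counts: {codeword: count}; tag_params: {codeword: p(tag|codeword)}."""
    total = sum(codeword_counts.values())
    return sum(n * tag_params.get(w, 0.0)
               for w, n in codeword_counts.items()) / total

# Hypothetical per-codeword parameters for the tag "rap", e.g. estimated from
# how often tagged training songs contain each codeword (with smoothing).
params_rap = {"kick": 0.9, "strings": 0.1, "vocal": 0.5}
song = {"kick": 6, "vocal": 2, "strings": 2}  # 10 codeword occurrences
score = cba_tag_probability(song, params_rap)
```

Ranking all songs in a database by this score for a fixed tag gives the "to which songs does this tag most apply" retrieval direction described in the abstract.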
Modeling of Geological Objects and Geophysical Fields Using Haar Wavelets
A. S. Dolgal
2014-01-01
This article presents an application of the fast wavelet transform with basic Haar functions for modeling structural surfaces and geophysical fields characterized by fractal features. The multiscale representation of experimental data makes it possible to significantly reduce the cost of processing large data volumes and to improve the quality of interpretation. The paper presents algorithms for sectionally prismatic approximation of geological objects, for preliminary estimation of th...
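The fast Haar transform that this abstract builds on can be stated in a few lines. The sketch below shows the 1-D case for power-of-two lengths (averages and differences taken recursively); the geological-surface application itself is not reproduced.

```python
# Fast Haar wavelet transform, unnormalized average/difference form.

def haar_forward(x):
    x = list(x)
    out = []
    while len(x) > 1:
        avgs = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
        dets = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
        out = dets + out          # coarser-level details are prepended
        x = avgs
    return x + out                # [overall average, details coarse -> fine]

def haar_inverse(coeffs):
    x = [coeffs[0]]
    pos = 1
    while pos < len(coeffs):
        dets = coeffs[pos:pos + len(x)]
        pos += len(x)
        # Each (average, detail) pair reconstructs two neighbouring samples.
        x = [v for a, d in zip(x, dets) for v in (a + d, a - d)]
    return x

coeffs = haar_forward([9, 7, 3, 5])   # -> [6.0, 2.0, 1.0, -1.0]
```

Multiscale compression of the kind the article exploits amounts to zeroing small detail coefficients in `coeffs` before calling `haar_inverse`.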
A graphical interface based model for wind turbine drive train dynamics
Energy Technology Data Exchange (ETDEWEB)
Manwell, J.F.; McGowan, J.G.; Abdulwahid, U.; Rogers, A. [Univ. of Massachusetts, Amherst, MA (United States); McNiff, B. [McNiff Light Industry, Blue Hill, ME (United States)
1996-12-31
This paper presents a summary of a wind turbine drive train dynamics code that has been under development at the University of Massachusetts, under National Renewable Energy Laboratory (NREL) support. The code is intended to be used to assist in the proper design and selection of drive train components. This work summarizes the development of the equations of motion for the model, and discusses the method of solution. In addition, a number of comparisons with analytical solutions and experimental field data are given. The summary includes conclusions and suggestions for future work on the model. 13 refs., 10 figs.
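Drive train dynamics codes of the kind summarized above are commonly built around a two-mass torsional model (rotor inertia, flexible shaft, generator inertia). The sketch below shows that classic simplified form with explicit Euler integration; it is a generic textbook formulation, not necessarily the UMass code's equations, and all parameter values are invented.

```python
# Two-mass torsional drivetrain: rotor and generator inertias coupled by a
# shaft with stiffness k and damping c.

def two_mass_step(state, t_aero, t_gen, J_r, J_g, k, c, dt):
    """state = (theta_r, omega_r, theta_g, omega_g); torques in N*m."""
    th_r, om_r, th_g, om_g = state
    t_shaft = k * (th_r - th_g) + c * (om_r - om_g)  # twist + damping torque
    om_r += dt * (t_aero - t_shaft) / J_r            # rotor spun up by aero torque
    om_g += dt * (t_shaft - t_gen) / J_g             # generator driven via shaft
    th_r += dt * om_r
    th_g += dt * om_g
    return (th_r, om_r, th_g, om_g)

state = (0.0, 0.0, 0.0, 0.0)
for _ in range(1000):                                # 1 s with dt = 1 ms
    state = two_mass_step(state, t_aero=1000.0, t_gen=0.0,
                          J_r=100.0, J_g=10.0, k=5000.0, c=50.0, dt=0.001)
```

Because the shaft torque appears with opposite signs in the two momentum equations, total angular momentum grows exactly as the integral of the external torques, which is a convenient sanity check on such a model.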
Directory of Open Access Journals (Sweden)
Giovanni M. Marchetti
2006-02-01
We describe some functions in the R package ggm to derive from a given Markov model, represented by a directed acyclic graph, different types of graphs induced after marginalizing over and conditioning on some of the variables. The package has a few basic functions that find the essential graph, the induced concentration and covariance graphs, and several types of chain graphs implied by the directed acyclic graph (DAG) after grouping and reordering the variables. These functions can be useful to explore the impact of latent variables or of selection effects on a chosen data generating model.
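The ggm functions above are R code; as a language-neutral illustration of the kind of graph operation involved, here is the moral graph construction (connect co-parents, drop edge directions) that underlies several of the induced undirected graphs. This is a generic sketch, not a port of ggm.

```python
# Moralization of a DAG: keep every edge undirected and "marry" the parents
# of each node. DAG is given as {node: set_of_parents}.

def moral_graph(dag):
    edges = set()
    for child, parents in dag.items():
        for p in parents:
            edges.add(frozenset((p, child)))          # original edge, undirected
        ps = sorted(parents)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                edges.add(frozenset((ps[i], ps[j])))  # marry co-parents
    return edges

# Collider a -> c <- b: moralization adds the a-b edge.
dag = {"a": set(), "b": set(), "c": {"a", "b"}}
edges = moral_graph(dag)
```

The added a-b edge is exactly why marginal independence between a and b is lost once c is conditioned on, the kind of effect the package is designed to explore.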
THE INVESTMENT MODEL OF THE CONSTRUCTION OF PUBLIC OBJECTS
Directory of Open Access Journals (Sweden)
Reperger Šandor
2009-11-01
One of the possible models for the construction and use of sports objects, especially indoor facilities (sports centres, halls, swimming pools, shooting ranges and others), is cooperation between the public and private sector through the PPP (Public-Private Partnership) investment model. PPP construction is a new form of procuring civil works, already known in the developed countries, in which the tasks of planning, construction, operation and financing are carried out by the private sector within a precisely elaborated cooperation with the state. The state engages the private sector to administer the civil works. Through public advertisements and tenders it finds investors who undertake to administer certain public works, by themselves or with the help of project partners, using their own resources (with 60-85% bank loans), and to secure the conditions for providing certain services (by using the objects, halls, etc.) until the agreed deadline expires. The essence of PPP construction is that an investor from the private sector, chosen through a tender, realizes the project using its own means. The object becomes the property of the investor, who secures the regular functioning of the object with exclusive rights. The income from operation belongs to the investor; in return, the costs of operating and maintaining the object, as well as the costs of personnel and public utilities, are the responsibility of the investor. Public use of the object is ensured in that the authorised ministry and the selected partner, in an agreement on the realization and operation of the object, precisely define the maintenance period and the duration of the services provided in the social interest. For the time specified in the agreement the investor does not charge precisely defined users for general and specific services. As Serbia, with all its
MODELING OF CONVECTIVE STREAMS IN PNEUMOBASIC OBJECTS (Part 2
Directory of Open Access Journals (Sweden)
B. M. Khroustalev
2015-01-01
The article presents modeling for the investigation of aerodynamic processes over site areas (including groups of complex construction works) for different regimes of droplet and wind streams and temperature conditions, and within complex construction works (for different regimes of heating and ventilation). Different programs were developed for solving innovation problems in the field of heat and mass exchange in the three-dimensional pressure-speed-temperature space of objects. The field of use of pneumobasic objects includes structures and roofs for tennis courts, hockey pitches and swimming pools, as well as exhibition buildings, circus buildings, cafes, aqua parks, studios, mobile objects for medical purposes, hangars, garages, construction sites, service stations, etc. Advantages of such objects are the possibility and simplicity of repeated installation and demolition. Their large-scale implementation is determined by the temperature-moisture conditions under the shells. Analytical and computational research, and field studies of the thermodynamic parameters of heat and mass exchange and of the multifactorial processes of air in pneumobasic objects and their shells, across a wide range of climatic air parameters (January - December in the Republic of Belarus, and at many geographical latitudes in many countries), have shown that the potential for optimizing wind loads, heat flows and acoustic effects is practically unlimited (sports, residential, industrial, warehouse and military-technical units (tanks, airplanes, etc.)). In the modeling of convective flows in pneumobasic objects (Part 1), processes with higher dynamic parameters of the air flow were considered for the characteristic pneumobasic object; the velocity, temperature and pressure fields were calculated for air entry speeds through the inflow holes of up to 5 m/sec at time points of 20, 100, 200 and 400 sec. The calculation was performed using the developed mathematical
Practical Implementation of a Graphics Turing Test
DEFF Research Database (Denmark)
Borg, Mathias; Johansen, Stine Schmieg; Thomsen, Dennis Lundgaard
2012-01-01
We present a practical implementation of a variation of the Turing Test for realistic computer graphics. The test determines whether virtual representations of objects appear as real as genuine objects. Two experiments were conducted wherein a real object and a similar virtual object is presented...... graphics. Based on the results from these experiments, future versions of the Graphics Turing Test could ease the restrictions currently necessary in order to test object telepresence under more general conditions. Furthermore, the test could be used to determine the minimum requirements to achieve object...
Green, Phil J.
1998-09-01
The graphic arts industry is increasingly reliant on telecommunications for the transfer of digital data for media production. There are, however, many other aspects of the business process between customers and suppliers that are suited to network-based interaction. The transaction between customer and producer can be separated into four data streams: briefing, content creation, production, and approval. Each of these data streams has specific requirements which lead to a matrix of needs for the different parties in the transaction. These needs are reviewed and proposals made for meeting them through a network service dedicated to the graphic arts. British Telecom, MCI and Scitex are currently beta-testing a service based on these proposals, known as the Digital Graphic Network. Once a managed network service is in place, it is possible to extend it to include a range of other services and third-party interactions, such as automated transfer of media production objects such as high-resolution images, fonts, color profiles, etc. from third-party content providers. The opportunities for users and third-party developers to develop a custom interface between the network and internal production processes and monitoring systems using open systems based on the Java language are described.
Wind field and trajectory models for tornado-propelled objects
International Nuclear Information System (INIS)
Anon
1978-01-01
This report contains the results of the second phase of a research program which has as its objective the development of a mathematical model to predict the trajectory of tornado-borne objects postulated to be in the vicinity of nuclear power plants. An improved tornado wind field model satisfies the no-slip ground boundary condition of fluid mechanics and includes the functional dependence of eddy viscosity with altitude. Sub-scale wind tunnel data are obtained for all of the missiles currently specified for nuclear plant design. Confirmatory full-scale data are obtained for a 12-inch pipe and automobile. The original six-degree-of-freedom trajectory model is modified to include the improved wind field and increased capability as to body shapes and inertial characteristics that can be handled. The improved trajectory model is used to calculate maximum credible speeds, which for all of the heavy missiles are considerably less than those currently specified for design. Equivalent coefficients for use in three-degree-of-freedom models are developed and the sensitivity of range and speed to various trajectory parameters for the 12-inch diameter pipe is examined
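The trajectory models discussed above reduce, in their simplest three-degree-of-freedom form, to a point mass with aerodynamic drag relative to a prescribed wind field. The sketch below is a toy analogue in that spirit; the drag coefficient, wind function, and release conditions are illustrative assumptions, not the report's wind field model or data.

```python
# Toy 3-DOF trajectory: point mass with quadratic drag relative to the air.

def trajectory(pos, vel, wind, cda_over_m, dt, n_steps, g=9.81):
    """pos, vel: [x, y, z] (z up); wind(pos) -> air velocity;
    cda_over_m lumps rho*Cd*A/(2m) into one illustrative constant."""
    path = [tuple(pos)]
    for _ in range(n_steps):
        w = wind(pos)
        rel = [vel[i] - w[i] for i in range(3)]   # velocity relative to the air
        speed = sum(c * c for c in rel) ** 0.5
        for i in range(3):
            drag = -cda_over_m * speed * rel[i]   # quadratic drag opposes rel. motion
            grav = -g if i == 2 else 0.0
            vel[i] += (drag + grav) * dt
            pos[i] += vel[i] * dt
        path.append(tuple(pos))
    return path

# An object released at rest at 50 m in a uniform 40 m/s horizontal wind is
# accelerated downwind while it falls.
path = trajectory(pos=[0.0, 0.0, 50.0], vel=[0.0, 0.0, 0.0],
                  wind=lambda p: (40.0, 0.0, 0.0),
                  cda_over_m=0.01, dt=0.05, n_steps=100)
```

A six-degree-of-freedom model of the kind the report develops would add body orientation and aerodynamic moments on top of this translational core.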
PIM Pedagogy: Toward a Loosely Unified Model for Teaching and Studying Comics and Graphic Novels
Carter, James B.
2015-01-01
The article debuts and explains "PIM" pedagogy, a construct for teaching comics at the secondary- and post-secondary levels and for deep reading/studying comics. The PIM model for considering comics is actually based in major precepts of education studies, namely constructivist foundations of learning, and loosely unifies constructs…
Using graphic organizers in intercultural education
Liliana Ciascai
2009-01-01
Graphic organizers are instruments for representing, illustrating and modeling information. In educational practice they are used for building and systematizing knowledge. Graphic organizers are instruments that mostly address the visual learning style, but their use is beneficial to all learners. In this paper we illustrate the use of graphic organizers in intercultural education. We present a set of graphic organizers used in the scientific literature and then we describe te...
Graphical surface-vegetation-atmosphere transfer (SVAT) model as a pedagogical and research tool
Gillies, Robert R.; Carlson, Toby N.; Ripley, David A.J.
1998-01-01
This paper considers, by example, the use of a Surface-Vegetation-Atmosphere Transfer (SVAT), Atmospheric Boundary Layer (ABL) model designed as a pedagogical tool. The goal of the computer software and the approach is to improve the efficiency and effectiveness of communicating often complex, mathematically based disciplines (e.g., micrometeorology, land surface processes) to the non-specialist interested in studying problems involving interactions between vegetation and the atmosphere and,...
Quantum-assisted learning of graphical models with arbitrary pairwise connectivity
Realpe-Gómez, John; Benedetti, Marcello; Biswas, Rupak; Perdomo-Ortiz, Alejandro
Mainstream machine learning techniques rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful machine learning models. Here we show how to surpass this 'curse of limited connectivity' bottleneck and illustrate our findings by training probabilistic generative models with arbitrary pairwise connectivity on a real dataset of handwritten digits and two synthetic datasets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding Boltzmann-like distribution. Therefore, the need to infer the effective temperature at each iteration is avoided, speeding up learning, and the effect of noise in the control parameters is mitigated, improving accuracy. This work was supported in part by NASA, AFRL, ODNI, and IARPA.
Formal Model for Data Dependency Analysis between Controls and Actions of a Graphical User Interface
Directory of Open Access Journals (Sweden)
SKVORC, D.
2012-02-01
End-user development is an emerging computer science discipline that provides programming paradigms, techniques, and tools suitable for users not trained in software engineering. One of the techniques that allow ordinary computer users to develop their own applications without the need to learn a classic programming language is GUI-level programming based on programming-by-demonstration. To build wizard-based tools that assist users in application development and to verify the correctness of user programs, a computer-supported method for GUI-level data dependency analysis is necessary. Therefore, a formal model for GUI representation is needed. In this paper, we present a finite state machine for modeling the data dependencies between GUI controls and GUI actions. Furthermore, we present an algorithm for automatic construction of a finite state machine for an arbitrary GUI application. We show that the proposed state aggregation scheme successfully manages state explosion in the state machine construction algorithm, which makes the model applicable for applications with complex GUIs.
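A minimal way to picture the model above: the machine's state is the set of GUI controls currently holding data, and an action's transition is enabled only when the controls it reads are all filled. The sketch below is an illustrative simplification of that idea, not the paper's formal construction; all control and action names are invented.

```python
# States = subsets of filled controls; actions fire only when every control
# they depend on holds data.

class GuiDependencyModel:
    def __init__(self):
        self.filled = set()        # controls currently holding data (the state)
        self.actions = {}          # action name -> set of required controls

    def declare_action(self, action, requires):
        self.actions[action] = set(requires)

    def fill(self, control):
        self.filled.add(control)   # state transition: a control receives data

    def enabled_actions(self):
        # An action is enabled exactly when all controls it reads are filled.
        return {a for a, req in self.actions.items() if req <= self.filled}

m = GuiDependencyModel()
m.declare_action("submit", requires={"name_field", "email_field"})
m.declare_action("clear", requires=set())
m.fill("name_field")
before = m.enabled_actions()       # only "clear" is enabled
m.fill("email_field")
after = m.enabled_actions()        # both actions enabled
```

Aggregating states that enable the same action set, as the paper's scheme does, is what keeps the 2^n subset space tractable.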
Selecting high-dimensional mixed graphical models using minimal AIC or BIC forests
Directory of Open Access Journals (Sweden)
Labouriau Rodrigo
2010-01-01
Full Text Available Abstract Background Chow and Liu showed that the maximum likelihood tree for multivariate discrete distributions may be found using a maximum weight spanning tree algorithm, for example Kruskal's algorithm. The efficiency of the algorithm makes it tractable for high-dimensional problems. Results We extend Chow and Liu's approach in two ways: first, to find the forest optimizing a penalized likelihood criterion, for example AIC or BIC, and second, to handle data with both discrete and Gaussian variables. We apply the approach to three datasets: two from gene expression studies and the third from a genetics of gene expression study. The minimal BIC forest supplements a conventional analysis of differential expression by providing a tentative network for the differentially expressed genes. In the genetics of gene expression context the method identifies a network approximating the joint distribution of the DNA markers and the gene expression levels. Conclusions The approach is generally useful as a preliminary step towards understanding the overall dependence structure of high-dimensional discrete and/or continuous data. Trees and forests are unrealistically simple models for biological systems, but can provide useful insights. Uses include the following: identification of distinct connected components, which can be analysed separately (dimension reduction); identification of neighbourhoods for more detailed analyses; use as initial models for search algorithms with a larger search space, for example decomposable models or Bayesian networks; and identification of interesting features, such as hub nodes.
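The Chow-Liu step described above can be sketched as a Kruskal-style maximum-weight spanning tree over pairwise mutual-information weights (the weights below are illustrative, not from the paper; the AIC/BIC forest variant would simply discard edges whose weight falls below the per-edge penalty before running Kruskal):

```python
# Kruskal's algorithm, run on descending weights, yields the maximum-weight
# spanning tree; with mutual-information edge weights this is the Chow-Liu
# maximum-likelihood tree.
def max_weight_spanning_tree(n_vars, edges):
    """edges: list of (weight, u, v); returns the tree as a list of (u, v)."""
    parent = list(range(n_vars))

    def find(x):                    # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):   # largest weights first
        ru, rv = find(u), find(v)
        if ru != rv:                # edge joins two components: no cycle
            parent[ru] = rv
            tree.append((u, v))
    return tree

# toy pairwise mutual-information weights for 4 variables
edges = [(0.9, 0, 1), (0.8, 1, 2), (0.1, 0, 2), (0.7, 2, 3), (0.2, 1, 3)]
tree = max_weight_spanning_tree(4, edges)
```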
Kriston, Levente; Melchior, Hanne; Hergert, Anika; Bergelt, Corinna; Watzke, Birgit; Schulz, Holger; von Wolff, Alessa
2011-01-01
The aim of our study was to develop a graphical tool that can be used in addition to standard statistical criteria to support decisions on the number of classes in explorative categorical latent variable modeling for rehabilitation research. Data from two rehabilitation research projects were used. In the first study, a latent profile analysis was…
Learning-based stochastic object models for characterizing anatomical variations
Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua
2018-03-01
It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer-simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.
International Nuclear Information System (INIS)
Aoki, Yukimasa; Onogi, Yuzou; Nakagawa, Keiichi; Akanuma, Atsuo; Iio, Masahiro; Sakata, Kouichi; Karasawa, Katsuyuki; Kaneko, Katsutaro.
1988-01-01
Both the physical dose distribution and the radiobiological effect are required for radiotherapy optimization; however, they tend to be handled separately, with the former processed as a dose distribution and the latter estimated quantitatively as point values. The two parameters are combined in the iso-TDF map, where the varying responses of organs are not considered. In this study, iso-effect distributions and dose distributions are compared to investigate the optimization of radiation treatment. A computer program was developed to construct an iso-effect image from the response values of internal organs and a given physical dose, and to show the integral value over a region of interest. The NSD-TDF model and the LQ model are adopted, and presumptive cases of esophageal cancer and pituitary tumor are employed. The results show that the effect on normal tissues increases with a one-port-per-day technique compared with a multiple-ports-per-day technique, the latter being preferable in both cases. A proper weighting function will be required to establish a generalized parameter for both spatial and chronological optimization by integrating dose and biological effect. The volume effect in biological responses will be addressed with a three-dimensional implementation of this program. (author)
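For reference, the LQ model mentioned above is commonly summarized by the biologically effective dose, BED = n·d·(1 + d/(α/β)); a minimal sketch with illustrative fractionation numbers (not from this study):

```python
# Standard LQ-model biologically effective dose (BED), the usual way to
# compare fractionation schemes such as the one-port-per-day vs.
# multiple-ports-per-day plans discussed above.
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """n fractions of d Gy each; alpha_beta is the tissue alpha/beta ratio in Gy."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# 30 x 2 Gy: late-responding tissue (alpha/beta = 3 Gy) vs. tumour (10 Gy)
late = bed(30, 2.0, 3.0)     # 100.0 Gy_3
tumour = bed(30, 2.0, 10.0)  # 72.0 Gy_10
```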
Peebles, David; Cheng, Peter C H
2003-01-01
We report an investigation into the processes involved in a common graph-reading task using two types of Cartesian graph. We describe an experiment and eye movement study, the results of which show that optimal scan paths assumed in the task analysis approximate the detailed sequences of saccades made by individuals. The research demonstrates the computational inequivalence of two sets of informationally equivalent graphs and illustrates how the computational advantages of a representation outweigh factors such as user unfamiliarity. We describe two models, using the ACT rational perceptual motor (ACT-R/PM) cognitive architecture, that replicate the pattern of observed response latencies and the complex scan paths revealed by the eye movement study. Finally, we outline three guidelines for designers of visual displays: Designers should (a) consider how different quantities are encoded within any chosen representational format, (b) consider the full range of alternative varieties of a given task, and (c) balance the cost of familiarization with the computational advantages of less familiar representations. Actual or potential applications of this research include informing the design and selection of appropriate visual displays and illustrating the practice and utility of task analysis, eye tracking, and cognitive modeling for understanding interactive tasks with external representations.
Modeling and Simulation of Grasping of Deformable Objects
DEFF Research Database (Denmark)
Fugl, Andreas Rune
The purpose of this thesis is to address the modeling and simulation of deformable objects, as applied to robotic grasping and manipulation. The main contributions of this work are: an evaluation of 3D linear elasticity used for robot grasping, as implemented by a Finite Difference Method supporting regular and adaptively refined grids; a stable and accurate non-linear 2D beam model supporting large deformations and difficult boundary effects; a method for the estimation of material properties and pose from depth and colour images; a method for the learning of Peg-in-Hole actions; an outline for Laying-Down actions; as well as a thorough evaluation of the accuracy of the models under large deformations.
Objective Characterization of Snow Microstructure for Microwave Emission Modeling
Durand, Michael; Kim, Edward J.; Molotch, Noah P.; Margulis, Steven A.; Courville, Zoe; Malzler, Christian
2012-01-01
Passive microwave (PM) measurements are sensitive to the presence and quantity of snow, a fact that has long been used to monitor snowcover from space. In order to estimate total snow water equivalent (SWE) within PM footprints (on the order of approx 100 sq km), it is prerequisite to understand snow microwave emission at the point scale and how microwave radiation integrates spatially; the former is the topic of this paper. Snow microstructure is one of the fundamental controls on the propagation of microwave radiation through snow. Our goal in this study is to evaluate the prospects for driving the Microwave Emission Model of Layered Snowpacks with objective measurements of snow specific surface area (S) to reproduce measured brightness temperatures. This eliminates the need to treat the grain size as a free-fit parameter.
Advertising Model of Residential Real Estate Object in Lithuania
Directory of Open Access Journals (Sweden)
Jelena Mazaj
2012-07-01
Full Text Available Since the year 2000, during the period of economic growth, the real estate market has been rapidly expanding. During this period, advertising of real estate objects was implemented using one set of similar channels (press advertising, Internet advertising, leaflets with contact information of real estate agents, and others); however, the start of the economic recession intensified competition in the market and forced companies to search for new advertising means or to diversify the advertising package. The article presents real estate property as a product, one of the marketing components (advertising), conclusions and suggestions based on conducted surveys, and a model for advertising residential real estate objects. Article in Lithuanian
Learning optimized features for hierarchical models of invariant object recognition.
Wersing, Heiko; Körner, Edgar
2003-07-01
There is an ongoing debate over the capabilities of hierarchical neural feedforward architectures for performing real-world invariant object recognition. Although a variety of hierarchical models exists, appropriate supervised and unsupervised learning methods are still an issue of intense research. We propose a feedforward model for recognition that shares components like weight sharing, pooling stages, and competitive nonlinearities with earlier approaches but focuses on new methods for learning optimal feature-detecting cells in intermediate stages of the hierarchical network. We show that principles of sparse coding, which were previously mostly applied to the initial feature detection stages, can also be employed to obtain optimized intermediate complex features. We suggest a new approach to optimize the learning of sparse features under the constraints of a weight-sharing or convolutional architecture that uses pooling operations to achieve gradual invariance in the feature hierarchy. The approach explicitly enforces symmetry constraints like translation invariance on the feature set. This leads to a dimension reduction in the search space of optimal features and allows determining more efficiently the basis representatives, which achieve a sparse decomposition of the input. We analyze the quality of the learned feature representation by investigating the recognition performance of the resulting hierarchical network on object and face databases. We show that a hierarchy with features learned on a single object data set can also be applied to face recognition without parameter changes and is competitive with other recent machine learning recognition approaches. To investigate the effect of the interplay between sparse coding and processing nonlinearities, we also consider alternative feedforward pooling nonlinearities such as presynaptic maximum selection and sum-of-squares integration. The comparison shows that a combination of strong competitive
Achieving interoperability for metadata registries using comparative object modeling.
Park, Yu Rang; Kim, Ju Han
2010-01-01
Achieving data interoperability between organizations relies upon agreed meaning and representation (metadata) of data. For managing and registering metadata, many organizations have built metadata registries (MDRs) in various domains based on the international standard for the MDR framework, ISO/IEC 11179. Following this trend, two public MDRs in the biomedical domain have been created, the United States Health Information Knowledgebase (USHIK) and the cancer Data Standards Registry and Repository (caDSR), from the U.S. Department of Health & Human Services and the National Cancer Institute (NCI), respectively. Most MDRs are implemented with indiscriminate extensions to satisfy organization-specific needs and to overcome the semantic and structural limitations of ISO/IEC 11179. As a result, it is difficult to achieve interoperability among multiple MDRs. In this paper, we propose an integrated metadata object model for achieving interoperability among multiple MDRs. To evaluate this model, we developed an XML Schema Definition (XSD)-based metadata exchange format. We created an XSD-based metadata exporter, supporting both the integrated metadata object model and organization-specific MDR formats.
Miranda, Diogo Julien; Chao, Lung Wen
2018-03-01
Preliminary studies suggest the need for a global vision in academic reform, leading to the re-invention of education. This would include problem-based education using transversal topics and the development of thinking skills, social interaction, and information-processing skills. We aimed to develop a new educational model in health with modular components to be broadcast and applied as a tele-education course. We developed a systematic model based on a "Skills and Goals Matrix" to adapt scientific content into fictional screenplays, three-dimensional (3D) computer graphics of the human body, and interactive documentaries. We selected 13 topics based on youth vulnerabilities in Brazil to be disseminated through a television show with 15 episodes. We developed scientific content for each theme, naturally inserting it into screenplays, together with 3D sequences and interactive documentaries. The modular structure was then adapted to a distance-learning course. The television show was broadcast on national television for two consecutive years to an estimated audience of 30 million homes, and ever since on an Internet Protocol Television (IPTV) channel. It was also reorganized as a tele-education course for 2 years, reaching 1,180 subscriptions from all 27 Brazilian states, resulting in 240 graduates. Positive results indicate the feasibility, acceptability, and effectiveness of a model of modular entertainment audio-visual productions using integrated health and education concepts. This structure also allowed the model to be interconnected with other sources and applied as a tele-education course, educating, informing, and stimulating behavior change. Future works should reinforce this joint structure of telehealth, communication, and education.
Modeling of Geological Objects and Geophysical Fields Using Haar Wavelets
Directory of Open Access Journals (Sweden)
A. S. Dolgal
2014-12-01
Full Text Available This article presents the application of the fast wavelet transform with basic Haar functions to the modeling of structural surfaces and geophysical fields characterized by fractal features. The multiscale representation of experimental data significantly reduces the cost of processing large data volumes and improves the quality of interpretation. The paper presents algorithms for the sectionally prismatic approximation of geological objects, for preliminary estimation of the number of equivalent sources for the analytical approximation of fields, and for determination of rock magnetization in the upper part of the geological section.
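The fast Haar transform underlying such multiscale representations splits a signal into pairwise averages (the coarse trend) and pairwise differences (the detail); a minimal one-level sketch:

```python
# One level of the fast Haar transform. Thresholding small detail
# coefficients is what makes the multiscale representation cheap for
# large structural-surface and field grids.
def haar_step(signal):
    """signal length must be even; returns (approximation, detail)."""
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for s, d in zip(approx, detail):
        out += [s + d, s - d]   # exact reconstruction
    return out

a, d = haar_step([4.0, 2.0, 5.0, 5.0])   # a = [3.0, 5.0], d = [1.0, 0.0]
```

Recursing `haar_step` on the approximation yields the full multiresolution pyramid; for 2D surfaces the same step is applied along rows and then columns.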
Analysis and simulation of industrial distillation processes using a graphical system design model
Boca, Maria Loredana; Dobra, Remus; Dragos, Pasculescu; Ahmad, Mohammad Ayaz
2016-12-01
The separation column used for experimentation can be configured in two ways: first, as a cascade of two columns of different diameters, placed one in the extension of the other; and second, as a single column with a set diameter [1], [2]. The column separates the carbon isotopes based on the cryogenic distillation of pure carbon monoxide, which is fed at a constant flow rate as a gas through the feeding system [1], [2]. Based on numerical control systems used in virtual instrumentation, simulations of the distillation process were performed in order to obtain the isotope 13C at high concentrations. It is proposed that this installation be controlled using a data acquisition tool and professional software that processes information from the isotopic column with a dedicated logic algorithm. The classical isotopic column will be controlled automatically, and information about the main parameters will be monitored and properly displayed by a single program. Taking into consideration the very low operating temperature, an efficient thermal isolation vacuum jacket is necessary. Since the "elementary separation ratio" [2] is very close to unity, in order to raise the (13C) isotope concentration up to the desired level a permanent counter-current of the liquid and gaseous phases of the carbon monoxide is created by the main elements of the equipment: the boiler at the bottom of the column and the condenser at the top.
Functional information technology in geometry-graphic training of engineers
Directory of Open Access Journals (Sweden)
Irina D. Stolbova
2017-01-01
Full Text Available In the last decade, information technology has fundamentally changed design activity and made significant adjustments to the development of design documentation. Electronic drawings and 3D models have appeared in place of paper drawings and the traditional form of design documentation. Geometric 3D modeling technology has replaced graphic design technology. Standards for electronic models have been introduced, and electronic prototypes and 3D printing contribute to the spread of rapid prototyping technologies. In these conditions, the task of finding a new learning technology, corresponding to the level of development of information technologies and meeting the requirements of modern design and manufacturing, comes to the fore. The purpose of this paper is to analyze the capabilities of information technology in the formation of the geometrical-graphic competences that underlie the graphic training of students at a technical university. Traditionally, the basic graphic training of students in junior university courses consisted of the consecutive study of descriptive geometry, engineering graphics, and computer graphics. Today an integrative approach is relevant, in which the role of computer graphics changes considerably: it is not only an object of study but also a learning tool, the core of students' graphic training, and an efficient mechanism for the development of students' spatial thinking. The role of instrumental training of students in the wide use of CAD systems increases in the solution of educational problems and in the implementation of project tasks, which corresponds to the modern requirements of the professional work of the designer-constructor. In this paper, the following methods are used: system analysis, synthesis, and simulation. A general geometric-graphic training model for students, innovation-oriented and based on the use of a wide range of computer technology, is developed.
Directory of Open Access Journals (Sweden)
K S Mwitondi
2013-05-01
Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.
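The Youden Index used above to assess model performance is J = sensitivity + specificity − 1, maximized over thresholds to pick an operating point on the ROC curve; a minimal sketch with illustrative confusion-matrix counts:

```python
# Youden's J statistic for a binary classifier at one operating point.
# J ranges from 0 (no better than chance) to 1 (perfect separation).
def youden_index(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity + specificity - 1

# illustrative counts, e.g. one threshold of a decision-tree classifier
j = youden_index(tp=40, fn=10, tn=45, fp=5)   # 0.8 + 0.9 - 1 = 0.7
```

Sweeping the classifier's threshold and keeping the maximum J is the usual way to choose the optimal ROC cut-point that the abstract's comparison relies on.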
An XML Driven Graphical User Interface and Application Management Toolkit
International Nuclear Information System (INIS)
White, Greg R
2002-01-01
In the past, the features of a user interface were limited by those available in the existing graphical widgets it used. Now, improvements in processor speed have fostered the emergence of interpreted languages, in which the appropriate method to render a given data object can be loaded at runtime. XML can be used to precisely describe the association of data types with their graphical handling (beans), and Java provides an especially rich environment for programming the graphics. We present a graphical user interface builder based on Java Beans and XML, in which the graphical screens are described textually (in files or a database) in terms of their screen components. Each component may be a simple text readback or a complex plot. The programming model provides for dynamic data pertaining to a component to be forwarded, synchronously or asynchronously, to the appropriate handler, which may be a built-in method or a complex applet. This work was initially motivated by the need to move the legacy VMS display interface of the SLAC Control Program to another platform while preserving all of its existing functionality. However, the model gives us a powerful and generic system for adding new kinds of graphics, such as Matlab; data sources, such as EPICS; middleware, such as AIDA [1]; and transport, such as XML and SOAP. The system will also include a management console, which will be able to report on the present usage of the system, for instance who is running it, where, and connected to which channels.
WE-E-BRE-05: Ensemble of Graphical Models for Predicting Radiation Pneumonitis Risk
Energy Technology Data Exchange (ETDEWEB)
Lee, S; Ybarra, N; Jeyaseelan, K; El Naqa, I [McGill University, Montreal, Quebec (Canada); Faria, S; Kopek, N [Montreal General Hospital, Montreal, Quebec (Canada)
2014-06-15
Purpose: We propose a prior knowledge-based approach to construct an interaction graph of biological and dosimetric radiation pneumonitis (RP) covariates for the purpose of developing an RP risk classifier. Methods: We recruited 59 NSCLC patients who received curative radiotherapy with a minimum of 6 months of follow-up. 16 RP events were observed (CTCAE grade ≥2). Blood serum was collected from every patient before (pre-RT) and during RT (mid-RT). From each sample the concentrations of the following five candidate biomarkers were taken as covariates: alpha-2-macroglobulin (α2M), angiotensin converting enzyme (ACE), transforming growth factor β (TGF-β), interleukin-6 (IL-6), and osteopontin (OPN). Dose-volumetric parameters were also included as covariates. The number of biological and dosimetric covariates was reduced by a variable selection scheme implemented by L1-regularized logistic regression (LASSO). The posterior probability distribution of interaction graphs between the selected variables was estimated from the data under literature-based prior knowledge, weighting more heavily the graphs that contain the expected associations. A graph ensemble was formed by averaging the most probable graphs weighted by their posterior, creating a Bayesian Network (BN)-based RP risk classifier. Results: The LASSO selected the following 7 RP covariates: (1) pre-RT concentration level of α2M, (2) α2M level mid-RT/pre-RT, (3) pre-RT IL6 level, (4) IL6 level mid-RT/pre-RT, (5) ACE mid-RT/pre-RT, (6) PTV volume, and (7) mean lung dose (MLD). The ensemble BN model achieved a maximum sensitivity/specificity of 81%/84% and outperformed univariate dosimetric predictors as shown by larger AUC values (0.78∼0.81) compared with MLD (0.61), V20 (0.65) and V30 (0.70). The ensembles obtained by incorporating the prior knowledge improved classification performance for ensemble sizes of 5∼50. Conclusion: We demonstrated a probabilistic ensemble method to detect robust associations between
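The posterior-weighted graph averaging can be sketched as a simple weighted mean of per-graph risk predictions; the predictions and posteriors below are illustrative, not taken from the study:

```python
# Sketch of the ensemble step: average the RP-risk predictions of the most
# probable graphs, each weighted by its (normalized) posterior probability.
# The graphs themselves and their posteriors would come from the
# structure-learning stage described above.
def ensemble_predict(predictions, posteriors):
    """predictions: per-graph risk probabilities for one patient."""
    z = sum(posteriors)   # normalize in case posteriors are unnormalized
    return sum(p * w for p, w in zip(predictions, posteriors)) / z

# three hypothetical graphs voting on one patient's RP risk
risk = ensemble_predict([0.8, 0.6, 0.9], [0.5, 0.3, 0.2])
```

Thresholding the averaged risk then gives the binary classification whose sensitivity and specificity the abstract reports.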
A knowledge discovery object model API for Java
Directory of Open Access Journals (Sweden)
Jones Steven JM
2003-10-01
Full Text Available Abstract Background Biological data resources have become heterogeneous and derive from multiple sources. This introduces challenges in the management and utilization of this data in software development. Although efforts are underway to create a standard format for the transmission and storage of biological data, this objective has yet to be fully realized. Results This work describes an application programming interface (API that provides a framework for developing an effective biological knowledge ontology for Java-based software projects. The API provides a robust framework for the data acquisition and management needs of an ontology implementation. In addition, the API contains classes to assist in creating GUIs to represent this data visually. Conclusions The Knowledge Discovery Object Model (KDOM API is particularly useful for medium to large applications, or for a number of smaller software projects with common characteristics or objectives. KDOM can be coupled effectively with other biologically relevant APIs and classes. Source code, libraries, documentation and examples are available at http://www.bcgsc.ca/bioinfo/software.
Mathematical model of innovative sustainability “green” construction object
Directory of Open Access Journals (Sweden)
Slesarev Michail
2016-01-01
Full Text Available The paper addresses the issue of the sustainability of "green" innovative processes in the interaction between construction activities and the environment. The problem facing today's construction science is stated as the comprehensive integration and automation of natural and artificial intelligence within systems that ensure the environmental safety of construction, based on the innovative sustainability of "green" technologies in the living environment and "green" innovative products. The suggested solution should formalize sustainability models and methods for the interpretation of mathematical optimization problems corresponding to environmentally based innovative process management, adapted to the construction of "green" objects, "green" construction technologies, and "green" innovative materials and structures.
A short-range objective nocturnal temperature forecasting model
Sutherland, R. A.
1980-01-01
A relatively simple, objective, nocturnal temperature forecasting model suitable for freezing and near-freezing conditions has been designed so that a user, presumably a weather forecaster, can put in standard meteorological data at a particular location and receive an hour-by-hour prediction of surface and air temperatures for that location for an entire night. The user has the option of putting in his own estimates of wind speeds and background sky radiation, which are treated as independent variables. An analysis of 141 test runs shows that the model predicts to within 1 C 57.4% of the time for the best cases and to within 3 C for 98.0% of all cases.
Models for predicting objective function weights in prostate cancer IMRT
International Nuclear Information System (INIS)
Boutilier, Justin J.; Lee, Taewoo; Craig, Tim; Sharpe, Michael B.; Chan, Timothy C. Y.
2015-01-01
Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR
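The weighted KNN predictor can be sketched as an inverse-distance-weighted average of the optimal weight vectors of the K nearest patients in feature space (e.g. OV, OVSR, OVSB); the feature and weight values below are illustrative, not from the study:

```python
# Sketch of weighted K-nearest-neighbor regression for objective-function
# weights: neighbors closer in geometric-feature space contribute more to
# the predicted weight vector.
def knn_predict(query, features, weights, k=3, eps=1e-9):
    dists = [sum((q - f) ** 2 for q, f in zip(query, row)) ** 0.5
             for row in features]
    nearest = sorted(range(len(features)), key=lambda i: dists[i])[:k]
    wsum = sum(1 / (dists[i] + eps) for i in nearest)
    n_out = len(weights[0])
    return [sum(weights[i][j] / (dists[i] + eps) for i in nearest) / wsum
            for j in range(n_out)]

# hypothetical (OV, OVSR) features and (bladder, rectum) objective weights
features = [[0.1, 0.2], [0.4, 0.1], [0.9, 0.8]]
weights = [[0.7, 0.3], [0.6, 0.4], [0.2, 0.8]]
pred = knn_predict([0.15, 0.18], features, weights, k=2)
```

Because each training weight vector sums to one, the inverse-distance-weighted average does too, which keeps the predicted objective weights normalized.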
Objective models of compressed breast shapes undergoing mammography
Feng, Steve Si Jia; Patel, Bhavika; Sechopoulos, Ioannis
2013-01-01
Purpose: To develop models of compressed breasts undergoing mammography based on objective analysis, that are capable of accurately representing breast shapes in acquired clinical images and generating new, clinically realistic shapes. Methods: An automated edge detection algorithm was used to catalogue the breast shapes of clinically acquired cranio-caudal (CC) and medio-lateral oblique (MLO) view mammograms from a large database of digital mammography images. Principal component analysis (PCA) was performed on these shapes to reduce the information contained within the shapes to a small number of linearly independent variables. The breast shape models, one of each view, were developed from the identified principal components, and their ability to reproduce the shape of breasts from an independent set of mammograms not used in the PCA, was assessed both visually and quantitatively by calculating the average distance error (ADE). Results: The PCA breast shape models of the CC and MLO mammographic views based on six principal components, in which 99.2% and 98.0%, respectively, of the total variance of the dataset is contained, were found to be able to reproduce breast shapes with strong fidelity (CC view mean ADE = 0.90 mm, MLO view mean ADE = 1.43 mm) and to generate new clinically realistic shapes. The PCA models based on fewer principal components were also successful, but to a lesser degree, as the two-component model exhibited a mean ADE = 2.99 mm for the CC view, and a mean ADE = 4.63 mm for the MLO view. The four-component models exhibited a mean ADE = 1.47 mm for the CC view and a mean ADE = 2.14 mm for the MLO view. Paired t-tests of the ADE values of each image between models showed that these differences were statistically significant (max p-value = 0.0247). Visual examination of modeled breast shapes confirmed these results. Histograms of the PCA parameters associated with the six principal components were fitted with Gaussian distributions. The six
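The PCA shape model's reconstruction step, and the average distance error (ADE) used to score it, can be sketched as follows; the mean contour and principal component below are illustrative, not from the paper:

```python
# A PCA shape is the mean contour plus a weighted sum of principal
# components; ADE is the average Euclidean distance between corresponding
# contour points of two shapes. Shapes are flat [x0, y0, x1, y1, ...] lists.
def reconstruct(mean_shape, components, coeffs):
    out = list(mean_shape)
    for c, comp in zip(coeffs, components):
        out = [o + c * v for o, v in zip(out, comp)]
    return out

def ade(shape_a, shape_b):
    """Average distance error between corresponding (x, y) points."""
    n_pts = len(shape_a) // 2
    return sum(((shape_a[2 * i] - shape_b[2 * i]) ** 2 +
                (shape_a[2 * i + 1] - shape_b[2 * i + 1]) ** 2) ** 0.5
               for i in range(n_pts)) / n_pts

# toy 4-point mean contour and one component that shifts every x by 1
mean_shape = [0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0]
pc1 = [1.0, 0.0] * 4
recon = reconstruct(mean_shape, [pc1], [0.5])
```

Sampling the coefficients from the Gaussian distributions fitted to the PCA parameters, as the abstract describes, is what generates new clinically realistic shapes.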
Objective models of compressed breast shapes undergoing mammography
Energy Technology Data Exchange (ETDEWEB)
Feng, Steve Si Jia [Department of Biomedical Engineering, Georgia Institute of Technology and Emory University and Department of Radiology and Imaging Sciences, Emory University, 1701 Uppergate Drive Northeast, Suite 5018, Atlanta, Georgia 30322 (United States); Patel, Bhavika [Department of Radiology and Imaging Sciences, Emory University, 1701 Uppergate Drive Northeast, Suite 5018, Atlanta, Georgia 30322 (United States); Sechopoulos, Ioannis [Departments of Radiology and Imaging Sciences, Hematology and Medical Oncology and Winship Cancer Institute, Emory University, 1701 Uppergate Drive Northeast, Suite 5018, Atlanta, Georgia 30322 (United States)
2013-03-15
Purpose: To develop models of compressed breasts undergoing mammography based on objective analysis that are capable of accurately representing breast shapes in acquired clinical images and generating new, clinically realistic shapes. Methods: An automated edge detection algorithm was used to catalogue the breast shapes of clinically acquired cranio-caudal (CC) and medio-lateral oblique (MLO) view mammograms from a large database of digital mammography images. Principal component analysis (PCA) was performed on these shapes to reduce the information contained within the shapes to a small number of linearly independent variables. The breast shape models, one for each view, were developed from the identified principal components, and their ability to reproduce the shape of breasts from an independent set of mammograms not used in the PCA was assessed both visually and quantitatively by calculating the average distance error (ADE). Results: The PCA breast shape models of the CC and MLO mammographic views based on six principal components, in which 99.2% and 98.0%, respectively, of the total variance of the dataset is contained, were found to be able to reproduce breast shapes with strong fidelity (CC view mean ADE = 0.90 mm, MLO view mean ADE = 1.43 mm) and to generate new clinically realistic shapes. The PCA models based on fewer principal components were also successful, but to a lesser degree, as the two-component model exhibited a mean ADE = 2.99 mm for the CC view, and a mean ADE = 4.63 mm for the MLO view. The four-component models exhibited a mean ADE = 1.47 mm for the CC view and a mean ADE = 2.14 mm for the MLO view. Paired t-tests of the ADE values of each image between models showed that these differences were statistically significant (max p-value = 0.0247). Visual examination of modeled breast shapes confirmed these results. Histograms of the PCA parameters associated with the six principal components were fitted with Gaussian distributions. The six
Objective models of compressed breast shapes undergoing mammography
International Nuclear Information System (INIS)
Feng, Steve Si Jia; Patel, Bhavika; Sechopoulos, Ioannis
2013-01-01
Purpose: To develop models of compressed breasts undergoing mammography based on objective analysis that are capable of accurately representing breast shapes in acquired clinical images and generating new, clinically realistic shapes. Methods: An automated edge detection algorithm was used to catalogue the breast shapes of clinically acquired cranio-caudal (CC) and medio-lateral oblique (MLO) view mammograms from a large database of digital mammography images. Principal component analysis (PCA) was performed on these shapes to reduce the information contained within the shapes to a small number of linearly independent variables. The breast shape models, one for each view, were developed from the identified principal components, and their ability to reproduce the shape of breasts from an independent set of mammograms not used in the PCA was assessed both visually and quantitatively by calculating the average distance error (ADE). Results: The PCA breast shape models of the CC and MLO mammographic views based on six principal components, in which 99.2% and 98.0%, respectively, of the total variance of the dataset is contained, were found to be able to reproduce breast shapes with strong fidelity (CC view mean ADE = 0.90 mm, MLO view mean ADE = 1.43 mm) and to generate new clinically realistic shapes. The PCA models based on fewer principal components were also successful, but to a lesser degree, as the two-component model exhibited a mean ADE = 2.99 mm for the CC view, and a mean ADE = 4.63 mm for the MLO view. The four-component models exhibited a mean ADE = 1.47 mm for the CC view and a mean ADE = 2.14 mm for the MLO view. Paired t-tests of the ADE values of each image between models showed that these differences were statistically significant (max p-value = 0.0247). Visual examination of modeled breast shapes confirmed these results. Histograms of the PCA parameters associated with the six principal components were fitted with Gaussian distributions. The six
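The PCA shape-modeling procedure described in this abstract can be sketched in a few lines of numpy. Everything below is an illustrative assumption, not the authors' code: the contour layout, the component counts, and the random matrix standing in for real breast edges.

```python
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 100, 50
# Stand-in for real contours: each row is one detected breast edge,
# resampled to a fixed number of (x, y) points and flattened.
shapes = rng.normal(size=(n_shapes, 2 * n_points))

mean_shape = shapes.mean(axis=0)
# SVD of the centered data yields the principal components (rows of Vt).
_, _, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)

def reconstruct(shape, k):
    """Project a flattened contour onto the first k components and back."""
    coeffs = (shape - mean_shape) @ Vt[:k].T
    return mean_shape + coeffs @ Vt[:k]

def ade(a, b):
    """Average distance error: mean Euclidean distance between matched points."""
    return np.linalg.norm(a.reshape(-1, 2) - b.reshape(-1, 2), axis=1).mean()

# The reconstruction residual cannot grow as components are added, mirroring
# the reported drop in ADE from the 2- to the 4- to the 6-component model.
residuals = [np.linalg.norm(shapes[0] - reconstruct(shapes[0], k)) for k in (2, 4, 6)]
assert residuals[0] >= residuals[1] >= residuals[2]
```

New shapes can then be sampled by drawing the k coefficients from the fitted Gaussian distributions mentioned at the end of the abstract.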
Directory of Open Access Journals (Sweden)
Justin D. Brown
2000-11-01
Full Text Available The long-held "shielding cone" model of the through-space NMR shielding effect of a carbon-carbon double bond predicts only the effect of the magnetic anisotropy of the double bond; it ignores other important contributors to the overall shielding. GIAO-SCF and GIAO-MP2 calculations have been performed on a simple model system, methane moved sequentially above ethene or 2-methylpropene. These calculations permit the net NMR shielding surface to be mapped. Based on those results, a new and very different graphical model for predicting the effect of a proton's position relative to a carbon-carbon double bond on its chemical shift is presented.
Polarimetry of Solar System Objects: Observations vs. Models
Yanamandra-Fisher, P. A.
2014-04-01
The overarching goals for the remote sensing and robotic exploration of planetary systems are: (1) understanding the formation of planetary systems and their diversity; and (2) the search for habitability. Since all objects have unique polarimetric signatures, the inclusion of spectrophotopolarimetry as a complementary approach to the standard techniques of imaging and spectroscopy provides insight into the scattering properties of planetary media. Specifically, linear and circular polarimetric signatures of an object arise from different physical processes, and their study proves essential to the characterization of the object. Linear polarization of light reflected by various solar system objects provides insight into the scattering characteristics of atmospheric aerosols and hazes, and into the surficial properties of atmosphereless bodies. Many optically active materials are anisotropic, so that their scattering properties differ along the object's principal axes (such as dichroic or birefringent materials), and are crystalline in structure rather than amorphous (e.g., the presence of olivines and silicates in cometary dust and circumstellar disks, Titan, etc.). Ices (water and other species) are abundant in the solar system, as indicated by near-infrared spectra. Gas giants form outside the frost line (where ices condense), and their satellites and ring systems exhibit signatures of water ice, clathrates, and non-ices (Si, C, Fe) in their NIR spectra and in the spectral dependence of linear polarization. Additionally, the spectral dependence of polarization is important to separate the macroscopic (bulk) properties of the scattering medium from its microscopic (particulate) properties. Circular polarization, on the other hand, is indicative of magnetic fields and of biologically active molecules, necessary for habitability. These applications suffer from a lack of detailed observations, instrumentation, dedicated missions and numerical retrieval methods. With recent discoveries and
Model Management Via Dependencies Between Variables: An Indexical Reasoning in Mathematical Modeling
National Research Council Canada - National Science Library
Rehber, Devrim
1997-01-01
... declarations and formal model definitions. The utilization of the standard graphical screen objects of a graphics-based operating system provides enhanced visualization of models and more cohesive human-computer interaction...
Computational Data Modeling for Network-Constrained Moving Objects
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Speicys, L.; Kligys, A.
2003-01-01
Advances in wireless communications, positioning technology, and other hardware technologies combine to enable a range of applications that use a mobile user's geo-spatial data to deliver online, location-enhanced services, often referred to as location-based services. Assuming that the service users are constrained to a transportation network, this paper develops data structures that model road networks, the mobile users, and stationary objects of interest. The proposed framework encompasses two supplementary road network representations, namely a two-dimensional representation and a graph representation. These capture aspects of the problem domain that are required in order to support the querying that underlies the envisioned location-based services.
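The graph representation mentioned above can be sketched as follows. The names, the toy network, and the edge-plus-offset encoding are our own illustration of the general idea, not the paper's actual data structures: intersections become vertices, road segments become weighted edges, and a network-constrained object is located by an edge and an offset along it rather than by free 2D coordinates.

```python
from dataclasses import dataclass

# Vertex -> {neighbour: segment length}; a tiny undirected toy road network.
road_network = {
    "A": {"B": 4.0, "C": 2.0},
    "B": {"A": 4.0, "C": 1.0},
    "C": {"A": 2.0, "B": 1.0},
}

@dataclass(frozen=True)
class NetworkPosition:
    """Position of a mobile user constrained to the road network."""
    start: str      # edge start vertex
    end: str        # edge end vertex
    offset: float   # distance travelled from start; 0 <= offset <= edge length

# A user halfway along road segment A-B:
user = NetworkPosition(start="A", end="B", offset=2.0)
segment_length = road_network[user.start][user.end]
assert 0.0 <= user.offset <= segment_length
```

Queries such as "nearest stationary object" then reduce to shortest-path distances over the graph instead of Euclidean distances in the plane.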
Detecting new objects and building models with active robot system
Stergaršek Kuzmič, Eva
2010-01-01
An important element of a cognitive robotic system is the ability to detect novel objects and learn their representations, which are suitable for later recognition and manipulation. The basic assumption of our work is that the detection and segmentation of new objects can be facilitated by an active robotic system, which can not only observe the objects but can also manipulate them. Manipulation supports object segmentation and the accumulation of object features, which provides the basis for...
Directory of Open Access Journals (Sweden)
Paulo Cesar Chagas Rodrigues
2017-07-01
Full Text Available Supply chain management, postponement and demand management are operations of strategic importance for the economic success of organizations, in times of economic crisis or not. The objective of this article is to analyze the influence that a mathematical model focused on the management of raw material stocks has in a microenterprise with seasonal demand. The research method adopted was of an applied nature, with a quantitative approach and with an exploratory and descriptive objective. The technical procedures adopted were the bibliographical survey, documentary analysis and mathematical modeling. The development of mathematical models for solving inventory management problems may allow managers to observe deviations in trading methods, as well as to support rapid decisions for possible unforeseen market or economic variability.
Ueda, Teruyuki; Honda, Masao; Horimoto, Katsuhisa; Aburatani, Sachiyo; Saito, Shigeru; Yamashita, Taro; Sakai, Yoshio; Nakamura, Mikiko; Takatori, Hajime; Sunagozaka, Hajime; Kaneko, Shuichi
2013-04-01
Gene expression profiling of hepatocellular carcinoma (HCC) and background liver has been studied extensively; however, the relationship between the gene expression profiles of different lesions has not been assessed. We examined the expression profiles of 34 HCC specimens (17 hepatitis B virus [HBV]-related and 17 hepatitis C virus [HCV]-related) and 71 non-tumor liver specimens (36 chronic hepatitis B [CH-B] and 35 chronic hepatitis C [CH-C]) using an in-house cDNA microarray consisting of liver-predominant genes. Graphical Gaussian modeling (GGM) was applied to elucidate the interactions of gene clusters among the HCC and non-tumor lesions. In CH-B-related HCC, the expression of vascular endothelial growth factor-family signaling and regulation of T cell differentiation, apoptosis, and survival, as well as development-related genes was up-regulated. In CH-C-related HCC, the expression of ectodermal development and cell proliferation, wnt receptor signaling, cell adhesion, and defense response genes was also up-regulated. Many of the metabolism-related genes were down-regulated in both CH-B- and CH-C-related HCC. GGM analysis of the HCC and non-tumor lesions revealed that DNA damage response genes were associated with AP1 signaling in non-tumor lesions, which mediates the expression of many genes in CH-B-related HCC. In contrast, signal transducer and activator of transcription 1 and phosphatase and tensin homolog were associated with early growth response protein 1 signaling in non-tumor lesions, which potentially promotes angiogenesis, fibrogenesis, and tumorigenesis in CH-C-related HCC. Gene expression profiling of HCC and non-tumor lesions revealed the predisposing changes of gene expression in HCC. This approach has potential for the early diagnosis and possible prevention of HCC. Copyright © 2013 Elsevier Inc. All rights reserved.
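Graphical Gaussian modeling, as used in the study above, infers conditional dependencies between genes from partial correlations, which can be read directly off the inverse covariance (precision) matrix. A hedged sketch follows; the synthetic data, the dimensions, and the edge threshold are our illustrative choices, not the study's microarray data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_genes = 200, 6
X = rng.normal(size=(n_samples, n_genes))    # stand-in for expression profiles

prec = np.linalg.inv(np.cov(X, rowvar=False))    # precision matrix
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)                 # partial correlation matrix
np.fill_diagonal(partial, 1.0)

# GGM edges: gene pairs whose dependence, conditioned on all other genes,
# survives a (hypothetical) threshold.
edges = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
         if abs(partial[i, j]) > 0.15]
```

Unlike plain correlation networks, edges here encode direct associations after conditioning on all other measured genes, which is what lets GGM separate interactions of gene clusters between tumor and non-tumor lesions.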
Ebert-Uphoff, Imme; Deng, Yi
2012-10-01
In this paper we introduce a new type of climate network based on temporal probabilistic graphical models. This new method is able to distinguish between direct and indirect connections and thus can eliminate indirect connections in the network. Furthermore, while correlation-based climate networks focus on similarity between nodes, this new method provides an alternative viewpoint by focusing on information flow within the network over time. We build a prototype of this new network utilizing daily values of 500 mb geopotential height over the entire globe during the period 1948 to 2011. The basic network features are presented and compared between boreal winter and summer in terms of intra-location properties that measure local memory at a grid point and inter-location properties that quantify remote impact of a grid point. Results suggest that synoptic-scale, sub-weekly disturbances act as the main information carrier in this network and their intrinsic timescale limits the extent to which a grid point can influence its nearby locations. The frequent passage of these disturbances over storm track regions also uniquely determines the timescale of height fluctuations thus local memory at a grid point. The poleward retreat of synoptic-scale disturbances in boreal summer is largely responsible for a corresponding poleward shift of local maxima in local memory and remote impact, which is most evident in the North Pacific sector. For the NH as a whole, both local memory and remote impact strengthen from winter to summer leading to intensified information flow and more tightly-coupled network nodes during the latter period.
Graphical Turbulence Guidance - Composite
National Oceanic and Atmospheric Administration, Department of Commerce — Forecast turbulence hazards identified by the Graphical Turbulence Guidance algorithm. The Graphical Turbulence Guidance product depicts mid-level and upper-level...
Robust visual tracking of infrared object via sparse representation model
Ma, Junkai; Liu, Haibo; Chang, Zheng; Hui, Bin
2014-11-01
In this paper, we propose a robust tracking method for infrared objects. We introduce an appearance model and sparse representation into the particle filter framework to achieve this goal. The mechanism behind this method is to represent every candidate image patch as a linear combination of bases in the subspace spanned by the target templates. The natural property that, if a candidate image patch is the target, its coefficient vector must be sparse ensures the success of our algorithm. First, the target is indicated manually in the first frame of the video, and the dictionary is constructed from the appearance model of the target templates. Second, candidate image patches are selected in the following frames and their sparse coefficient vectors are calculated via an l1-norm minimization algorithm. According to the sparse coefficient vectors, the right candidate is determined as the target. Finally, the target templates are updated dynamically to cope with appearance change during tracking. This paper also addresses the problems of scale change and rotation of the target during tracking. Theoretical analysis and experimental results show that the proposed algorithm is effective and robust.
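The sparse-coding step can be illustrated concretely. The abstract only says "l1-norm minimization"; here we solve the l1-regularized least-squares problem with a few ISTA (iterative soft-thresholding) iterations, and the template count, patch size, and regularization weight are our own assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.normal(size=(64, 10))                # 10 flattened target templates
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary columns
y = D[:, 3] + 0.01 * rng.normal(size=64)     # candidate patch near template 3

def ista(D, y, lam=0.05, n_iter=500):
    """Minimize 0.5*||D x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    x = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of gradient
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)                # gradient of the quadratic term
        x = x - step * g
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
    return x

coef = ista(D, y)
# A true-target candidate yields a sparse code peaking at the right template.
assert int(np.argmax(np.abs(coef))) == 3
```

In the full tracker this coding would run once per particle, with the best-scoring (sparsest, lowest-residual) candidate taken as the target.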
MODELING OF CONVECTIVE FLOWS IN PNEUMOBASED OBJECTS. Part 1
Directory of Open Access Journals (Sweden)
B. M. Khrustalyov
2014-01-01
Full Text Available A computer modeling process for three-dimensional forced convection, proceeding from computation of the thermodynamic parameters of pneumo-based buildings (pneumo-supported structures), is presented. The mathematical model and numerical computation method for the temperature and velocity fields and the pressure profile in the object are developed using the SolidWorks package and implemented by grid methods in specialized software. The Navier–Stokes, Clapeyron–Mendeleev, continuity and thermal-conductivity equations are used to calculate parameters in a building with four supply and exhaust channels. The differential equations are represented by systems of algebraic equations, the initial-boundary conditions are replaced by difference conditions for mesh functions, and their solutions are obtained by algebraic operations. The article demonstrates that in pneumo-based buildings the convective and heat flows near the surfaces have the same structure as in unlimited space, but in single and multiple shells (envelopes) circulation lines arise, whose geometrical sizes depend on the thermal-physical characteristics of the gas (air) in the envelopes and on the radiative interaction of the heated envelope surfaces with the atmosphere, the earth surface, and neighboring buildings. Field surveys of pneumo-based buildings of different purposes were carried out in Minsk and in other cities of Belarus and Russia, covering the temperature fields of the external and internal surfaces of the air envelopes, relative humidity, thermal (heat) flows, radiation characteristics and others. The results of the research work are illustrated with diagrams of temperature, velocity, density and pressure as functions of coordinates and time.
International Nuclear Information System (INIS)
Wang, C.C.; Booth, A.W.; Chen, Y.M.; Botlo, M.
1993-06-01
At the Superconducting Super Collider Laboratory (SSCL), a tool called DAQSIM has been developed to study the behavior of Data Acquisition (DAQ) systems. This paper reports on and discusses the graphics used in DAQSIM. DAQSIM graphics includes graphical user interface (GUI), animation, debugging, and control facilities. DAQSIM graphics not only provides a convenient DAQ simulation environment, it also serves as an efficient manager in simulation development and verification.
Interactive computer graphics and its role in control system design of large space structures
Reddy, A. S. S. R.
1985-01-01
This paper attempts to show the relevance of interactive computer graphics in the design of control systems that maintain the attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, from modeling the dynamics of the physical model through modal analysis to control system design methodology, are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures, such as free-free beams and free-free plates, are used to demonstrate the complexity of the control system design and the effectiveness of interactive computer graphics.
Object-oriented process dose modeling for glovebox operations
International Nuclear Information System (INIS)
Boerigter, S.T.; Fasel, J.H.; Kornreich, D.E.
1999-01-01
The Plutonium Facility at Los Alamos National Laboratory supports several defense and nondefense-related missions for the country by performing fabrication, surveillance, and research and development for materials and components that contain plutonium. Most operations occur in rooms with one or more arrays of gloveboxes connected to each other via trolley gloveboxes. Minimizing the effective dose equivalent (EDE) is a growing concern as a result of steadily declining allowable dose limits and a growing general awareness of safety in the workplace. In general, the authors distinguish three components of a worker's total EDE: the primary EDE, the secondary EDE, and the background EDE. A particular background source of interest is the nuclear materials vault. The distinction between sources inside and outside of a particular room is arbitrary, with the underlying assumption that building walls and floors provide enough shielding to justify including sources in other rooms in the background category. Los Alamos has developed the Process Modeling System (ProMoS) primarily for performing process analyses of nuclear operations. ProMoS is an object-oriented, discrete-event simulation package that has been used to analyze operations at Los Alamos and at proposed facilities such as the new fabrication facilities for the Complex-21 effort. In the past, crude estimates of the process dose (the EDE received when a particular process occurred), room dose (the EDE received when a particular process occurred in a given room), and facility dose (the EDE received when a particular process occurred in the facility) were used to obtain an integrated EDE for a given process. Modifications to the ProMoS package were made to utilize secondary dose information and dose modeling to enhance the process modeling efforts.
Louis, Linda
2013-01-01
This article reports on the most recent phase of an ongoing research program that examines the artistic graphic representational behavior and paintings of children between the ages of four and seven. The goal of this research program is to articulate a contemporary account of artistic growth and to illuminate how young children's changing…
A control model for object virtualization in supply chain management
Verdouw, C.N.; Beulens, A.J.M.; Reijers, H.A.; van der Vorst, J.G.A.J.
2015-01-01
Due to the emergence of the Internet of Things, supply chain control can increasingly be based on virtual objects instead of on the direct observation of physical objects. Object virtualization allows the decoupling of control activities from the handling and observing of physical products and
12th International Conference on Computer Graphics Theory and Applications
2017-01-01
The International Conference on Computer Graphics Theory and Applications aims at becoming a major point of contact between researchers, engineers and practitioners in Computer Graphics. The conference will be structured along five main tracks, covering different aspects related to Computer Graphics, from Modelling to Rendering, including Animation, Interactive Environments and Social Agents In Computer Graphics.
RXY/DRXY-a postprocessing graphical system for scientific computation
International Nuclear Information System (INIS)
Jin Qijie
1990-01-01
Scientific computation requires computer graphics functions for visualization. The design objectives and functions of a postprocessing graphical system for scientific computation are described, and its implementation is briefly outlined.
Handling Emergency Management in [an] Object Oriented Modeling Environment
Tokgoz, Berna Eren; Cakir, Volkan; Gheorghe, Adrian V.
2010-01-01
It has been understood that protection of a nation from extreme disasters is a challenging task. The impacts of extreme disasters on a nation's critical infrastructures, economy and society could be devastating. A protection plan by itself would not be sufficient when a disaster strikes. Hence, there is a need for a holistic approach to establish more resilient infrastructures that can withstand extreme disasters. A resilient infrastructure can be defined as a system or facility that is able to withstand damage, but if affected, can be readily and cost-effectively restored. The key issue in establishing resilient infrastructures is to combine existing protection plans with comprehensive preparedness actions to respond, recover and restore as quickly as possible, and to minimize extreme disaster impacts. Although national organizations will respond to a disaster, extreme disasters need to be handled mostly by local emergency management departments. Since emergency management departments have to deal with complex systems, they have to have a manageable plan and efficient organizational structures to coordinate all these systems. A strong organizational structure is key to responding fast before and during disasters, and to recovering quickly after them. In this study, the entire emergency management function is viewed as an enterprise and modelled through an enterprise management approach. Managing an enterprise or a large complex system is a very challenging task, and it is critical for an enterprise to respond to challenges in a timely manner with quick decision making. This study addresses the problem of handling emergency management at the regional level in an object-oriented modelling environment developed with the TopEase software. The Emergency Operation Plan of the City of Hampton, Virginia, has been incorporated into TopEase for analysis. The methodology used in this study has been supported by a case study on critical infrastructure resiliency in Hampton Roads.
Arvo, James
1991-01-01
Graphics Gems II is a collection of articles shared by a diverse group of people that reflect ideas and approaches in graphics programming which can benefit other computer graphics programmers.This volume presents techniques for doing well-known graphics operations faster or easier. The book contains chapters devoted to topics on two-dimensional and three-dimensional geometry and algorithms, image processing, frame buffer techniques, and ray tracing techniques. The radiosity approach, matrix techniques, and numerical and programming techniques are likewise discussed.Graphics artists and comput
Energy Technology Data Exchange (ETDEWEB)
Schulze-Riegert, R.; Krosche, M.; Stekolschikov, K. [Scandpower Petroleum Technology GmbH, Hamburg (Germany); Fahimuddin, A. [Technische Univ. Braunschweig (Germany)
2007-09-13
History matching in reservoir simulation, well location and production optimization, etc., are generally multi-objective optimization problems. The problem statement of history matching for a realistic field case includes many field and well measurements in time and type, e.g. pressure measurements, fluid rates, events such as water and gas break-throughs, etc. Uncertainty parameters modified as part of the history matching process have varying impact on the improvement of the match criteria. Competing match criteria often reduce the likelihood of finding an acceptable history match. It is an engineering challenge in manual history matching processes to identify competing objectives and to implement the required changes in the simulation model. In production optimization or scenario optimization, the focus on one key optimization criterion such as NPV limits the identification of alternatives and potential opportunities, since multiple objectives are summarized in a predefined global objective formulation. Previous works primarily focus on a specific optimization method; few actually concentrate on the objective formulation, and multi-objective optimization schemes have not yet been applied to reservoir simulation. This paper presents a multi-objective optimization approach applicable to reservoir simulation. It addresses the problem of multi-objective criteria in a history matching study and presents analysis techniques for identifying competing match criteria. A Pareto optimizer is discussed, and the implementation of this multi-objective optimization scheme is applied to a case study. Results are compared to a single-objective optimization method. (orig.)
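The Pareto notion underlying such a scheme is easy to state: one candidate model dominates another if it is no worse in every match criterion and strictly better in at least one, and the optimizer keeps the non-dominated set instead of collapsing the criteria into a single weighted objective. A minimal illustration (the mismatch numbers are invented):

```python
def dominates(a, b):
    """a dominates b if no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the candidates not dominated by any other candidate."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Mismatch vectors (pressure match, water-cut match) for four candidate
# reservoir models; lower is better in both criteria.
candidates = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
front = pareto_front(candidates)
# (3, 3) is dominated by (2, 2); the other three trade off the two criteria.
assert front == [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```

Presenting the whole front, rather than one weighted-sum optimum, is what exposes the competing match criteria the abstract refers to.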
Modeling of equilibrium hollow objects stabilized by electrostatics.
Mani, Ethayaraja; Groenewold, Jan; Kegel, Willem K
2011-05-18
The equilibrium size of two largely different kinds of hollow objects behaves qualitatively differently with respect to certain experimental conditions. Yet, we show that they can be described within the same theoretical framework. The objects we consider are 'minivesicles' of ionic and nonionic surfactant mixtures, and shells of Keplerate-type polyoxometalates. The finite size of the objects in both systems is manifested by electrostatic interactions. We emphasize the importance of constant-charge and constant-potential boundary conditions. Taking these conditions into account indeed leads to the experimentally observed qualitatively different behavior of the equilibrium size of the objects.
Modeling of equilibrium hollow objects stabilized by electrostatics
Energy Technology Data Exchange (ETDEWEB)
Mani, Ethayaraja; Groenewold, Jan; Kegel, Willem K, E-mail: w.k.kegel@uu.nl [Van' t Hoff Laboratory for Physical and Colloid Chemistry, Debye Institute, Utrecht University, Padualaan 8, 3584 CH Utrecht (Netherlands)
2011-05-18
The equilibrium size of two largely different kinds of hollow objects behaves qualitatively differently with respect to certain experimental conditions. Yet, we show that they can be described within the same theoretical framework. The objects we consider are 'minivesicles' of ionic and nonionic surfactant mixtures, and shells of Keplerate-type polyoxometalates. The finite size of the objects in both systems is manifested by electrostatic interactions. We emphasize the importance of constant-charge and constant-potential boundary conditions. Taking these conditions into account indeed leads to the experimentally observed qualitatively different behavior of the equilibrium size of the objects.
A Multi-objective Model for Transmission Planning Under Uncertainties
DEFF Research Database (Denmark)
Zhang, Chunyu; Wang, Qi; Ding, Yi
2014-01-01
trading and regulating at the transmission level. In this paper, the aggregator-caused uncertainty is analyzed first, considering the DERs' correlation. For the transmission planning, a scenario-based multi-objective transmission planning (MOTP) framework is proposed to simultaneously optimize two objectives, i.e. the cost of power purchase and network expansion, and the revenue of power delivery. A two-phase multi-objective PSO (MOPSO) algorithm is employed as the solver. The feasibility of the proposed multi-objective planning approach has been verified by the 77-bus system linked with a 38-bus distribution...
Guzmán, Silvina; Carniel, Roberto; Caffe, Pablo J.
2014-03-01
AFC3D is an original graphical free software tool developed in the R scientific environment and dedicated to the modelling of assimilation and fractional crystallization without (AFC) and with (AFC-r) recharge, facilitating the search for solutions of the equations originally proposed by DePaolo (1981, 1985) and first solved graphically by Aitcheson and Forrest (1994). The software allows a graphical 3D representation of ρ (mass of assimilated crust/mass of original magma) as a function of r (rate of crustal assimilation/rate of fractional crystallization) and β (rate of magma replenishment/rate of assimilation) for each element/isotope, finding a coherent set of (r, β, ρ) parameter triples in a mostly automated way. Mathematically optimized solutions are derived, which can and should then be discussed and evaluated from a geological and petrological point of view by the end user. This contribution presents the software and a series of models published in the literature, which are discussed as case studies of application and whose solutions are in some cases improved based on the results provided by the software.
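For orientation, the plain-AFC (no recharge) balance that such tools solve graphically is DePaolo's (1981) equation for the trace-element concentration in the contaminated magma; we quote it from memory as a sketch, so it should be checked against the original paper before use:

```latex
\frac{C_m}{C_m^0} \;=\; F^{-z} \;+\; \frac{r}{r-1}\,\frac{C_a}{z\,C_m^0}\left(1 - F^{-z}\right),
\qquad
z \;=\; \frac{r + D - 1}{r - 1},
```

where $C_m$ and $C_m^0$ are the current and initial magma concentrations, $C_a$ the assimilant concentration, $F$ the fraction of magma remaining, $D$ the bulk partition coefficient, and $r$ the ratio of assimilation rate to fractional-crystallization rate (the same $r$ whose trade-off against $\beta$ and $\rho$ AFC3D maps in 3D).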
Modelling the Semantics of Object Prepositions Inside Spanish NPs
DEFF Research Database (Denmark)
Høeg Müller, Henrik
2016-01-01
This paper presents a cognitive semantic framework for understanding the principles that govern the use of object prepositions inside NPs with event nominalizations (Grimshaw 1990) as heads.
DDP-516 Computer Graphics System Capabilities
1972-06-01
This report describes the capabilities of the DDP-516 Computer Graphics System. One objective of this report is to acquaint DOT management and project planners with the system's current capabilities, applications, hardware and software. The Appendix i...
Keegan, Ronan M; McNicholas, Stuart J; Thomas, Jens M H; Simpkin, Adam J; Simkovic, Felix; Uski, Ville; Ballard, Charles C; Winn, Martyn D; Wilson, Keith S; Rigden, Daniel J
2018-03-01
Increasing sophistication in molecular-replacement (MR) software and the rapid expansion of the PDB in recent years have allowed the technique to become the dominant method for determining the phases of a target structure in macromolecular X-ray crystallography. In addition, improvements in bioinformatic techniques for finding suitable homologous structures for use as MR search models, combined with developments in refinement and model-building techniques, have pushed the applicability of MR to lower sequence identities and made weak MR solutions more amenable to refinement and improvement. MrBUMP is a CCP4 pipeline which automates all stages of the MR procedure. Its scope covers everything from the sourcing and preparation of suitable search models right through to rebuilding of the positioned search model. Recent improvements to the pipeline include the adoption of more sensitive bioinformatic tools for sourcing search models, enhanced model-preparation techniques including better ensembling of homologues, and the use of phase improvement and model building on the resulting solution. The pipeline has also been deployed as an online service through CCP4 online, which allows its users to exploit large bioinformatic databases and coarse-grained parallelism to speed up the determination of a possible solution. Finally, the molecular-graphics application CCP4mg has been combined with MrBUMP to provide an interactive visual aid to the user during the process of selecting and manipulating search models for use in MR. Here, these developments in MrBUMP are described with a case study to explore how some of the enhancements to the pipeline and to CCP4mg can help to solve a difficult case.
A Multi-objective Model for Transmission Planning Under Uncertainties
DEFF Research Database (Denmark)
Zhang, Chunyu; Wang, Qi; Ding, Yi
2014-01-01
The significant growth of distributed energy resources (DERs) associated with smart grid technologies has prompted excessive uncertainties in the transmission system. The most representative is the novel notion of the commercial aggregator, which has lighted a way for DERs to participate in power trading and regulating at the transmission level. In this paper, the aggregator-caused uncertainty is first analyzed considering the correlation of the DERs. For transmission planning, a scenario-based multi-objective transmission planning (MOTP) framework is proposed to simultaneously optimize two objectives, i.e. the cost of power purchase and network expansion, and the revenue of power delivery. A two-phase multi-objective PSO (MOPSO) algorithm is employed as the solver. The feasibility of the proposed multi-objective planning approach has been verified on the 77-bus system linked with 38-bus distribution…
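A two-objective framework like the MOTP above ultimately rests on comparing candidate plans by Pareto dominance, which is also the core of a MOPSO solver's archive update. A minimal sketch of non-dominance filtering, with hypothetical plan data (revenue is negated so that both objectives are minimized):

```python
def pareto_front(solutions):
    """Return the non-dominated subset when all objectives are minimized.
    solutions: list of (label, (obj1, obj2, ...))."""
    front = []
    for label, objs in solutions:
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(objs, other))
            and any(o2 < o1 for o1, o2 in zip(objs, other))
            for _, other in solutions
        )
        if not dominated:
            front.append(label)
    return front

# Hypothetical expansion plans: (purchase + expansion cost, -delivery revenue).
plans = [("A", (10.0, -7.0)), ("B", (12.0, -9.0)), ("C", (11.0, -6.0))]
```

Here plan C is dominated by plan A (higher cost and lower revenue), so only A and B survive as incomparable trade-offs.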
Intelligent Computer Graphics 2012
Miaoulis, Georgios
2013-01-01
In Computer Graphics, the use of intelligent techniques started more recently than in other research areas. Over the last two decades, however, the use of intelligent Computer Graphics techniques has grown year after year, and more and more interesting techniques are presented in this area. The purpose of this volume is to present current work of the Intelligent Computer Graphics community, a community growing year after year. This volume is a kind of continuation of the previously published Springer volumes “Artificial Intelligence Techniques for Computer Graphics” (2008), “Intelligent Computer Graphics 2009” (2009), “Intelligent Computer Graphics 2010” (2010) and “Intelligent Computer Graphics 2011” (2011). Usually, this kind of volume contains, every year, selected extended papers from the corresponding 3IA Conference of the year. However, the current volume is made from directly reviewed and selected papers, submitted for publication in the volume “Intelligent Computer Gr...
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear-time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear-time comparison-based algorithm remains an open problem.
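For the special case of an acyclic game graph, equilibrium values of such a game can be computed by plain backward induction; the cited almost-linear-time algorithm is needed for the general case with cycles. A small illustrative sketch with a toy game (node names and payoffs are invented for the example):

```python
def game_values(graph, owner, payoff):
    """Backward-induction equilibrium values for an acyclic two-player
    game without chance moves: at a 'max' node the maximizer moves,
    at a 'min' node the minimizer; sinks carry real payoffs."""
    memo = {}
    def value(v):
        if v not in memo:
            if not graph.get(v):              # sink: payoff is fixed
                memo[v] = payoff[v]
            elif owner[v] == "max":
                memo[v] = max(value(c) for c in graph[v])
            else:
                memo[v] = min(value(c) for c in graph[v])
        return memo[v]
    return {v: value(v) for v in set(graph) | set(payoff)}

graph = {"s": ["a", "b"], "a": ["t1", "t2"], "b": ["t3"]}
owner = {"s": "max", "a": "min", "b": "min"}
payoff = {"t1": 3.0, "t2": -1.0, "t3": 2.0}
values = game_values(graph, owner, payoff)
```

At "s" the maximizer prefers "b" (value 2.0) over "a", where the minimizer would force -1.0.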
Energy Technology Data Exchange (ETDEWEB)
Hagopian, Sharon; Youssef, Saul [Florida State Univ., Tallahassee, FL (United States); Shupe, Michael [Arizona Univ., Tucson, AZ (United States); Graf, Norman [Brookhaven National Lab., Upton, NY (United States); Oshima, Nobuaki [Fermi National Accelerator Lab., Batavia, IL (United States); Adams, David L. [Rice Univ., Houston, TX (United States)
1996-07-01
The D0 experiment at the Fermilab Tevatron Collider is preparing for major revisions of all of its software: data structures, databases, user interfaces, and graphics. We report here on the progress of the D0 Graphics Working Group, which has considered the requirements of D0 for interactive event displays and their role in the process of detector debugging and physics analysis. This report includes studies done by the group and the evolving view of the future of D0 graphics. (author)
International Nuclear Information System (INIS)
Allensworth, J.A.
1984-04-01
EASI (Estimate of Adversary Sequence Interruption) is an analytical technique for measuring the effectiveness of physical protection systems. EASI Graphics is a computer graphics extension of EASI which provides a capability for performing sensitivity and trade-off analyses of the parameters of a physical protection system. This document reports on the implementation of the Version II of EASI Graphics and illustrates its application with some examples. 5 references, 15 figures, 6 tables
Geel, C.R.; Donselaar, M.E.
2007-01-01
Object-based stochastic modelling techniques are routinely employed to generate multiple realisations of the spatial distribution of sediment properties in settings where data density is insufficient to construct a unique deterministic facies architecture model. The challenge is to limit the wide range
The computer graphics interface
Steinbrugge Chauveau, Karla; Niles Reed, Theodore; Shepherd, B
2014-01-01
The Computer Graphics Interface provides a concise discussion of computer graphics interface (CGI) standards. The title is comprised of seven chapters that cover the concepts of the CGI standard. Figures and examples are also included. The first chapter provides a general overview of CGI; this chapter covers graphics standards, functional specifications, and syntactic interfaces. Next, the book discusses the basic concepts of CGI, such as inquiry, profiles, and registration. The third chapter covers the CGI concepts and functions, while the fourth chapter deals with the concept of graphic obje
The computer graphics metafile
Henderson, LR; Shepherd, B; Arnold, D B
1990-01-01
The Computer Graphics Metafile deals with the Computer Graphics Metafile (CGM) standard and covers topics ranging from the structure and contents of a metafile to CGM functionality, metafile elements, and real-world applications of CGM. Binary Encoding, Character Encoding, application profiles, and implementations are also discussed. This book is comprised of 18 chapters divided into five sections and begins with an overview of the CGM standard and how it can meet some of the requirements for storage of graphical data within a graphics system or application environment. The reader is then intr
Modeling Spatial Data within Object Relational-Databases
Directory of Open Access Journals (Sweden)
Iuliana BOTHA
2011-03-01
Full Text Available Spatial data can refer to elements that help place a certain object in a certain area. These elements are latitude, longitude, points, geometric figures represented by points, etc. However, when translating these elements into data that can be stored in a computer, it all comes down to numbers. The interesting part that requires attention is how to store them in order to support fast and varied spatial queries. This is where the DBMS (Data Base Management System) that contains the database comes in. In this paper, we analyzed and compared two object-relational DBMSs that work with spatial data: Oracle and PostgreSQL.
Modelling object typicality in description logics - [Workshop on Description Logics
CSIR Research Space (South Africa)
Britz, K
2009-07-01
Full Text Available The authors present a semantic model of typicality of concept members in description logics that accords well with a binary, globalist cognitive model of class membership and typicality. The authors define a general preferential semantic framework...
Control systems using mathematical models of technological objects ...
African Journals Online (AJOL)
Among the approaches used in the development of models of chemical-processing facilities are heat flow calorimetry for the creation of kinetic models of processes in multiphase systems, and the use of simplified hydrodynamics models to describe mass transport processes. The demonstration of the successful application of this approach for ...
The CTQ flowdown as a conceptual model of project objectives
de Koning, H.; de Mast, J.
2007-01-01
The purpose of this article is to describe and clarify a tool that is at the core of the definition phase of most quality improvement projects. This tool is called the critical to quality (CTQ) flowdown. It relates high-level strategic focal points to project objectives. In their turn project
Using Graphic Organizers in Intercultural Education
Ciascai, Liliana
2009-01-01
Graphic organizers are instruments of representation, illustration and modeling of information. In educational practice they are used for the building and systematization of knowledge. Graphic organizers are instruments that address mostly the visual learning style, but their use is beneficial to all learners. In this paper we illustrate the use of…
2D Modeling and Classification of Extended Objects in a Network of HRR Radars
Fasoula, A.
2011-01-01
In this thesis, the modeling of extended objects with low-dimensional representations of their 2D geometry is addressed. The ultimate objective is the classification of the objects using libraries of such compact 2D object models that are much smaller than in the state-of-the-art classification
Modeling 3D Objects for Navigation Purposes Using Laser Scanning
Directory of Open Access Journals (Sweden)
Cezary Specht
2016-07-01
Full Text Available The paper discusses the creation of 3D models and their applications in navigation. It contains a review of available methods and geometric data sources, focusing mostly on terrestrial laser scanning. It presents a detailed description, from field survey to numerical elaboration, of how to construct an accurate model of a typical few-storey building as a hypothetical reference in complex building navigation. Finally, the paper presents fields where 3D models are being used and their potential new applications.
Automation of program model developing for complex structure control objects
International Nuclear Information System (INIS)
Ivanov, A.P.; Sizova, T.B.; Mikhejkina, N.D.; Sankovskij, G.A.; Tyufyagin, A.N.
1991-01-01
A brief description is given of software for the automated development of models: an integrating modular programming system, a program module generator and a program module library providing thermal-hydraulic calculation of process dynamics in power unit equipment components and on-line control system operation simulation. Technical recommendations for model development are based on experience in the creation of concrete models of NPP power units. 8 refs, 1 tab., 4 figs
Animated GIFs as vernacular graphic design
DEFF Research Database (Denmark)
Gürsimsek, Ödül Akyapi
2016-01-01
Online television audiences create a variety of digital content on the internet. Fans of television production design produce and share such content to express themselves and engage with the objects of their interest. These digital expressions, which exist in the form of graphics, text, videos, and often a mix of some of these modes, seem to enable participatory conversations by the audience communities that continue over a period of time. One example of such multimodal digital content is the graphic format called the animated GIF (graphics interchange format). This article focuses on content as design, both in the sense that multimodal meaning making is an act of design and in the sense that web-based graphics are designed graphics that are created through a design process. The author specifically focuses on the transmedia television production entitled Lost and analyzes the design of animated GIFs...
Lindsey, Patricia F.
1994-01-01
In microgravity conditions mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurements of the maximum reach of occupants of a microgravity environment provide knowledge about maximum functional placement for tasking situations. Calculations for a full-body functional reach envelope for microgravity environments are imperative. To this end, three-dimensional computer-modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full-body functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.
Interactive Graphic Journalism
Schlichting, Laura
2016-01-01
This paper examines graphic journalism (GJ) in a transmedial context, and argues that transmedial graphic journalism (TMGJ) is an important and fruitful new form of visual storytelling that will re-invigorate the field of journalism, as it steadily tests out and plays with new media,
Mathematics for computer graphics
Vince, John
2006-01-01
Helps you understand the mathematical ideas used in computer animation, virtual reality, CAD, and other areas of computer graphics. This work also helps you to rediscover the mathematical techniques required to solve problems and design computer programs for computer graphic applications
Prosise, Jeff
This document presents the principles behind modern computer graphics without straying into the arcane languages of mathematics and computer science. Illustrations accompany the clear, step-by-step explanations that describe how computers draw pictures. The 22 chapters of the book are organized into 5 sections. "Part 1: Computer Graphics in…
Validation of a multi-objective, predictive urban traffic model
Wilmink, I.R.; Haak, P. van den; Woldeab, Z.; Vreeswijk, J.
2013-01-01
This paper describes the results of the verification and validation of the ecoStrategic Model, which was developed, implemented and tested in the eCoMove project. The model uses real-time and historical traffic information to determine the current, predicted and desired state of traffic in a
Selecting personnel to work on the interactive graphics system
Energy Technology Data Exchange (ETDEWEB)
Norton, F.J.
1979-11-30
The paper establishes criteria for the selection of personnel to work on the interactive graphics system and mentions some of the human behavioral patterns that are created by the implementation of graphics systems. Some of the social and educational problems associated with the interactive graphics system are discussed. The project also provided for collecting objective data which would be useful in assessing the benefits of interactive graphics systems.
Selecting personnel to work on the interactive graphics system
International Nuclear Information System (INIS)
Norton, F.J.
1979-01-01
The paper establishes criteria for the selection of personnel to work on the interactive graphics system and mentions some of the human behavioral patterns that are created by the implementation of graphics systems. Some of the social and educational problems associated with the interactive graphics system are discussed. The project also provided for collecting objective data which would be useful in assessing the benefits of interactive graphics systems.
DEFF Research Database (Denmark)
Nielsen, Tine; Kreiner, Svend
2011-01-01
For self-assessment, self-scoring and self-interpretational purposes it is deemed prudent that subscales measuring comparable constructs are of the same item length. Consequently, in order to obtain a self-assessment version of the R-D-LSI with an equal number of items in each subscale, a systematic approach to item reduction based on results of graphical loglinear Rasch modeling (GLLRM) was designed. This approach was then used to reduce the number of items in the subscales of the R-D-LSI which had an item length of more than seven items, thereby obtaining the Danish Self-Assessment Learning Styles...
Theory and practice of Auto CAD, computer graphics
International Nuclear Information System (INIS)
Hwang, Si Won; Choe, Hong Yeong; Shin, Jae Yeon; Lee, Ryong Cheol
1990-08-01
This book describes the theory and practice of AutoCAD computer graphics: computer peripherals, digital line generation by DDA and BRM, conversion theory, databases, display, and shape modeling. It gives descriptions of the outline of a CAD system, AutoCAD, basic function practice, simple figure practice, third-angle projection drawing of a moderately complex single object, machine drawing I, practice with advanced AutoCAD functions, editing, layer set-up, and 3D display functions.
Validation of the PESTLA model: Definitions, objectives and procedure
Boekhold AE; van den Bosch H; Boesten JJTI; Leistra M; Swartjes FA; van der Linden AMA
1993-01-01
The simulation model PESTLA was developed to produce estimates of accumulation and leaching of pesticides in soil to facilitate classification of pesticides in the Dutch registration procedure. Before PESTLA can be used for quantitative assessment of expected pesticide concentrations in
OBJECT ORIENTED MODELLING, A MODELLING METHOD OF AN ECONOMIC ORGANIZATION ACTIVITY
Directory of Open Access Journals (Sweden)
TĂNĂSESCU ANA
2014-05-01
Full Text Available Nowadays, most economic organizations use different types of information systems in order to facilitate their activity. There are different methodologies, methods and techniques that can be used to design information systems. In this paper, I present the advantages of using object oriented modelling in the information system design of an economic organization. Thus, I have modelled the activity of a photo studio, using Visual Paradigm for UML as a modelling tool. For this purpose, I have identified the use cases for the analyzed system and presented the use case diagram. I have also carried out the static and dynamic modelling of the system, through the best-known UML diagrams.
Deformable object model and simulation. Application to lung cancer treatment
International Nuclear Information System (INIS)
Baudet, V.
2006-06-01
Ionising radiation treatments for cancers, such as conformal radiotherapy and hadron therapy, are set with error margins that take into account, for instance, statistics of tumour motion. We aim to reduce these margins by developing deformable models that simulate the displacements occurring in the lungs during a treatment. The model must be personalized with the geometry obtained from CT scans of the patient and parameterized with physiological measures of the patient. In this Ph.D. thesis, we decided to use a mass-spring system to model the lungs, because of the fast and physically realistic deformations it provides in animation. As a starting point, we chose the model proposed by Van Gelder for parameterizing a mass-spring system with the rheological characteristics of a homogeneous, linear elastic, isotropic material in two dimensions (2D). However, we tested this model and proved it to be incorrect. Hence we carried out a Lagrangian study in order to obtain a parametric model with rectangular (2D) or cubic (3D) elements. We also determined its robustness by testing with stretching, inflating, shearing and bending experiments, and by comparing the results with finite element methods. Thus, in this Ph.D. thesis, we explain how this parametric model is obtained, how it is linked to physiological data, and how accurate it is. (author)
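The mass-spring approach described in the thesis can be illustrated with a generic 2D integrator; the thesis's actual contribution, deriving the stiffnesses from elastic moduli, is not reproduced here. The spring constant, damping and time step below are purely illustrative:

```python
import math

def step(positions, velocities, springs, masses, dt, damping=2.0):
    """One explicit-Euler step of a 2D mass-spring system.
    springs: list of (i, j, rest_length, stiffness)."""
    forces = [[0.0, 0.0] for _ in positions]
    for i, j, rest, k in springs:
        dx = positions[j][0] - positions[i][0]
        dy = positions[j][1] - positions[i][1]
        length = math.hypot(dx, dy)
        f = k * (length - rest) / length   # Hooke force along the spring
        forces[i][0] += f * dx; forces[i][1] += f * dy
        forces[j][0] -= f * dx; forces[j][1] -= f * dy
    for p, v, F, m in zip(positions, velocities, forces, masses):
        v[0] += dt * (F[0] / m - damping * v[0])
        v[1] += dt * (F[1] / m - damping * v[1])
        p[0] += dt * v[0]; p[1] += dt * v[1]

# Two unit masses joined by a stretched spring relax toward the rest length.
pos = [[0.0, 0.0], [1.5, 0.0]]
vel = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(2000):
    step(pos, vel, [(0, 1, 1.0, 10.0)], [1.0, 1.0], dt=0.005)
```

With damping, the separation converges from 1.5 to the rest length 1.0; real simulators typically use more stable integrators than explicit Euler.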
Imagined Spaces: Motion Graphics in Performance Spaces
DEFF Research Database (Denmark)
Steijn, Arthur
2016-01-01
In this chapter I introduce the first steps in my work with adjoining and developing concepts relevant to the study and practical design of motion graphics in spatial experience design: performance, event and exhibition design. Based on a presentation of a practical case where motion graphics are used in performance design, the chapter portrays the work in progress on a design model for designing spatial experiences in performances through the use of motion graphics. The purpose of the model is to systematize and categorize different design elements, e.g. space, line and shape, tone, colour, movement, and rhythm, in relation to e.g. expression and atmosphere, to be considered when designing and analyzing motion graphics in performance design, one kind of spatial experience design. The analysis of the case, here a dance performance utilizing video-projected motion graphics, is done...
Jeddah Historical Building Information Modelling "JHBIM" - Object Library
Baik, A.; Alitany, A.; Boehm, J.; Robson, S.
2014-05-01
The theory of Building Information Modelling "BIM" has been used at several heritage sites worldwide for conserving, documenting, managing, and creating full engineering drawings and information. However, one of the most serious issues facing many experts in the use of Historical Building Information Modelling "HBIM" is creating the complicated architectural elements of these historical buildings. In fact, many of these outstanding architectural elements were designed and created on site to fit the exact location. The experts in Old Jeddah have faced the same issue in using the BIM method for Old Jeddah's historical buildings. The Saudi Arabian city has a long history, as it contains a large number of historic houses and buildings built since the 16th century. Furthermore, the BIM model of a historical building in Old Jeddah always takes a lot of time, due to the uniqueness of the Hijazi architectural elements and the absence of a library of such elements, which therefore have to be modelled from scratch. This paper will focus on building the Hijazi architectural element library based on laser scanner and image survey data. This solution will reduce the time to complete the HBIM model and offer an in-depth and rich digital architectural element library to be used in any heritage project in the Al-Balad district, Jeddah City.
Multi-objective optimisation for musculoskeletal modelling: application to a planar elbow model.
Dumas, Raphaël; Moissenet, Florent; Lafon, Yoann; Cheze, Laurence
2014-10-01
One of the open issues in musculoskeletal modelling remains the choice of the objective function that is used to solve the muscular redundancy problem. Some authors have recently proposed to introduce joint reaction forces in the objective function, and the question of the weights associated with musculo-tendon forces and joint reaction forces arose. This question typically deals with a multi-objective optimisation problem. The aim of this study is to illustrate, on a planar elbow model, the ensemble of optimal solutions (i.e. Pareto front) and the solution of a global objective method that represent different compromises between musculo-tendon forces, joint compression force, and joint shear force. The solutions of the global objective method, based either on the minimisation of the sum of the squared musculo-tendon forces alone or on the minimisation of the squared joint compression force and shear force together, are in the same range. Minimising either the squared joint compression force or shear force alone leads to extreme force values. The exploration of the compromises between these forces illustrates the existence of major interactions between the muscular and joint structures. Indeed, the joint reaction forces relate to the projection of the sum of the musculo-tendon forces. An illustration of these interactions, due to the projection relation, is that the Pareto front is not a large surface, like in a typical three-objective optimisation, but almost a curve. These interactions, and the possibility to take them into account by a multi-objective optimisation, seem essential for the application of musculoskeletal modelling to joint pathologies. © IMechE 2014.
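The single-objective baseline that the study generalizes, minimizing the sum of squared musculo-tendon forces under a joint-torque constraint, has a closed-form solution for a single joint via Lagrange multipliers. A sketch with hypothetical moment arms (a real model must additionally keep muscle forces non-negative, since muscles only pull):

```python
def min_norm_muscle_forces(torque, moment_arms):
    """Minimize sum(f_i^2) subject to sum(r_i * f_i) = torque.
    Lagrange multipliers give the closed form
    f_i = torque * r_i / sum(r_j^2)."""
    s = sum(r * r for r in moment_arms)
    return [torque * r / s for r in moment_arms]

# Hypothetical planar elbow: 10 N.m flexion torque shared by two
# flexors with moment arms of 2 cm and 4 cm.
moment_arms = [0.02, 0.04]
forces = min_norm_muscle_forces(10.0, moment_arms)
```

The joint reaction forces the paper adds as objectives are then computed by projecting the sum of these musculo-tendon forces onto the joint axes, which is why the Pareto front degenerates to nearly a curve.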
Modeling the drift of objects floating in the sea
Nof, D.; Girihagama, L. N.
2016-02-01
The question of how buoyant objects drift and where they are ultimately washed ashore must have troubled humans since the beginning of civilization. A good summary of the observational aspect of the problem is given in Ebbesmeyer (2015) and the references therein. It includes the journey of shoes originally housed in containers that were accidentally swept from the decks of cargo ships into the ocean, as well as the famous World War Two case of a corpse released by the British counter-intelligence agency near the Spanish coast. Of practical modern importance is the question of how the flaperon belonging to the Malaysian airplane lost last year (supposedly over the Indian Ocean near Western Australia) travelled almost across the entire Indian Ocean in just 15 months, corresponding to the very high speed of six centimeters per second, about three times the speed of most ocean currents away from boundaries. Traditionally, it has been thought that three processes affect the drift: ocean currents, surface waves and wind. Of these, the last two are usually regarded as small. The wave effect (Stokes drift) is nonlinear and is probably indeed very small in most cases because the amplitudes are small. It is not so easy to estimate the wind effect, and we will argue here that it is not necessarily small, though it is obviously close to zero in some cases. The wind speed is typically two orders of magnitude faster than the water (meters per second compared to centimeters per second), and the stress is proportional to the square of the wind speed, implying that the wind is important even if only a very small portion of the object protrudes above sea level. It is argued that wind, rather than ocean currents, dominated the drift of both the WWII corpse and the modern-day flaperon.
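The claim that even a small emerged area makes wind important can be checked with a simple steady drag balance between air drag on the emerged part and water drag on the immersed part. The areas and drag coefficients below are illustrative assumptions, not values from the paper:

```python
import math

def leeway_speed(wind_speed, area_air, area_water,
                 rho_air=1.2, rho_water=1025.0, cd_air=1.0, cd_water=1.0):
    """Steady drift speed v of a floating object in still water, from
    rho_a * Cd_a * A_a * (U - v)^2 = rho_w * Cd_w * A_w * v^2,
    i.e. v = U / (1 + sqrt(rho_w Cd_w A_w / (rho_a Cd_a A_a)))."""
    ratio = math.sqrt((rho_water * cd_water * area_water)
                      / (rho_air * cd_air * area_air))
    return wind_speed / (1.0 + ratio)

# With only 2% of the drag area exposed to a 10 m/s wind, the
# wind-driven drift already reaches several centimetres per second.
v = leeway_speed(10.0, area_air=0.02, area_water=1.0)
```

The result is of the same order as the six centimeters per second inferred for the flaperon, consistent with the paper's argument that windage need not be negligible.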
Flying, Feathery and Beaked Objects: Children's Mental Models about Birds
Ahi, Berat
2016-01-01
The purpose of this research is to describe preschool students' mental models of birds by analyzing their drawings. This is a hermeneutical phenomenology study based on social constructivist philosophy. The typical case sampling method was used to form the working group of this research, consisting of 325 children who are in…
A tri-objective, dynamic weapon assignment model for surface ...
African Journals Online (AJOL)
2015-05-11
May 11, 2015 ... of available surface-based weapon systems to engage aerial threats in an attempt to protect defended surface ...... time stages to include in the fixed mean calculation in (2) be fixed to the minimum length of a FW. ... to solve the model in 139 seconds on an Intel Core i7-4770 processor with 8GB of random.
Optimization Modelling for Multi-Objective Supply Chains, A Case ...
African Journals Online (AJOL)
In this study a mathematical model was developed for minimizing the distribution cost in a multi-product supply chain system. The oil and gas sector was studied to understand the underlying supply chain system. Attempt was made to identify system parameters, variables, limitations, criteria so as to be able to define the ...
Integration of a Three-Dimensional Process-Based Hydrological Model into the Object Modeling System
Directory of Open Access Journals (Sweden)
Giuseppe Formetta
2016-01-01
Full Text Available The integration of a spatial process model into an environmental modeling framework can enhance the model’s capabilities. This paper describes a general methodology for integrating environmental models into the Object Modeling System (OMS) regardless of the model’s complexity, the programming language, and the operating system used. We present the integration of the GEOtop model into the OMS version 3.0 and illustrate its application in a small watershed. OMS is an environmental modeling framework that facilitates model development, calibration, evaluation, and maintenance. It provides innovative techniques in software design such as multithreading, implicit parallelism, calibration and sensitivity analysis algorithms, and cloud-services. GEOtop is a physically based, spatially distributed rainfall-runoff model that performs three-dimensional finite volume calculations of water and energy budgets. Executing GEOtop as an OMS model component allows it to: (1) interact directly with the open-source geographical information system (GIS) uDig-JGrass to access geo-processing, visualization, and other modeling components; and (2) use OMS components for automatic calibration, sensitivity analysis, or meteorological data interpolation. A case study of the model in a semi-arid agricultural catchment is presented for illustration and proof-of-concept. Simulated soil water content and soil temperature results are compared with measured data, and model performance is evaluated using goodness-of-fit indices. This study serves as a template for future integration of process models into OMS.
Model-based recognition of 3-D objects by geometric hashing technique
International Nuclear Information System (INIS)
Severcan, M.; Uzunalioglu, H.
1992-09-01
A model-based object recognition system is developed for recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using rotation transform. For modelling and recognition process, geometric hashing method is utilized. Each object is modelled using 2-D views taken from the viewpoints on the viewing sphere. A hidden line elimination algorithm is used to find these views from the wire frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs
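The geometric hashing stage can be sketched for the 2D case: every ordered point pair of a model defines a basis, the remaining points are stored in a hash table under their quantized basis-invariant coordinates, and recognition votes for the (model, basis) pairs consistent with the scene. A compact sketch with two toy "models" (the paper works with line features extracted from 2D views of polyhedra; bare points are used here for brevity):

```python
from collections import defaultdict

def basis_coords(p, b0, b1):
    """Coordinates of p in the frame where b0 is the origin and b1 maps
    to (1, 0): invariant under translation, rotation and scaling."""
    ux, uy = b1[0] - b0[0], b1[1] - b0[1]
    d2 = ux * ux + uy * uy
    vx, vy = p[0] - b0[0], p[1] - b0[1]
    return ((vx * ux + vy * uy) / d2, (vy * ux - vx * uy) / d2)

def build_table(models, q=0.05):
    """Hash every model point under every ordered basis pair."""
    table = defaultdict(list)
    for name, pts in models.items():
        for i, b0 in enumerate(pts):
            for j, b1 in enumerate(pts):
                if i != j:
                    for k, p in enumerate(pts):
                        if k != i and k != j:
                            x, y = basis_coords(p, b0, b1)
                            table[round(x / q), round(y / q)].append((name, (i, j)))
    return table

def recognize(scene_pts, table, q=0.05):
    """Vote for the (model, basis) pairs consistent with the scene."""
    votes = defaultdict(int)
    for i, b0 in enumerate(scene_pts):
        for j, b1 in enumerate(scene_pts):
            if i != j:
                for k, p in enumerate(scene_pts):
                    if k != i and k != j:
                        x, y = basis_coords(p, b0, b1)
                        for entry in table.get((round(x / q), round(y / q)), []):
                            votes[entry] += 1
    return max(votes, key=votes.get)[0] if votes else None

models = {
    "square": [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)],
    "triangle": [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9)],
}
table = build_table(models)
scene = [(5.0, 5.0), (7.0, 5.0), (7.0, 7.0), (5.0, 7.0)]  # scaled, shifted square
```

Because the stored coordinates are similarity-invariant, the translated and scaled square still votes for the "square" model.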
Applying CIPP Model for Learning-Object Management
Morgado, Erla M. Morales; Peñalvo, Francisco J. García; Martín, Carlos Muñoz; Gonzalez, Miguel Ángel Conde
Although the knowledge management process needs to receive some evaluation in order to determine its proper functioning, there is no clear definition of the stages at which LOs need to be evaluated or of the specific metrics to continuously promote their quality. This paper presents a proposal for LO evaluation during their management in e-learning systems. To achieve this, we suggest specific steps for LO design, implementation and evaluation within the four stages proposed by the CIPP model (Context, Input, Process, Product).
Waste collection multi objective model with real time traceability data.
Faccio, Maurizio; Persona, Alessandro; Zanin, Giorgia
2011-12-01
Waste collection is a highly visible municipal service that involves large expenditures and difficult operational problems, and it is expensive to operate in terms of investment costs (i.e. the vehicle fleet), operational costs (i.e. fuel, maintenance) and environmental costs (i.e. emissions, noise and traffic congestion). Modern traceability devices, like volumetric sensors, RFID (Radio Frequency Identification) systems, GPRS (General Packet Radio Service) and GPS (Global Positioning System) technology, make it possible to obtain data in real time, which is fundamental to implementing an efficient and innovative waste collection routing model. The basic idea is that knowing the real-time data of each vehicle and the real-time replenishment level at each bin makes it possible to decide, as a function of the waste generation pattern, which bins should be emptied and which should not, optimizing different aspects like the total distance covered, the necessary number of vehicles and the environmental impact. This paper describes a framework for the traceability technology available for optimizing solid waste collection, and introduces an innovative vehicle routing model integrated with real-time traceability data, with a first application in an Italian city of about 100,000 inhabitants. The model is tested and validated using simulation, and an economic feasibility study is reported at the end of the paper. Copyright © 2011 Elsevier Ltd. All rights reserved.
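The basic decision described above — empty only the bins whose real-time fill level warrants a visit — can be sketched in a toy form. This is not the paper's routing model; the fill threshold, the nearest-neighbour ordering, and all names and numbers are hypothetical:

```python
import math

def plan_route(depot, bins, threshold=0.7):
    """Select bins whose real-time fill fraction exceeds `threshold` and
    order them with a nearest-neighbour heuristic starting from the depot.
    `bins` maps bin id -> ((x, y), fill_fraction)."""
    due = {b: pos for b, (pos, fill) in bins.items() if fill >= threshold}
    route, here = [], depot
    while due:
        # Greedily visit the closest bin still due for emptying.
        nxt = min(due, key=lambda b: math.dist(here, due[b]))
        route.append(nxt)
        here = due.pop(nxt)
    return route
```

A real model would replace the greedy ordering with a vehicle routing solver and add capacity and time-window constraints, but the sketch shows how real-time fill data prunes the problem before routing.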
Pratt, Nathan S; Ellison, Brenna D; Benjamin, Aaron S; Nakamura, Manabu T
2016-01-01
Consumers have difficulty using nutrition information. We hypothesized that graphically delivering information on select nutrients relative to a target would allow individuals to process information in time-constrained settings more effectively than numerical information. The objectives of the study were to determine the efficacy of the graphical method in (1) improving memory of nutrient information and (2) improving consumer purchasing behavior in a restaurant. Values of fiber and protein per calorie were plotted 2-dimensionally alongside a target box. First, a randomized cued-recall experiment was conducted (n=63). Recall accuracy of nutrition information improved by up to 43% when shown graphically instead of numerically. Second, the impact of graphical nutrition signposting on diner choices was tested in a cafeteria. Saturated fat and sodium information was also presented using color coding. The nutrient content of meals (n=362) was compared between 3 signposting phases: graphical, nutrition facts panels (NFP), or no nutrition label. Graphical signposting improved the nutrient content of purchases in the intended direction, whereas NFP had no effect compared with the baseline. Calories ordered from total meals, entrées, and sides were significantly lower during graphical signposting than during the no-label and NFP periods. For total meals and entrées, protein per calorie purchased was significantly higher and saturated fat significantly lower during graphical signposting than during the other phases. Graphical signposting remained a predictor of calories and protein per calorie purchased in regression modeling. These findings demonstrate that graphically presenting nutrition information makes that information more available for decision making and influences behavior change in a realistic setting. Copyright © 2016 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Fonseca, Telma Cristina Ferreira
2009-01-01
Intensity-Modulated Radiation Therapy (IMRT) is an advanced treatment technique used worldwide in oncology. In this master's work, a software package for simulating the IMRT protocol, namely SOFT-RT, was developed within the research group 'Nucleo de Radiacoes Ionizantes' (NRI) at UFMG. The computational system SOFT-RT allows simulating the absorbed dose of the radiotherapy treatment through a three-dimensional voxel model of the patient. The SISCODES code, from the NRI research group, helps in producing the voxel model of the region of interest from a set of digitalized CT or MRI images. SOFT-RT also allows rotation and translation of the model about the coordinate-system axes for better visualization of the model and the beam. SOFT-RT collects and exports the necessary parameters to the MCNP code, which carries out the nuclear radiation transport towards the tumor and adjacent healthy tissues for each orientation and position of the planned beam. Through three-dimensional visualization of the voxel model of a patient, it is possible to focus on the tumoral region while preserving the healthy tissues around it. It takes into account where exactly the radiation beam passes, which tissues are affected and how much dose is deposited in each tissue. The Out-module of SOFT-RT imports the results and expresses the dose response by superimposing dose and voxel model in gray scale in a three-dimensional graphic representation. The present master's thesis presents the new computational system for radiotherapy treatment, the SOFT-RT code, which has been developed using the robust and multi-platform C++ programming language with the OpenGL graphics packages. The Linux operating system was adopted with the goal of running it on an open-source, freely accessible platform. Preliminary simulation results for a cerebral tumor case are reported as well as some dosimetric evaluations. (author)
Cooperation Models, Motivation and Objectives behind Farm–School Collaboration
DEFF Research Database (Denmark)
Dyg, Pernille Malberg; Mikkelsen, Bent Egberg
2016-01-01
people and their ability to understand the food system. Thus, efforts are made to promote food literacy through strengthening of farm–school links. The case-study research from Denmark investigates existing cooperation arrangements in farm–school collaboration and the underlying motivation of the farmers...... propose more generic collaboration models of farm–school collaboration to characterize the field: from short-term, informal cooperation involving just a farmer and a teacher to longer-term and closer collaboration involving several teachers, farms, schools or other stakeholders from a foodscapes approach...
The Atavistic Model of Cancer: Evidence, Objections, Therapeutic Value
Lineweaver, Charles
2014-03-01
As cancer progresses tumor cells dedifferentiate. In the atavistic model this dedifferentiation is interpreted as a reversion to phylogenetically earlier capabilities (Davies & Lineweaver 2011). Since there is an identifiable order to the evolution of capabilities, the more recently evolved capabilities are more likely to be compromised first during cancer progression. A loss of capabilities based on the phylogenetic order of evolution suggests a therapeutic strategy for targeting cancer - design challenges that can only be met by the recently evolved capabilities still intact in normal cells, but lost in cancer cells. Such a target-the-weakness therapeutic strategy contrasts with most current therapies that target the main strength of cancer: cell proliferation. Here, we describe several examples of this target-the-weakness strategy. Our most detailed example involves the immune system. As cancer progresses, the atavistic model suggests that cancer cells lose contact with the more recently evolved adaptive immune system of the host (the basis of vaccination). The absence of adaptive immunity in immunosuppressed tumor environments is an irreversible weakness of cancer that can be exploited by creating a challenge that only the presence of adaptive immunity can meet. Thus, we propose the post-vaccination inoculation of disease at dosages that the recently evolved (and vaccination-primed) adaptive immune system will be able to destroy in normal cells, but not in the immunosuppressed microenvironment of tumor cells. Co-author: Paul Davies (Arizona State University)
International Nuclear Information System (INIS)
Balashov, V.K.
1991-01-01
The structure of the software for computer graphics on the VAX at JINR is described. It consists of the graphical packages GKS and WAND, and a set of graphics packages for High Energy Physics applications designed at CERN. 17 refs.; 1 tab
An object model for genome information at all levels of resolution
Energy Technology Data Exchange (ETDEWEB)
Honda, S.; Parrott, N.W.; Smith, R.; Lawrence, C.
1993-12-31
An object model for genome data at all levels of resolution is described. The model was derived by considering the requirements for representing genome related objects in three application domains: genome maps, large-scale DNA sequencing, and exploring functional information in gene and protein sequences. The methodology used for the object-oriented analysis is also described.
AN OBJECT ORIENTED APPROACH TOWARDS THE SPECIFICATION OF SIMULATION MODELS
Directory of Open Access Journals (Sweden)
Antonie Van Rensburg
2012-01-01
Full Text Available
ENGLISH ABSTRACT: To manage problems is to try to cope with a flux of interacting events and ideas which unrolls through time, with the manager trying to improve situations seen as problematical, or at least as less than perfect. The ability to manage or solve these problems depends on the skills of the problem solver in analysing problems. This article introduces and discusses a proposed methodology for analysing real-world problems in order to construct valid simulation models.
AFRIKAANSE OPSOMMING: Managers try to manage or improve problem situations by understanding and handling a flood of dynamic, interacting events. The success of managing or solving these problems depends on the skill of the problem solver in analysing them. The article discusses a proposed approach to the analysis of problems in order to construct simulation models from them.
Graphics Processor Units (GPUs)
Wyrwas, Edward J.
2017-01-01
This presentation will include information about Graphics Processor Unit (GPU) technology, NASA Electronic Parts and Packaging (NEPP) tasks, the test setup, test parameter considerations, lessons learned, collaborations, a roadmap, NEPP partners, results to date, and future plans.
K.C. , Santosh; Wendling , Laurent
2015-01-01
International audience; The chapter focuses on one of the key issues in document image processing i.e., graphical symbol recognition. Graphical symbol recognition is a sub-field of a larger research domain: pattern recognition. The chapter covers several approaches (i.e., statistical, structural and syntactic) and specially designed symbol recognition techniques inspired by real-world industrial problems. It, in general, contains research problems, state-of-the-art methods that convey basic s...
Using the object modeling system for hydrological model development and application
Directory of Open Access Journals (Sweden)
S. Kralisch
2005-01-01
Full Text Available State-of-the-art challenges in the sustainable management of water resources have created demand for integrated, flexible and easy-to-use hydrological models which are able to simulate the quantitative and qualitative aspects of the hydrological cycle with a sufficient degree of certainty. Existing models which have been developed to fit these needs are often constrained to specific scales or purposes and thus cannot easily be adapted to meet different challenges. As a solution for flexible and modularised model development and application, the Object Modeling System (OMS) has been developed in a joint approach by the USDA-ARS GPSRU (Fort Collins, CO, USA), the USGS (Denver, CO, USA), and the FSU (Jena, Germany). The OMS provides a modern modelling framework which allows single process components to be implemented, compiled and applied as custom-tailored model assemblies. This paper describes the basic principles of the OMS and its main components and explains in more detail how problems arising during the coupling of models or model components are solved inside the system. It highlights the integration of different spatial and temporal scales by their representation as spatial modelling entities embedded into time compound components. As an example, the implementation of the hydrological model J2000 is discussed.
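The component-coupling idea behind frameworks like OMS can be sketched minimally: components declare inputs and outputs, and the framework, not the components themselves, resolves execution order and moves data between them. The classes and numbers below are hypothetical illustrations, not the OMS API:

```python
class Component:
    """A framework component declares what it reads and writes; it never
    calls its peers directly, so components stay independently reusable."""
    inputs, outputs = (), ()
    def execute(self, state): ...

class Rainfall(Component):
    outputs = ("precip",)
    def execute(self, state):
        state["precip"] = 2.0  # mm, hypothetical forcing value

class Runoff(Component):
    inputs, outputs = ("precip",), ("discharge",)
    def execute(self, state):
        state["discharge"] = 0.4 * state["precip"]  # toy runoff coefficient

def run(components):
    # The framework schedules any component whose declared inputs are ready.
    state = {}
    pending = list(components)
    while pending:
        c = next(c for c in pending if all(i in state for i in c.inputs))
        c.execute(state)
        pending.remove(c)
    return state
```

Note that `run([Runoff(), Rainfall()])` executes `Rainfall` first even though it is listed second, because scheduling follows declared data dependencies rather than listing order.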
Directory of Open Access Journals (Sweden)
Bharathi eHattiangady
2014-03-01
Full Text Available Memory and mood deficits are the enduring brain-related symptoms in Gulf War illness (GWI). Both animal model and epidemiological investigations have indicated that these impairments in a majority of GW veterans are linked to exposures to chemicals such as pyridostigmine bromide (PB, an anti-nerve-gas drug), permethrin (PM, an insecticide) and DEET (a mosquito repellent) encountered during the Persian Gulf War-1. Our previous study in a rat model has shown that combined exposures to low doses of the GWI-related (GWIR) chemicals PB, PM and DEET, with or without 5 minutes of restraint stress (a mild stress paradigm), cause hippocampus-dependent spatial memory dysfunction in a water maze test and increased depressive-like behavior in a forced swim test. In this study, using a larger cohort of rats exposed to GWIR chemicals and stress, we investigated whether the memory deficiency identified earlier in a water maze test is reproducible with an alternative, stress-free, hippocampus-dependent memory test such as the object location test. We also ascertained the possible co-existence of hippocampus-independent memory dysfunction using a novel object recognition test, and alterations in mood function with additional tests for motivation and depression. Our results provide new evidence that exposure to low doses of GWIR chemicals and stress for four weeks causes deficits in hippocampus-dependent object location memory and perirhinal cortex-dependent novel object recognition memory. An open field test performed prior to other behavioral analyses revealed that the memory impairments were not associated with increased anxiety or deficits in general motor ability. However, behavioral tests for mood function, such as a voluntary physical exercise paradigm and a novelty-suppressed feeding test, showed decreased motivation and depression. Thus, exposure to GWIR chemicals and stress causes both hippocampus-dependent and hippocampus-independent memory impairments as well as
A hierarchical probabilistic model for rapid object categorization in natural scenes.
Directory of Open Access Journals (Sweden)
Xiaofu He
Full Text Available Humans can categorize objects in complex natural scenes within 100-150 ms. This amazing ability of rapid categorization has motivated many computational models. Most of these models require extensive training to obtain a decision boundary in a very high-dimensional (e.g., ∼6,000 in a leading model) feature space, and often categorize objects in natural scenes by categorizing the context that co-occurs with objects when objects do not occupy large portions of the scenes. It is thus unclear how humans achieve rapid scene categorization. To address this issue, we developed a hierarchical probabilistic model for rapid object categorization in natural scenes. In this model, a natural object category is represented by a coarse hierarchical probability distribution (PD), which includes PDs of object geometry and of the spatial configuration of object parts. Object parts are encoded by PDs of a set of natural object structures, each of which is a concatenation of local object features. Rapid categorization is performed as statistical inference. Since the model uses a very small number (∼100) of structures for even complex object categories such as animals and cars, it requires little training and is robust in the presence of large variations within object categories and in their occurrences in natural scenes. Remarkably, we found that the model categorized animals in natural scenes and cars in street scenes with near human-level performance. We also found that the model located animals and cars in natural scenes, thus overcoming a flaw in many other models, which is to categorize objects in natural context by categorizing contextual features. These results suggest that coarse PDs of object categories based on natural object structures, and statistical operations on these PDs, may underlie the human ability to rapidly categorize scenes.
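The core idea — a category as a coarse probability distribution over part properties, with categorization as statistical inference — can be illustrated in a heavily simplified form. The sketch below uses independent Gaussians over two hypothetical part features, not the paper's hierarchical PDs over natural object structures:

```python
import math

class CoarseCategory:
    """Toy stand-in for a coarse category PD: an independent Gaussian
    (mean, std) per part feature."""
    def __init__(self, name, stats):
        self.name = name
        self.stats = stats  # feature name -> (mean, std)

    def log_likelihood(self, features):
        # Sum of Gaussian log-densities, one per observed feature.
        ll = 0.0
        for key, x in features.items():
            mu, sd = self.stats[key]
            ll += -((x - mu) ** 2) / (2 * sd * sd) - math.log(sd * math.sqrt(2 * math.pi))
        return ll

def categorize(categories, features):
    # Inference: pick the category under which the observation is most likely.
    return max(categories, key=lambda c: c.log_likelihood(features)).name
```

Because each category is a broad distribution rather than a decision boundary in a huge feature space, very few parameters per category suffice, which mirrors the robustness-with-little-training point made in the abstract.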
Some Recent Advances in Computer Graphics.
Whitted, Turner
1982-01-01
General principles of computer graphics are reviewed, including discussions of display hardware, geometric modeling, algorithms, and applications in science, computer-aided design, flight training, communications, business, art, and entertainment. (JN)
Directory of Open Access Journals (Sweden)
CRIȘAN Horea George
2015-06-01
Full Text Available The development of equipment capable of carrying out a real-time evaluation of the loading of urban buses allows the distribution of urban transport vehicles on a network segment to be optimized, reducing waiting times and passenger crowding in buses while decreasing the need for bus maintenance, issues with direct effects on the growth of the carrier's economic profit. A solution based on optical detection is one that can generate highly accurate results at relatively low cost. This advantage can be obtained only if the constructive version of the equipment is properly designed, taking into account the geometric parameters of the light slots emitted and received by the sensors. Therefore, using three-dimensional CAD modelling, an optimal constructive variant was realized. This graphical method also allows the viewing, variation and synchronization of the sensors' geometrical parameters, so that the equipment can produce the desired effect. Further, a graphical simulation of the designed equipment's operation was carried out in order to validate the obtained results. The designed equipment was then built and tested under laboratory conditions, in order to be implemented and used under real conditions on the buses of an urban public transport operator.
Graphics-based site information management at Hanford TRU burial grounds
International Nuclear Information System (INIS)
Rod, S.R.
1992-01-01
The objective of the project described in this paper is to demonstrate the use of integrated computer graphics and data base techniques in managing nuclear waste facilities. The graphics-based site information management system (SIMS) combines a three-dimensional graphic model of the facility with databases which describe the facility's components and waste inventory. The SIMS can create graphic visualizations of any site data. The SIMS described here is being used by Westinghouse Hanford Company (WHC) as part of its transuranic (TRU) waste retrieval program at the Hanford Reservation. It is being used to manage an inventory of over 38,000 containers, to validate records, and to help visualize conceptual designs of waste retrieval operations
Directory of Open Access Journals (Sweden)
Marian Pompiliu CRISTESCU
2008-01-01
Full Text Available In databases, much work has been done towards extending models with advanced tools such as view technology, schema evolution support, multiple classification, role modeling and viewpoints. Over the past years, most of the research dealing with multiple object representation and evolution has proposed to enrich the monolithic vision of the classical object approach, in which an object belongs to one class hierarchy. In particular, the integration of the viewpoint mechanism into the conventional object-oriented data model gives it flexibility and improves the modeling power of objects. The viewpoint paradigm refers to the multiple descriptions, the distribution, and the evolution of objects. It can also be an undeniable contribution to the distributed design of complex databases. The motivation of this paper is to define an object data model integrating viewpoints in databases and to present a federated database architecture integrating multiple viewpoint sources following a local-as-extended-view data integration approach.
Computer graphics in reactor safety analysis
International Nuclear Information System (INIS)
Fiala, C.; Kulak, R.F.
1989-01-01
This paper describes a family of three computer graphics codes designed to assist the analyst in three areas: the modelling of complex three-dimensional finite element models of reactor structures; the interpretation of computational results; and the reporting of the results of numerical simulations. The purpose and key features of each code are presented. Graphics output used in actual safety analyses illustrates the capabilities of each code. 5 refs., 10 figs
Bierkens, M.F.P.; Bron, W.A.
2000-01-01
The VIDENTE program contains a decision support system (DSS) to choose between different models for stochastic modelling of water-table depths, and a graphical user interface to facilitate operating and running four implemented models: KALMAX, KALTFN,SSDS and EMERALD. In self-contained parts each of
Salas, L. A.; Veloz, S.; Ballard, G.
2011-12-01
Most forecasting approaches based on statistical models and data mining methods share a set of characteristics: all are constructed from training sets and validated against test sets using methods to avoid over-fitting on the training data; standard validation methods are used (e.g., AUC values for binary response data); some form of model averaging is applied when predicting new values from a set of competing models; and measurements of the error of predictions and the goodness-of-fit of each competing model are reported and made spatially explicit. Many packages exist in R for fitting statistical models and for data mining, but few include algorithms for forecasting and there are no model-averaging methods. However, results from these packages are commonly reported in R objects (S4 classes) that usually extend other objects, and so they share methods in common (e.g., "predict", "aic"). Here we illustrate an approach that takes advantage of the above-mentioned commonalities to develop a "framework" of objects that fit competing models, include algorithms for forecasting, and provide model-averaging methods. These objects can be easily extended to incorporate new kinds of statistical and data mining methods. We illustrate this approach with three types of objects and show how to interact with them to produce weighted averages from competing models, and some tabular and graphic outputs. These objects have been compiled into an R package ("RavianForecasting" - http://data.prbo.org/apps/ravian). We encourage others to use and contribute toward the development of these types of forecasting objects, or to develop alternatives with similar flexibility. We show how these can be easily extended to incorporate new statistical methods, new outputs, new methods to weight averages, and new methods to validate the models.
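The abstract mentions producing weighted averages from competing models but does not spell out the weighting scheme; a common choice, assumed here purely for illustration (and sketched in Python rather than the package's R/S4 classes), is Akaike weights:

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i ∝ exp(-Δ_i / 2), where Δ_i = AIC_i - min(AIC).
    Lower-AIC models get larger weights; the weights sum to 1."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def average_forecasts(forecasts, aics):
    """Model-averaged point forecasts: a weighted mean, per time step,
    of each competing model's forecast series."""
    w = akaike_weights(aics)
    return [sum(wi * f[t] for wi, f in zip(w, forecasts))
            for t in range(len(forecasts[0]))]
```

For example, two models with AICs 100 and 102 receive weights of roughly 0.73 and 0.27, so the averaged forecast leans toward the better-supported model without discarding the other.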
The 3D Object Mediator : Handling 3D Models on Internet
Kok, A.J.F.; Lawick van Pabst, J. van; Afsarmanesh, H.
1997-01-01
The 3D Object MEdiator (3DOME 3) offers two services for handling 3D models: a modelshop and a renderfarm. These services can be consulted through the Internet. The modelshop meets the demands for brokerage of geometric descriptions of 3D models. People who create geometric models of objects can
Visual reconstruction of Hampi Temple - Construed Graphically, Pictorially and Digitally
Directory of Open Access Journals (Sweden)
Meera Natampally
2014-05-01
Full Text Available The existing temple complex in Hampi, Karnataka, India was extensively studied, analyzed and documented. The complex was measured, drawn and digitized by plotting its edges and vertices using AutoCAD to generate 2D drawings. The 2D graphic elements developed were extended into three-dimensional objects using Google SketchUp. The tool has been used to facilitate the visual reconstruction of the architecture of the temple in its original form. 3D virtual modelling / visual reconstruction helps us to visualize the structure in its original form, giving a holistic picture of the Vijayanagara Empire in all its former glory. The project is interpreted graphically using AutoCAD drawings, pictorially, and digitally using the SketchUp model and Kinect.
Tool Support for Collaborative Teaching and Learning of Object-Oriented Modelling
DEFF Research Database (Denmark)
Hansen, Klaus Marius; Ratzer, Anne Vinter
2002-01-01
Modeling is central to doing and learning object-oriented development. We present a new tool, Ideogramic UML, for gesture-based collaborative modeling with the Unified Modeling Language (UML), which can be used to collaboratively teach and learn modeling. Furthermore, we discuss how we have effectively used Ideogramic UML to teach object-oriented modeling and the UML to groups of students using the UML for project assignments.
New ROOT Graphical User Interfaces for fitting
International Nuclear Information System (INIS)
Maline, D Gonzalez; Moneta, L; Antcheva, I
2010-01-01
ROOT, as a scientific data analysis framework, provides extensive capabilities via Graphical User Interfaces (GUIs) for performing interactive analysis and visualizing data objects like histograms and graphs. A new interface for fitting has been developed for performing, exploring and comparing fits on data point sets such as histograms, multi-dimensional graphs or trees. With this new interface, users can interactively build the fit model function, set parameter values and constraints, and select fit and minimization methods with their options. Functionality for visualizing the fit results is provided as well, with the possibility of drawing residuals or confidence intervals. Furthermore, the new fit panel behaves as a standalone application and does not prevent users from interacting with other windows. We will describe in detail the functionality of this user interface, covering as well new capabilities provided by the new fitting and minimization tools introduced recently in the ROOT framework.
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Erick da S.; Junior, Alberico B. de C. [Universidade Federal de Sergipe (UFSE), Sao Cristovao, SE (Brazil)
2016-07-01
Numerical dosimetry uses virtual anthropomorphic simulators to represent the human being in a computational framework and thus assess the risks associated with exposure to a radioactive source. With the development of computer animation software, the development of these simulators was facilitated, requiring only knowledge of human anatomy to prepare various types of simulators (man, woman, child and baby) in various positions (sitting, standing, running) or parts thereof (head, trunk and limbs). These simulators are constructed through manipulation operations and, owing to the versatility of the method, various irradiation geometries can be created that were not possible before. In this work, we have built an exposure scenario in a radiopharmacy, with manipulation of radioactive material, using animation and graphical-modeling software and an anatomical database. (author)
Graphical functions in parametric space
Golz, Marcel; Panzer, Erik; Schnetz, Oliver
2017-06-01
Graphical functions are positive functions on the punctured complex plane ℂ∖{0,1} which arise in quantum field theory. We generalize a parametric integral representation for graphical functions due to Lam, Lebrun and Nakanishi, which implies the real analyticity of graphical functions. Moreover, we prove a formula that relates graphical functions of planar dual graphs.
Quasi-Graphic Matroids (retracted)
J. Geelen (Jim); A.M.H. Gerards (Bert); G. Whittle (Geoff)
2018-01-01
Frame matroids and lifted-graphic matroids are two interesting generalizations of graphic matroids. Here, we introduce a new generalization, quasi-graphic matroids, that unifies these two existing classes. Unlike frame matroids and lifted-graphic matroids, it is easy to certify that a
Uncertain and multi-objective programming models for crop planting structure optimization
Directory of Open Access Journals (Sweden)
Mo LI,Ping GUO,Liudong ZHANG,Chenglong ZHANG
2016-03-01
Full Text Available Crop planting structure optimization is a significant way to increase agricultural economic benefits and improve agricultural water management. The complexities of fluctuating stream conditions, varying economic profits, uncertainties and errors in estimated modeling parameters, as well as the interplay among economic, social, natural-resource and environmental aspects, have led to the necessity of developing optimization models for crop planting structure which consider uncertainty and multiple objectives. In this study, three single-objective programming models under uncertainty for crop planting structure optimization were developed: an interval linear programming model, an inexact fuzzy chance-constrained programming (IFCCP) model and an inexact fuzzy linear programming (IFLP) model. Each of the three models takes grayness into account. Moreover, the IFCCP model considers fuzzy uncertainty of parameters/variables and stochastic characteristics of constraints, while the IFLP model takes into account the fuzzy uncertainty of both constraints and objective functions. To satisfy the sustainable development of crop planting structure planning, a fuzzy-optimization-theory-based fuzzy linear multi-objective programming model was developed, which is capable of reflecting both uncertainties and multiple objectives. In addition, a multi-objective fractional programming model for crop structure optimization was also developed, to quantitatively express the multiple objectives in one optimization model with the numerator representing maximum economic benefits and the denominator representing minimum crop planting area allocation. These models better reflect actual situations, considering the uncertainties and multiple objectives of crop planting structure optimization systems. The five models developed were then applied to a real case study in Minqin County, north-west China. The advantages, the applicable conditions and the solution methods
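The multi-objective core of such models — trading economic benefit against water use under land and water constraints — can be illustrated with a deliberately tiny weighted-sum scan. This is not the paper's interval or fuzzy programming; the two-crop setup, the scalarization weights and all numbers are hypothetical:

```python
def plan_crops(land, water, crops, w_econ=0.7, w_water=0.3, step=1.0):
    """Scan two-crop area allocations (a1, a2) subject to land and water
    limits, scoring each by a weighted sum of profit and (negative) water use.
    crops: list of two (profit per ha, water demand per ha) tuples."""
    (p1, d1), (p2, d2) = crops
    best, best_score = None, float("-inf")
    a1 = 0.0
    while a1 <= land:
        # Give crop 2 the rest of the land, capped by the remaining water.
        a2 = max(0.0, min(land - a1, (water - a1 * d1) / d2 if d2 else land - a1))
        if a1 * d1 + a2 * d2 <= water + 1e-9:
            profit = a1 * p1 + a2 * p2
            used = a1 * d1 + a2 * d2
            score = w_econ * profit - w_water * used
            if score > best_score:
                best, best_score = (a1, a2), score
        a1 += step
    return best
```

Varying `w_econ` and `w_water` traces out different compromise solutions, which is the essence of the weighted-sum treatment of multiple objectives; the paper's fractional model instead optimizes a benefit-per-area ratio directly.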
Publication-quality computer graphics
Energy Technology Data Exchange (ETDEWEB)
Slabbekorn, M.H.; Johnston, R.B. Jr.
1981-01-01
A user-friendly graphics software package is being used at Oak Ridge National Laboratory to produce publication-quality computer graphics. Close interaction between the graphic designer and the computer programmer has helped to create a highly flexible computer graphics system. The programmer-oriented environment of computer graphics has been modified to allow the graphic designer freedom to exercise his expertise with lines, form, typography, and color. The resultant product rivals or surpasses work previously done by hand. This presentation of computer-generated graphs, charts, diagrams, and line drawings clearly demonstrates the latitude and versatility of the software when directed by a graphic designer.
Modeling the Object-Oriented Software Process: OPEN and the Unified Process
van den Berg, Klaas; Aksit, Mehmet; van den Broek, P.M.
A short introduction to software process modeling is presented, particularly object-oriented modeling. Two major industrial process models are discussed: the OPEN model and the Unified Process model. In more detail, the quality assurance in the Unified Process tool (formerly called Objectory) is
Multi-objective optimization of the management of a waterworks using an integrated well field model
DEFF Research Database (Denmark)
Hansen, Annette Kirstine; Bauer-Gottwein, Peter; Rosbjerg, Dan
2012-01-01
This study uses multi-objective optimization of an integrated well field model to improve the management of a waterworks. The well field model, called WELLNES (WELL field Numerical Engine Shell) is a dynamic coupling of a groundwater model, a pipe network model, and a well model. WELLNES is capab...
Spatial object modeling in fuzzy topological spaces: with applications to land cover change
Tang, Xinming; Tang, Xinming
2004-01-01
The central topic of this thesis focuses on the accommodation of fuzzy spatial objects in a GIS. Several issues are discussed theoretically and practically, including the definition of fuzzy spatial objects, the topological relations between them, the modeling of fuzzy spatial objects, the
Archer, G. T.
1974-01-01
The model presents a systems analysis of human circulatory regulation based almost entirely on experimental data and cumulative present knowledge of the many facets of the circulatory system. The model itself consists of eighteen different major systems that enter into circulatory control. These systems are grouped into sixteen distinct subprograms that are melded together to form the total model. The model develops circulatory and fluid regulation in a simultaneous manner. Thus, the effects of hormonal and autonomic control, electrolyte regulation, and excretory dynamics are all important and are all included in the model.
Garces, I. J. L.; Rosario, J. B.
2017-10-01
For an ordered subset W = {w_1, w_2, …, w_k} of vertices in a connected graph G and a vertex v of G, the metric representation of v with respect to W is the k-vector r(v|W) = (d(v, w_1), d(v, w_2), …, d(v, w_k)), where d(v, w_i) is the distance between the vertices v and w_i in G. The set W is called a resolving set of G if r(u|W) = r(v|W) implies u = v. The metric dimension of G, denoted by β(G), is the minimum cardinality of a resolving set of G, and a resolving set of G with cardinality equal to its metric dimension is called a metric basis of G. A set T of vectors is called a positive lattice set if all the coordinates in each vector of T are positive integers. A positive lattice set T consisting of n k-vectors is called a metric graphic set if there exists a simple connected graph G of order n + k with β(G) = k such that T = {r(u_i|S) : u_i ∈ V(G)\S, 1 ≤ i ≤ n} for some metric basis S = {s_1, s_2, …, s_k} of G. If such G exists, then we say G is a metric graphic realization of T. In this paper, we introduce the concept of metric graphic sets anchored on the concept of metric dimension and provide some characterizations. We also give necessary and sufficient conditions for any positive lattice set consisting of 2 k-vectors to be a metric graphic set. We provide an upper bound for the sum of all the coordinates of any metric graphic set and enumerate some properties of positive lattice sets consisting of n 2-vectors that are not metric graphic sets.
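The definitions of resolving set and metric dimension lend themselves to a direct brute-force check on small graphs: test candidate sets W in order of increasing cardinality until every vertex receives a distinct metric representation. This sketch is purely illustrative and is not the authors' construction:

```python
# Brute-force metric dimension beta(G) of a small connected graph,
# following the definitions of resolving set and metric representation.
from itertools import combinations
from collections import deque

def distances_from(graph, source):
    """BFS distances from source to every vertex of an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def metric_dimension(graph):
    vertices = sorted(graph)
    dist = {v: distances_from(graph, v) for v in vertices}
    for k in range(1, len(vertices)):
        for W in combinations(vertices, k):
            # W resolves G iff the representations r(v|W) are all distinct
            reps = {tuple(dist[w][v] for w in W) for v in vertices}
            if len(reps) == len(vertices):
                return k
    return len(vertices)  # fallback; not reached for these examples

# Known values: a cycle C5 has metric dimension 2; a path P4 has 1.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(metric_dimension(c5), metric_dimension(p4))
```

The exponential sweep over subsets is only viable for tiny graphs; computing metric dimension in general is NP-hard, which is part of why characterizations like those in the paper are of interest.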
Introduction to regression graphics
Cook, R Dennis
2009-01-01
Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have written their own regression code, called R-code, in the Xlisp-Stat language; it is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava
Directory of Open Access Journals (Sweden)
Masoud Ghodrati
Full Text Available Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models, most of which try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically, in several processing stages. Along these stages a set of features of increasing complexity is extracted by different parts of the visual system. Elementary features like bars and edges are processed in earlier levels of the visual pathway, and the further one proceeds up this pathway, the more complex the detected features become. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model for different object recognition tasks. In this model, a set of object parts, named patches, is extracted in the intermediate stages. These object parts are used in the model's training procedure and play an important role in object recognition. These patches are selected indiscriminately from different positions of an image, which can lead to the extraction of non-discriminating patches that eventually reduce performance. In the proposed model we used an evolutionary algorithm to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features which are parts of target objects provide an efficient set for robust object recognition.
International Nuclear Information System (INIS)
Postil, S.D.; Ermolenko, A.I.; Ivanov, V.V.; Kotlyarov, V.T.
2002-01-01
A technology for creating an integrated information model of the 'Ukryttia' Object premises conditions was developed on the basis of the geoinformation system AutoCAD, DB Access, and the instrumental utility 3D MAX. Information models and a database for the conditions of the 'Ukryttia' Object's premises located between the 0.000 and 67.000 marks, in axes 41-52, rows G-T, were created. Using the integrated information model of the 'Ukryttia' Object premises conditions, a 3D surface distribution of the radiation field in the object premises at level 0.000 was obtained. It was revealed that the maximum values of the radiation field are concentrated over the clusters of fuel-containing materials
Graphical Language for Data Processing
Alphonso, Keith
2011-01-01
A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as lidar data, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.
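The interpreted execution model described here, traversing nodes connected by virtual wires and executing each in turn, can be sketched in a few lines. The class names and pipeline steps below are invented assumptions, not the system's actual .NET classes:

```python
# Minimal sketch of an interpreted process-graph executor: each node wraps a
# function, its "inputs" list names the nodes wired into it, and the executor
# runs nodes in dependency (topological) order.
from graphlib import TopologicalSorter

class Node:
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, list(inputs)

def execute(nodes):
    """Run every node once the nodes feeding it have produced values."""
    by_name = {n.name: n for n in nodes}
    deps = {n.name: set(n.inputs) for n in nodes}
    results = {}
    for name in TopologicalSorter(deps).static_order():
        node = by_name[name]
        results[name] = node.func(*(results[i] for i in node.inputs))
    return results

# Toy lidar-like pipeline: load -> filter out high returns -> average.
graph = [
    Node("load", lambda: [1.0, 9.0, 2.0, 8.0]),
    Node("filter", lambda pts: [p for p in pts if p < 5.0], ["load"]),
    Node("stats", lambda pts: sum(pts) / len(pts), ["filter"]),
]
print(execute(graph)["stats"])
```

Nesting, as the abstract describes, would fall out naturally: a whole graph can be wrapped as the `func` of a single `Node` in an outer graph.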
Interactive graphic simulator (SGI) for EOP training
International Nuclear Information System (INIS)
Cardoso, A.; Rivero, N.
1992-01-01
SGI is a system which graphically displays the results of calculation of the models of a full-scope simulator on high-resolution color monitors. Each plant system is represented by a diagram made up of graphic symbols denoting its components and by digital indicators showing the values of the process variables. These diagrams are enhanced by a color coding indicating the status of the components and by the updating of the values displayed by the indicators
Object recognition via MINACE filter trained on synthetic 3D model
Shaulskiy, Dmitry V.; Konstantinov, Maxim V.; Starikov, Rostislav S.
2015-09-01
This paper presents the results of a study applying a MINACE filter to the problem of recognizing an object subjected to out-of-plane rotation distortion and captured as a raster image. Filter training was conducted on images acquired from a synthetic 3D object model. The dependence of recognition results on the 3D model's illumination type is shown.
A Model for Setting Performance Objectives for Salmonella in the Broiler Supply Chain
Tromp, S.O.; Franz, E.; Rijgersberg, H.; Asselt, van E.D.; Fels-Klerx, van der H.J.
2010-01-01
A stochastic model for setting performance objectives for Salmonella in the broiler supply chain was developed. The goal of this study was to develop a model by which performance objectives for Salmonella prevalence at various points in the production chain can be determined, based on a preset final
An Object-Oriented Model for Extensible Concurrent Systems: the Composition-Filters Approach
Bergmans, Lodewijk; Aksit, Mehmet; Wakita, K.; Wakita, Ken; Yonezawa, Akinori
1992-01-01
Applying the object-oriented paradigm for the development of large and complex software systems offers several advantages, of which increased extensibility and reusability are the most prominent ones. The object-oriented model is also quite suitable for modeling concurrent systems. However, it
Harris, Laura Florence; Awoonor-Williams, John Koku; Gerdts, Caitlin; Gil Urbano, Laura; González Vélez, Ana Cristina; Halpern, Jodi; Prata, Ndola; Baffoe, Peter
2016-01-01
Conscientious objection to abortion, clinicians' refusal to perform legal abortions because of their religious or moral beliefs, has been the subject of increasing debate among bioethicists, policymakers, and public health advocates in recent years. Conscientious objection policies are intended to balance reproductive rights and clinicians' beliefs. However, in practice, clinician objection can act as a barrier to abortion access, impinging on reproductive rights and increasing unsafe abortion and related morbidity and mortality. There is little information about conscientious objection from a medical or public health perspective. A quantitative instrument is needed to assess the prevalence of conscientious objection and to provide insight into its practice. This paper describes the development of a survey instrument to measure conscientious objection to abortion provision. A literature review and in-depth formative interviews with stakeholders in Colombia were used to develop a conceptual model of conscientious objection. This model led to the development of a survey, which was piloted, and then administered, in Ghana. The model posits three domains of conscientious objection that form the basis for the survey instrument: 1) beliefs about abortion and conscientious objection; 2) actions related to conscientious objection and abortion; and 3) self-identification as a conscientious objector. The instrument is intended to be used to assess prevalence among clinicians trained to provide abortions, and to gain insight into how conscientious objection is practiced in a variety of settings. Its results can inform more effective and appropriate strategies to regulate conscientious objection.
Kuchinke, W.; Ohmann, C.; Verheij, R.A.; Veen, E.B. van; Arvanitis, T.N.; Taweel, A.; Delaney, B.C.
2014-01-01
Purpose: To develop a model describing core concepts and principles of data flow, data privacy and confidentiality, in a simple and flexible way, using concise process descriptions and a diagrammatic notation applied to research workflow processes. The model should help to generate robust data
Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin
2017-09-01
This paper presents black-box modelling of a palm oil biodiesel engine (POB) using the multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered in the algorithm: minimizing the number of terms of a model structure and minimizing the mean square error between actual and predicted outputs. The mathematical model used in this study to represent the POB system is the nonlinear auto-regressive moving average with exogenous input (NARMAX) model. Finally, model validity tests are applied to validate the candidate models obtained from the MOODE algorithm and to select an optimal model.
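The two objectives here, model size and prediction error, generally conflict, so a multi-objective search returns a set of non-dominated candidates rather than one winner. This hypothetical helper (not part of MOODE) filters candidate (number-of-terms, MSE) pairs down to that Pareto set:

```python
# Pareto filter for two minimization objectives: a candidate survives unless
# some other candidate is at least as good in both objectives.
# Candidate tuples are (number_of_terms, mean_squared_error).

def pareto_front(candidates):
    """Return the non-dominated candidates (lower is better in both)."""
    front = []
    for c in candidates:
        dominated = any(o != c and o[0] <= c[0] and o[1] <= c[1]
                        for o in candidates)
        if not dominated:
            front.append(c)
    return sorted(front)

# Illustrative candidates: adding terms reduces error, with diminishing returns.
models = [(2, 0.90), (3, 0.40), (4, 0.42), (5, 0.15), (6, 0.15)]
print(pareto_front(models))
```

Model validity tests, as the abstract notes, would then be applied to the surviving trade-off models to pick the final structure.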
International Nuclear Information System (INIS)
Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.
2015-01-01
The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.
A comparison of Image Quality Models and Metrics Predicting Object Detection
Rohaly, Ann Marie; Ahumada, Albert J., Jr.; Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1995-01-01
Many models and metrics for image quality predict image discriminability, the visibility of the difference between a pair of images. Some image quality applications, such as the quality of imaging radar displays, are concerned with object detection and recognition. Object detection involves looking for one of a large set of object sub-images in a large set of background images and has been approached from this general point of view. We find that discrimination models and metrics can predict the relative detectability of objects in different images, suggesting that these simpler models may be useful in some object detection and recognition applications. Here we compare three alternative measures of image discrimination: a multiple-frequency-channel model, a single-filter model, and RMS error.
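Of the three discrimination measures compared, RMS error is simple enough to state directly: the root-mean-square of the pixel-wise differences between the background image and the object-bearing image. A toy sketch on 1-D "images" (illustrative values only):

```python
# RMS error between two equal-sized images, flattened to value sequences.
import math

def rms_error(image_a, image_b):
    assert len(image_a) == len(image_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(image_a, image_b))
                     / len(image_a))

background = [10.0, 10.0, 10.0, 10.0]   # uniform background
with_object = [10.0, 14.0, 13.0, 10.0]  # same scene with an object present
print(rms_error(background, with_object))
```

A higher RMS error between the two members of a pair predicts, on this simplest account, an easier detection; the channel and filter models refine this by weighting differences by their visibility.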
Multi-objective radiomics model for predicting distant failure in lung SBRT
Zhou, Zhiguo; Folkert, Michael; Iyengar, Puneeth; Westover, Kenneth; Zhang, Yuanyuan; Choy, Hak; Timmerman, Robert; Jiang, Steve; Wang, Jing
2017-06-01
Stereotactic body radiation therapy (SBRT) has demonstrated high local control rates in early stage non-small cell lung cancer patients who are not ideal surgical candidates. However, distant failure after SBRT is still common. For patients at high risk of early distant failure after SBRT treatment, additional systemic therapy may reduce the risk of distant relapse and improve overall survival. Therefore, a strategy that can correctly stratify patients at high risk of failure is needed. The field of radiomics holds great potential in predicting treatment outcomes by using high-throughput extraction of quantitative imaging features. The construction of predictive models in radiomics is typically based on a single objective such as overall accuracy or the area under the curve (AUC). However, because of imbalanced positive and negative events in the training datasets, a single objective may not be ideal to guide model construction. To overcome these limitations, we propose a multi-objective radiomics model that simultaneously considers sensitivity and specificity as objective functions. To design a more accurate and reliable model, an iterative multi-objective immune algorithm (IMIA) was proposed to optimize these objective functions. The multi-objective radiomics model is more sensitive than the single-objective model, while maintaining the same levels of specificity and AUC. The IMIA performs better than the traditional immune-inspired multi-objective algorithm.
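The argument against a single accuracy objective on imbalanced data can be made concrete with a toy example: when distant failures are rare, a trivial "predict no failure" rule scores high accuracy while missing every failure. The numbers below are invented for illustration:

```python
# Sensitivity, specificity and accuracy from binary labels/predictions,
# showing why accuracy alone misleads on imbalanced outcome data.

def sens_spec_acc(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    acc = (tp + tn) / len(y_true)
    return sens, spec, acc

# 90 patients without distant failure, 10 with; the trivial model misses all 10.
y_true = [0] * 90 + [1] * 10
trivial = [0] * 100
sens, spec, acc = sens_spec_acc(y_true, trivial)
print(sens, spec, acc)
```

The trivial rule reaches 90% accuracy with zero sensitivity, which is why the model described here optimizes sensitivity and specificity as separate objectives rather than folding them into one score.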
Cleaver, Samantha
2008-01-01
Not so many years ago, comic books in school were considered the enemy. Students caught sneaking comics between the pages of bulky--and less engaging--textbooks were likely sent to the principal. Today, however, comics, including classics such as "Superman" but also their generally more complex, nuanced cousins, graphic novels, are not only…
Blanchard, D. C.
1986-01-01
Printer Graphics Package (PGP) is tool for making two-dimensional symbolic plots on line printer. PGP created to support development of Heads-Up Display (HUD) simulation. Standard symbols defined with HUD in mind. Available symbols include circle, triangle, quadrangle, window, line, numbers, and text. Additional symbols easily added or built up from available symbols.
Mathematical Graphic Organizers
Zollman, Alan
2009-01-01
As part of a math-science partnership, a university mathematics educator and ten elementary school teachers developed a novel approach to mathematical problem solving derived from research on reading and writing pedagogy. Specifically, research indicates that students who use graphic organizers to arrange their ideas improve their comprehension…
Free, cross-platform gRaphical software
DEFF Research Database (Denmark)
Dethlefsen, Claus
2006-01-01
-recursive graphical models, and models defined using the BUGS language. Today, there exists a wide range of packages to support the analysis of data using graphical models. Here, we focus on Open Source software, making it possible to extend the functionality by integrating these packages into more general tools. We will attempt to give an overview of the available Open Source software, with focus on the gR project. This project was launched in 2002 to make facilities in R for graphical modelling. Several R packages have been developed within the gR project, both for display and analysis of graphical models...
Putnam, William M.
2011-01-01
Earth system models like the Goddard Earth Observing System model (GEOS-5) have been pushing the limits of large clusters of multi-core microprocessors, producing breath-taking fidelity in resolving cloud systems at a global scale. GPU computing presents an opportunity for improving the efficiency of these leading edge models. A GPU implementation of GEOS-5 will facilitate the use of cloud-system resolving resolutions in data assimilation and weather prediction, at resolutions near 3.5 km, improving our ability to extract detailed information from high-resolution satellite observations and ultimately produce better weather and climate predictions
Directory of Open Access Journals (Sweden)
Mark N Read
2016-09-01
Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
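The correlated random walk evaluated here can be sketched minimally in 2-D: each step's heading perturbs the previous one (directional persistence), and the meandering index is net displacement divided by path length. Parameters and the comparison below are illustrative assumptions, not values fitted to the two-photon data:

```python
# 2-D correlated random walk: a small turn standard deviation gives strong
# directional persistence and a high meandering index; a large one decorrelates
# headings and behaves like Brownian motion.
import math
import random

def crw(steps, speed, turn_sd, seed=1):
    rng = random.Random(seed)
    x = y = 0.0
    heading = rng.uniform(-math.pi, math.pi)
    path_len = 0.0
    for _ in range(steps):
        heading += rng.gauss(0.0, turn_sd)  # persistence via correlated turns
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        path_len += speed
    return math.hypot(x, y) / path_len      # meandering index in (0, 1]

persistent = crw(steps=500, speed=1.0, turn_sd=0.1)
brownian_like = crw(steps=500, speed=1.0, turn_sd=3.0)
print(persistent > brownian_like)
```

Fitting such a model against several metrics at once (translational speed, turn speed, meandering index) is exactly where the multi-objective optimization described in the abstract comes in: no single parameterization is best on every metric, so the fit is a Pareto set of trade-offs.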
Unsupervised Object Modeling and Segmentation with Symmetry Detection for Human Activity Recognition
Directory of Open Access Journals (Sweden)
Jui-Yuan Su
2015-04-01
Full Text Available In this paper we present a novel unsupervised approach to detecting and segmenting objects as well as their constituent symmetric parts in an image. Traditional unsupervised image segmentation is limited by two obvious deficiencies: the object detection accuracy degrades with the misaligned boundaries between the segmented regions and the target, and pre-learned models are required to group regions into meaningful objects. To tackle these difficulties, the proposed approach aims at incorporating the pair-wise detection of symmetric patches to achieve the goal of segmenting images into symmetric parts. The skeletons of these symmetric parts then provide estimates of the bounding boxes to locate the target objects. Finally, for each detected object, the graphcut-based segmentation algorithm is applied to find its contour. The proposed approach has significant advantages: no a priori object models are used, and multiple objects are detected. To verify the effectiveness of the approach, human objects are extracted from among the detected objects using the cues that a face part contains an oval shape and skin colors. The detected human objects and their parts are finally tracked across video frames to capture the object part movements for learning the human activity models from video clips. Experimental results show that the proposed method gives good performance on publicly available datasets.
Menegaldo, Luciano Luporini; de Oliveira, Liliam Fernandes; Minato, Kin K
2014-04-04
This paper describes the "EMG Driven Force Estimator (EMGD-FE)", a Matlab® graphical user interface (GUI) application that estimates skeletal muscle forces from electromyography (EMG) signals. Muscle forces are obtained by numerically integrating a system of ordinary differential equations (ODEs) that simulates Hill-type muscle dynamics and that utilises EMG signals as input. In the current version, the GUI can estimate the forces of lower limb muscles executing isometric contractions. Muscles from other parts of the body can be tested as well, although no default values for model parameters are provided. To achieve accurate evaluations, EMG collection is performed simultaneously with torque measurement from a dynamometer. The computer application guides the user, step-by-step, to pre-process the raw EMG signals, create inputs for the muscle model, numerically integrate the ODEs and analyse the results. An example of the application's functions is presented using the quadriceps femoris muscle. Individual muscle force estimations for the four components as well as the knee isometric torque are shown. The proposed GUI can estimate individual muscle forces from EMG signals of skeletal muscles. The estimation accuracy depends on several factors, including signal collection and modelling hypothesis issues.
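A minimal sketch of the kind of ODE such a tool integrates is first-order Hill-type activation dynamics driven by a rectified, normalized EMG envelope. The time constants and the fixed-step RK4 integrator below are generic textbook choices, not EMGD-FE's actual parameters or solver:

```python
# First-order activation dynamics da/dt = (e - a) / tau, with a faster time
# constant for activation (e > a) than for deactivation, integrated with RK4.

def activation_dynamics(emg, dt=0.001, tau_act=0.015, tau_deact=0.050):
    """Return the activation trace a(t) driven by envelope samples e(t)."""
    def dadt(a, e):
        tau = tau_act if e > a else tau_deact
        return (e - a) / tau

    a = 0.0
    trace = []
    for e in emg:
        # classic RK4 step; e is held constant over the step
        k1 = dadt(a, e)
        k2 = dadt(a + 0.5 * dt * k1, e)
        k3 = dadt(a + 0.5 * dt * k2, e)
        k4 = dadt(a + dt * k3, e)
        a += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        trace.append(a)
    return trace

# 0.3 s burst of full excitation followed by 0.3 s of silence, sampled at 1 kHz.
envelope = [1.0] * 300 + [0.0] * 300
a = activation_dynamics(envelope)
print(round(max(a), 3), round(a[-1], 4))
```

In a full Hill-type pipeline this activation would then drive the contractile element's force-length and force-velocity relations; here only the activation stage is shown.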
3D object-oriented image analysis in 3D geophysical modelling
DEFF Research Database (Denmark)
Fadel, I.; van der Meijde, M.; Kerle, N.
2015-01-01
Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects......) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...... interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract...
Feedforward Object-Vision Models Only Tolerate Small Image Variations Compared to Human
Directory of Open Access Journals (Sweden)
Masoud eGhodrati
2014-07-01
Full Text Available Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modelling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performances on challenging image databases, they fail to perform well when images with more complex variations of the same object are applied to them. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performances. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only in low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e. briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modelling. We show that this approach is not of significant help in solving the computational crux of object recognition (that is, invariant object recognition) when the identity-preserving image variations become more complex.
Graphics gems V (Macintosh version)
Paeth, Alan W
1995-01-01
Graphics Gems V is the newest volume in The Graphics Gems Series. It is intended to provide the graphics community with a set of practical tools for implementing new ideas and techniques, and to offer working solutions to real programming problems. These tools are written by a wide variety of graphics programmers from industry, academia, and research. The books in the series have become essential, time-saving tools for many programmers. Latest collection of graphics tips in The Graphics Gems Series written by the leading programmers in the field. Contains over 50 new gems displaying some of t
K. Schwarz
2004-01-01
This thesis investigates a possible solution to adapting an automatically generated presentation to an anonymous user. We will explore the field of User Modeling, specifically Adaptive Hypermedia, to find suitable methods. In our case study, we combine the methods we find to develop a
Topographic Digital Raster Graphics - USGS DIGITAL RASTER GRAPHICS
NSGIC Local Govt | GIS Inventory — USGS Topographic Digital Raster Graphics downloaded from LABINS (http://data.labins.org/2003/MappingData/drg/drg_stpl83.cfm). A digital raster graphic (DRG) is a...
An object-based storage model for distributed remote sensing images
Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng
2006-10-01
It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path and the management path, which solves the metadata bottleneck that exists in traditional storage models, and has the characteristics of parallel data access, data sharing across platforms, intelligence of storage devices and security of data access. We use object-based storage in the storage management of remote sensing images to construct an object-based storage model for distributed remote sensing images. In this storage model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give some test results comparing the write performance of the traditional network storage model and the object-based storage model.
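The separation of data path and metadata path described above can be sketched in a few lines. This is a hypothetical toy, not the authors' system: a metadata server only maps object IDs to devices, while clients read and write payload bytes directly against the "intelligent" storage devices.

```python
# Minimal sketch of object-based storage for image tiles, illustrating
# the separation of the data path from the metadata path. All class and
# method names here are hypothetical, for illustration only.
import hashlib

class MetadataServer:
    """Holds only object metadata; payload bytes never pass through it."""
    def __init__(self):
        self._index = {}          # object_id -> device holding the bytes

    def register(self, object_id, device):
        self._index[object_id] = device

    def locate(self, object_id):
        return self._index[object_id]

class StorageDevice:
    """An 'intelligent' device that stores self-describing objects."""
    def __init__(self, name):
        self.name = name
        self._objects = {}        # object_id -> bytes

    def put(self, payload):
        # Content-addressed ID, so objects are self-describing
        object_id = hashlib.sha256(payload).hexdigest()
        self._objects[object_id] = payload
        return object_id

    def get(self, object_id):
        return self._objects[object_id]

# Data path: one metadata lookup, then the client talks to the device.
mds = MetadataServer()
dev = StorageDevice("osd-0")
oid = dev.put(b"remote-sensing image tile")
mds.register(oid, dev)
tile = mds.locate(oid).get(oid)
```

Because the metadata server is out of the data path, many clients can stream image tiles from many devices in parallel, which is the bottleneck-avoidance property the abstract refers to.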
DEFF Research Database (Denmark)
Garcia Clavero, Ana Belén; Madsen, A.; Vigre, Håkan
2012-01-01
Human campylobacteriosis represents an important economic and public health problem. Campylobacter originating from the feces of infected chickens will contaminate chicken meat, posing a risk to the consumer. Vaccination against Campylobacter in broilers is one possible measure to reduce consumers’ exposure to Campylobacter. In this presentation we focus on the development of a computerized decision support system to aid management decisions on Campylobacter vaccination of commercial broilers. Broilers should be vaccinated against Campylobacter in the first 2 weeks of age. Therefore, the decision…, epidemiological and economic factors (cost-reward functions) have been included in the models. The final outcome of the models is presented as probabilities of the expected level of Campylobacter and in financial terms influenced by the decision on vaccination. For example, if the best decision seems to be to vaccinate…
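The cost-reward comparison underlying such a decision support system can be sketched as a simple expected-value calculation. Everything below is a hypothetical illustration: the costs, the loss figure, and the vaccine efficacy are invented placeholders, not values from the authors' models.

```python
# Hypothetical sketch of the decision idea: compare the expected net
# result per bird of vaccinating vs. not vaccinating, given a predicted
# probability that the flock becomes Campylobacter-colonised.
def expected_net(vaccinate, p_colonised):
    VACCINE_COST = 0.08          # EUR per bird, assumed
    LOSS_IF_POSITIVE = 0.25      # EUR per bird if the flock is positive, assumed
    EFFICACY = 0.7               # assumed reduction in colonisation risk
    p = p_colonised * (1 - EFFICACY) if vaccinate else p_colonised
    return -(VACCINE_COST if vaccinate else 0.0) - p * LOSS_IF_POSITIVE

for p in (0.2, 0.5, 0.8):
    best = max((True, False), key=lambda v: expected_net(v, p))
    print(p, "vaccinate" if best else "do not vaccinate")
```

With these made-up numbers, vaccination pays off only above a break-even colonisation probability; the real system replaces the point estimates with probabilistic epidemiological models.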
Object-oriented modeling and simulation of the closed loop cardiovascular system by using SIMSCAPE.
de Canete, J Fernandez; del Saz-Orozco, P; Moreno-Boza, D; Duran-Venegas, E
2013-05-01
The modeling of physiological systems via mathematical equations reflects the calculation procedure more than the structure of the real system modeled, with the simulation environment SIMULINK™ being one of the best suited to this strategy. Nevertheless, object-oriented modeling is spreading in current simulation environments through the use of the individual components of the model and its interconnections to define the underlying dynamic equations. In this paper we describe the use of the SIMSCAPE™ simulation environment in the object-oriented modeling of the closed loop cardiovascular system. The described approach represents a valuable tool in the teaching of physiology for graduate medical students. Copyright © 2013 Elsevier Ltd. All rights reserved.
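For contrast with the component-based approach the authors advocate, the equation-based style can be sketched with a toy two-element Windkessel model of the arterial system, integrated by forward Euler. This is not the SIMSCAPE model from the paper; the parameter values and the inflow waveform are illustrative assumptions only.

```python
# Toy two-element Windkessel: C*dp/dt = q_in(t) - p/R,
# integrated with forward Euler. Values are illustrative, not clinical.
import math

R = 1.0     # peripheral resistance (mmHg*s/mL), assumed
C = 1.5     # arterial compliance (mL/mmHg), assumed
dt = 1e-3   # time step (s)
T = 0.8     # cardiac period (s)

def inflow(t):
    # Idealised half-sine ejection during the first 0.3 s of each beat
    phase = t % T
    return 400.0 * math.sin(math.pi * phase / 0.3) if phase < 0.3 else 0.0

p = 80.0                              # initial arterial pressure (mmHg)
for step in range(int(5 * T / dt)):   # simulate five beats
    t = step * dt
    p += dt * (inflow(t) - p / R) / C

print(round(p, 1))                    # arterial pressure after five beats
```

In an object-oriented environment such as SIMSCAPE, the same system would instead be assembled from resistor and capacitor components, with the differential equation derived automatically from their interconnection.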
Application of fuzzy goal programming approach to multi-objective linear fractional inventory model
Dutta, D.; Kumar, Pavan
2015-09-01
In this paper, we propose a model and solution approach for a multi-item inventory problem without shortages. The proposed model is formulated as a fractional multi-objective optimisation problem along with three constraints: budget constraint, space constraint and budgetary constraint on ordering cost of each item. The proposed inventory model becomes a multiple criteria decision-making (MCDM) problem in fuzzy environment. This model is solved by multi-objective fuzzy goal programming (MOFGP) approach. A numerical example is given to illustrate the proposed model.
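The max-min idea behind fuzzy goal programming can be illustrated numerically: each objective gets a linear membership function between its worst and best attainable values, and the decision maximising the smallest membership is chosen. The sketch below is a two-item toy with invented cost and space figures, solved by grid search rather than by the authors' MOFGP formulation.

```python
# Toy max-min fuzzy goal programming over two order quantities.
# All coefficients and aspiration levels are invented for illustration.
def membership(value, worst, best):
    """Linear membership for a minimised objective: 1 at 'best', 0 at 'worst'."""
    if value <= best:
        return 1.0
    if value >= worst:
        return 0.0
    return (worst - value) / (worst - best)

def cost(q1, q2):     # ordering + holding cost (objective 1), assumed form
    return 100 / q1 + 0.5 * q1 + 80 / q2 + 0.4 * q2

def space(q1, q2):    # warehouse space used (objective 2), assumed form
    return 2 * q1 + 3 * q2

best = (None, -1.0)
for q1 in range(1, 51):
    for q2 in range(1, 51):
        # Overall satisfaction = smallest membership across objectives
        lam = min(membership(cost(q1, q2), worst=80, best=20),
                  membership(space(q1, q2), worst=200, best=50))
        if lam > best[1]:
            best = ((q1, q2), lam)

(q1, q2), lam = best
print(q1, q2, round(lam, 3))
```

In the actual MOFGP approach the max-min problem is solved as a mathematical program (maximise lambda subject to membership constraints), not by enumeration; the grid search here just makes the aggregation visible.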
Interactive Graphic Journalism
Directory of Open Access Journals (Sweden)
Laura Schlichting
2016-12-01
Full Text Available This paper examines graphic journalism (GJ in a transmedial context, and argues that transmedial graphic journalism (TMGJ is an important and fruitful new form of visual storytelling that will re-invigorate the field of journalism, as it steadily tests out and plays with new media, ultimately leading to new challenges in both the production and reception process. With TMGJ, linear narratives may be broken up, and ethical issues concerning the emotional and entertainment value are raised when it comes to ‘playing the news’. The aesthetic characteristics of TMGJ will be described, and interactivity’s influence on non-fiction storytelling will be explored in an analysis of The Nisoor Square Shooting (2011 and Ferguson Firsthand (2015.
Definition of an Object-Oriented Modeling Language for Enterprise Architecture
Lê, Lam Son; Wegmann, Alain
2005-01-01
In enterprise architecture, the goal is to integrate business resources and IT resources in order to improve an enterprise's competitiveness. In an enterprise architecture project, the development team usually constructs a model that represents the enterprise: the enterprise model. In this paper, we present a modeling language for building such enterprise models. Our enterprise models are hierarchical object-oriented representations of the enterprises. This paper presents the foundations of o...
Schmidt, J.; Piret, C.; Zhang, N.; Kadlec, B. J.; Liu, Y.; Yuen, D. A.; Wright, G. B.; Sevre, E. O.
2008-12-01
The faster growth curve in the speed of GPUs relative to CPUs in recent years, and their rapidly gained popularity, have spawned a new area of development in computational technology. There is much potential in utilizing GPUs for solving evolutionary partial differential equations and producing the attendant visualization. We are concerned with modeling tsunami waves, where computational time is of extreme essence for broadcasting warnings. In order to test the efficacy of the GPU on the set of shallow-water equations, we employed the NVIDIA 8600M GT board on a MacBook Pro. We have compared the relative speeds between the CPU and the GPU on a single processor for two types of spatial discretization based on second-order finite differences and radial basis functions (RBFs). RBFs are a more novel method based on a gridless, multi-scale, adaptive framework. Using the NVIDIA 8600M GT, we obtained a speed-up factor of 8 in favor of the GPU for the finite-difference method and a factor of 7 for the RBF scheme. We have also studied the atmospheric dynamics problem of swirling flows over a spherical surface and found a speed-up of 5.3 using the GPU. The time steps employed for the RBF method are larger than those used in finite differences, because RBF requires far fewer nodal points. Thus, in modeling the same physical time, RBF acting in concert with the GPU would be the fastest way to go.
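A minimal CPU version of the finite-difference scheme can be sketched for the 1D linearised shallow-water equations. This is a hedged illustration, not the authors' code: the grid size, still-water depth, and step count are assumed, and the time stepping is a simple forward-backward scheme on a staggered grid.

```python
# 1D linearised shallow water: dh/dt = -H du/dx, du/dt = -g dh/dx,
# second-order centred differences on a staggered grid, with a
# forward-backward update and a CFL-limited time step. Illustrative only.
import numpy as np

g, H = 9.81, 100.0                    # gravity (m/s^2), depth (m)
n, dx = 200, 1000.0                   # grid cells, spacing (m)
dt = 0.5 * dx / np.sqrt(g * H)        # time step at CFL number 0.5

x = (np.arange(n) + 0.5) * dx
h = np.exp(-((x - n * dx / 2) / (10 * dx)) ** 2)  # Gaussian hump (m)
u = np.zeros(n + 1)                   # face velocities; walls at ends

for _ in range(200):
    # Update face velocities from the surface-height gradient ...
    u[1:-1] -= dt * g * (h[1:] - h[:-1]) / dx
    # ... then cell heights from the updated velocity divergence.
    h -= dt * H * (u[1:] - u[:-1]) / dx

print(float(h.max()))                 # the hump splits into two waves
```

Porting this to a GPU amounts to turning the two vectorised update lines into element-wise kernels; the RBF alternative replaces the fixed-stencil differences with gridless differentiation matrices, which is why it tolerates larger time steps.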
Kim, Ann
2009-01-01
It's no secret that children and YAs are clued in to graphic novels (GNs) and that comics-loving adults are positively giddy that this format is getting the recognition it deserves. Still, there is a whole swath of library card-carrying grown-up readers out there with no idea where to start. Splashy movies such as "300" and "Spider-Man" and their…
An object-oriented forest landscape model and its representation of tree species
Hong S. He; David J. Mladenoff; Joel Boeder
1999-01-01
LANDIS is a forest landscape model that simulates the interaction of large landscape processes and forest successional dynamics at the tree species level. We discuss how object-oriented design (OOD) approaches such as modularity, abstraction and encapsulation are integrated into the design of LANDIS. We show that using OOD approaches, model decisions (olden as model...
Using Graphics in Software Documentation.
Wise, Mary R.
1993-01-01
Examines two ways that graphics and graphic techniques can help communicate technical topics visually: by helping readers navigate through a manual; and by helping readers better understand the material in the manual. (SR)
Career Opportunities in Computer Graphics.
Langer, Victor
1983-01-01
Reviews the impact of computer graphics on industrial productivity. Details the computer graphics technician curriculum at Milwaukee Area Technical College and the cooperative efforts of business and industry to fund and equip the program. (SK)
A multi-objective programming model for assessment the GHG emissions in MSW management
International Nuclear Information System (INIS)
Mavrotas, George; Skoulaxinou, Sotiria; Gakis, Nikos; Katsouros, Vassilis; Georgopoulou, Elena
2013-01-01
Highlights: • The multi-objective multi-period optimization model. • The solution approach for the generation of the Pareto front with mathematical programming. • The very detailed description of the model (decision variables, parameters, equations). • The use of IPCC 2006 guidelines for landfill emissions (first order decay model) in the mathematical programming formulation. - Abstract: In this study a multi-objective mathematical programming model is developed for taking GHG emissions into account in Municipal Solid Waste (MSW) management. Mathematical programming models are often used for structure, design and operational optimization of various systems (energy, supply chain, processes, etc.). Over the last twenty years they have been used increasingly often in Municipal Solid Waste (MSW) management in order to provide optimal solutions, with cost being the usual driver of the optimization. In our work we consider GHG emissions as an additional criterion, aiming at a multi-objective approach. The Pareto front (Cost vs. GHG emissions) of the system is generated using an appropriate multi-objective method. This information is essential to the decision maker, who can explore the trade-offs along the Pareto curve and select the most preferred among the Pareto-optimal solutions. In the present work a detailed multi-objective, multi-period mathematical programming model is developed in order to describe the waste management problem. Apart from the bi-objective approach, the major innovations of the model are (1) the detailed modeling considering 34 materials and 42 technologies, (2) the detailed calculation of the energy content of the various streams based on the detailed material balances, and (3) the incorporation of the IPCC guidelines for the CH4 generated in the landfills (first order decay model). The equations of the model are described in full detail. Finally, the whole approach is illustrated with a case study referring to the application
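The idea of generating a cost-vs-GHG Pareto front can be illustrated with a toy allocation problem. The sketch below is not the authors' detailed model (which uses mathematical programming over 34 materials and 42 technologies): it enumerates allocations of waste across three hypothetical technologies with invented per-tonne figures and filters the non-dominated points.

```python
# Toy Pareto-front generation for MSW management: allocate tonnage to
# three technologies and keep the non-dominated (cost, GHG) points.
# Per-tonne cost and emission figures are invented for illustration.
TECH = {                      # (cost EUR/t, GHG kgCO2e/t), assumed
    "landfill":     (30.0, 600.0),
    "incineration": (80.0, 250.0),
    "recycling":    (60.0, 100.0),
}
TOTAL = 100                   # tonnes of MSW to allocate

points = []
for a in range(0, TOTAL + 1, 5):
    for b in range(0, TOTAL - a + 1, 5):
        c = TOTAL - a - b
        alloc = dict(zip(TECH, (a, b, c)))
        cost = sum(TECH[t][0] * q for t, q in alloc.items())
        ghg = sum(TECH[t][1] * q for t, q in alloc.items())
        points.append((cost, ghg))

# A point is Pareto-optimal if no other point is at least as good in
# both objectives (and different).
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                     for q in points)]
pareto.sort()
print(len(pareto), pareto[0], pareto[-1])
```

With these made-up numbers, incineration is dominated and the front consists of landfill/recycling mixes, running from cheapest-but-dirtiest to cleanest-but-costliest; a real model generates the same kind of curve with an exact multi-objective method (e.g. an epsilon-constraint scheme) instead of enumeration.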