Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir
2016-12-01
The paper presents two different approaches to reducing the computation time of reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory for calculation in single-threaded mode. The second approach is based on parallel algorithms applied to the data structures from the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing and also show how computing time depends on the data structure used.
On some questions in computer modeling of the reachability sets constructing problems
Ushakov, V. N.; Parshikov, G. V.; Matviychuk, A. R.
2016-10-01
The research considers the problem of constructing the reachability sets of a non-linear dynamical system in n-dimensional Euclidean space on a fixed time interval. Approximate methods for constructing reachability sets are considered, and accuracy estimates for these methods are given. The research contains computational experiments on computer modeling of the described methods, with algorithms implemented for two computation technologies: CPU and GPU (using CUDA). A description and comparison of the approaches to computer modeling of the problem are given. Furthermore, the CPU-based modeling results are compared with those obtained on a GPU using CUDA. The research also discusses some side issues that appeared during computer modeling and algorithm implementation, as well as ways to eliminate these issues or reduce their impact.
A market model: uncertainty and reachable sets
Raczynski Stanislaw
2015-01-01
Uncertain parameters are always present in models that include the human factor. In marketing, uncertain consumer behavior makes it difficult to predict future events and elaborate good marketing strategies. Sometimes uncertainty is modeled using stochastic variables. Our approach is quite different. The dynamic market with uncertain parameters is treated using differential inclusions, which permits determining the corresponding reachable sets. This is not a statistical analysis. We are looking for solutions of the differential inclusions. The purpose of the research is to find a way to obtain and visualize the reachable sets, in order to know the limits for the important marketing variables. The modeling method consists in defining the differential inclusion and finding its solution, using the differential inclusion solver developed by the author. As a result we obtain images of the reachable sets where the main control parameter is the share of investment, being a part of the revenue. As an additional result we can also define the optimal investment strategy. The conclusion is that the differential inclusion solver can be a useful tool in market model analysis.
Approximation of Reachable Sets using Optimal Control Algorithms
Baier, Robert; Gerdts, Matthias; Xausa, Ilaria
2013-01-01
To appear; International audience; Numerical experiences with a method for the approximation of reachable sets of nonlinear control systems are reported. The method is based on the formulation of suitable optimal control problems with varying objective functions, whose discretization by Euler's method leads to finite-dimensional non-convex nonlinear programs. These are solved by a sequential quadratic programming method. An efficient adjoint method for gradient computation is used to reduce th...
Internal ellipsoidal estimates of reachable set of impulsive control systems
Matviychuk, Oksana G. [Institute of Mathematics and Mechanics, Russian Academy of Sciences, 16 S. Kovalevskaya str., Ekaterinburg, 620990, Russia and Ural Federal University, 19 Mira str., Ekaterinburg, 620002 (Russian Federation)]
2014-11-18
A problem of estimating reachable sets of a linear impulsive control system with uncertainty in the initial data is considered. The impulsive controls in the dynamical system belong to the intersection of a special cone with a generalized ellipsoid, both taken in the space of functions of bounded variation. It is assumed that ellipsoidal state constraints are imposed. Algorithms for constructing internal ellipsoidal estimates of reachable sets for such control systems are given, together with numerical simulation results.
Reachable set modeling and engagement analysis of exoatmospheric interceptor
Chai Hua; Liang Yangang; Chen Lei; Tang Guojin
2014-01-01
A novel reachable set (RS) model is developed within a framework of exoatmospheric interceptor engagement analysis. The boost phase steering scheme and trajectory distortion mechanism of the interceptor are first explored. A mathematical model of the distorted RS is then formulated through a dimension-reduction analysis. By treating the outer boundary of the RS on the sphere surface as a spherical convex hull, two relevant theorems are proposed and the RS envelope is described using computational geometry. Based on the RS model, algorithms for intercept window analysis and launch parameter determination are proposed, and numerical simulations are carried out for interceptors with different energy or launch points. Results show that the proposed method can avoid intensive on-line computation and provides an accurate and effective approach for interceptor engagement analysis. The suggested RS model also serves as a ready reference for other related problems such as interceptor effectiveness evaluation and platform disposition.
Theory of Regions for Control Synthesis without Computing Reachability Graph
Sadok Rezig
2017-03-01
This paper addresses the design of a Petri net (PN) supervisor using the theory of regions for the forbidden state problem with a set of general mutual exclusion constraints. As with any method of supervisory control based on the reachability graph, the theory of regions suffers from a technical obstacle in control synthesis: the necessity of computing the graph at each iteration step. Moreover, because the reachability graph may contain a large number of states relative to the structural size of the system, the computation of PN controllers becomes harder and even impossible. The main contribution of this paper, compared to previous works, is the development of a control synthesis method that significantly decreases the computation cost of the PN supervisor. Based on PN properties and mathematical concepts, the proposed methodology provides an optimal PN supervisor for bounded Petri nets following the interpretation of the theory of regions. Finally, case studies are solved with the CPLEX software to compare the new control policy with previous works that use the theory of regions for control synthesis.
Li, Jun; Lu, Dawei; Luo, Zhihuang; Laflamme, Raymond; Peng, Xinhua; Du, Jiangfeng
2016-07-01
Precisely characterizing and controlling realistic quantum systems under noise is a challenging frontier in quantum sciences and technologies. In developing reliable controls for open quantum systems, one is often confronted with the lack of knowledge of the system's controllability. The purpose of this paper is to give a numerical approach to this problem, that is, to approximately compute the reachable set of states for coherently controlled quantum Markovian systems. The approximation consists of setting both upper and lower bounds for the system's reachable region of states. Furthermore, we apply our reachability analysis to the control of the relaxation dynamics of a two-qubit nuclear magnetic resonance spin system. We implement some experimental tasks of quantum state engineering in this open system at near-optimal performance in terms of purity: e.g., increasing polarization and preparing pseudopure states. These results demonstrate the usefulness of our theory and show interesting and promising applications of environment-assisted quantum dynamics.
Computations with reachable elements in simple Lie algebras
de Graaf, Willem
2010-01-01
We report on some computations with reachable elements in simple Lie algebras of exceptional type within the SLA package of GAP4. These computations confirm the classification of such elements by Elashvili and Grelaud. Second, they answer a question of Panyushev. Third, they show in what way a recent result of Yakimova for the Lie algebras of classical type extends to the exceptional types.
Reachability computation for hybrid systems with Ariadne
L. Benvenuti; D. Bresolin; A. Casagrande; P.J. Collins (Pieter); A. Ferrari; E. Mazzi; T. Villa; A. Sangiovanni-Vincentelli
2008-01-01
Ariadne is an in-progress open environment for designing algorithms for computing with hybrid automata, which relies on a rigorous computable analysis theory to represent geometric objects in order to achieve provable approximation bounds along the computations. In this paper we discuss the
Computing and Visualizing Reachable Volumes for Maneuvering Satellites
Jiang, M; de Vries, W H; Pertica, A J; Olivier, S S
2011-09-11
Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust vectors, thrust magnitudes and times of burn. At any given instant, the distribution of the 'point cloud' of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hull and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we present some results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
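As a toy illustration of this Monte Carlo scheme (a deliberately simplified sketch: a drag- and gravity-free one-dimensional drifting object, not the orbital dynamics used in the paper; all names are illustrative assumptions), one can randomize the burn time and delta-v, propagate each virtual particle to the horizon, and treat the resulting point cloud as the RV sample:

```python
import random

def sample_reachable_points(v_max, horizon, n_samples, seed=0):
    """Monte Carlo sample of reachable positions for a toy 1-D coasting
    object: a single impulsive burn of magnitude <= v_max is applied at a
    random time, and the object drifts until the horizon."""
    rng = random.Random(seed)
    points = []
    for _ in range(n_samples):
        burn_time = rng.uniform(0.0, horizon)  # time of the impulsive burn
        dv = rng.uniform(-v_max, v_max)        # signed delta-v (1-D "thrust vector")
        # position at the horizon: drift at dv for the remaining time
        points.append(dv * (horizon - burn_time))
    return points

cloud = sample_reachable_points(v_max=1.0, horizon=10.0, n_samples=20000)
# every sample must lie inside the analytic reachable interval [-v_max*T, v_max*T]
assert all(-10.0 <= x <= 10.0 for x in cloud)
```

In this 1-D toy the RV is just an interval; the paper's contribution is handling the multi-reentrant manifolds that arise when the same idea is applied to real orbital dynamics.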
Elise Cormie-Bowins
2012-10-01
We consider the problem of computing reachability probabilities: given a Markov chain, an initial state of the Markov chain, and a set of goal states of the Markov chain, what is the probability of reaching any of the goal states from the initial state? This problem can be reduced to solving a linear equation Ax = b for x, where A is a matrix and b is a vector. We consider two iterative methods to solve the linear equation: the Jacobi method and the biconjugate gradient stabilized (BiCGStab) method. For both methods, a sequential and a parallel version have been implemented. The parallel versions have been implemented on the compute unified device architecture (CUDA) so that they can be run on an NVIDIA graphics processing unit (GPU). From our experiments we conclude that as the size of the matrix increases, the CUDA implementations outperform the sequential implementations. Furthermore, the BiCGStab method performs better than the Jacobi method for dense matrices, whereas the Jacobi method does better for sparse ones. Since the reachability probabilities problem plays a key role in probabilistic model checking, we also compared the implementations for matrices obtained from a probabilistic model checker. Our experiments support the conjecture by Bosnacki et al. that the Jacobi method is superior to Krylov subspace methods, a class to which the BiCGStab method belongs, for probabilistic model checking.
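The reduction to Ax = b and the Jacobi iteration can be sketched on a tiny Markov chain (the chain and all identifiers below are illustrative assumptions, not the benchmark models of the paper): the reachability probabilities x satisfy x_s = 1 for goal states, x_s = 0 for states that cannot reach the goal, and x_s = Σ_t P(s,t)·x_t otherwise, and the Jacobi method iterates that fixed point.

```python
# Tiny 4-state chain: 0 -> 1 (0.5), 0 -> 3 (0.5); 1 -> 2 (1.0);
# state 2 is the goal, state 3 is an absorbing failure state.
P = [
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]
goal, fail = {2}, {3}

def jacobi_reach(P, goal, fail, iters=100):
    n = len(P)
    x = [1.0 if s in goal else 0.0 for s in range(n)]
    for _ in range(iters):
        # Jacobi step: recompute every entry from the previous iterate
        x = [x[s] if s in goal or s in fail
             else sum(P[s][t] * x[t] for t in range(n))
             for s in range(n)]
    return x

x = jacobi_reach(P, goal, fail)
# probability of reaching the goal from state 0 is 0.5 * 1.0 = 0.5
assert abs(x[0] - 0.5) < 1e-9
```

The sparse-versus-dense trade-off studied in the paper concerns exactly this inner loop: for sparse P, each Jacobi sweep touches only the nonzero entries.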
An Algorithm to Construct Concurrent Reachability Graph of Petri Nets
张金泉; 倪丽娜; 蒋昌俊
2004-01-01
Reachability graph is a very important tool for analyzing the dynamic properties of Petri nets, but the concurrent relation of transitions in Petri nets cannot be represented by the reachability graph. A Petri net is a concurrent system, while its reachability graph is a serial one. However, concurrency is a property that is not only very significant but also difficult to analyze and control. This paper presents the concepts of concurrent reachable marking and concurrent reachability graph in order to represent and analyze concurrent systems. An algorithm for constructing the concurrent reachable marking set and the concurrent reachability graph is also given, so that response problems among services in a network computing environment can be studied and the throughput of the system analyzed. The Dining Philosophers Problem, a classic problem of managing concurrent resources, is given as an example to illustrate the significance of the concurrent reachability graph.
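For contrast with the concurrent variant proposed in the paper, the ordinary (sequential) reachability graph construction it builds on can be sketched as a breadth-first search over markings (the toy net and all names below are illustrative assumptions):

```python
from collections import deque

# Petri net as pre/post incidence lists: transition t consumes pre[t][p]
# tokens from place p and produces post[t][p]. This toy net is a two-place
# cycle: t0 moves a token p0 -> p1, t1 moves it back.
pre = [(1, 0), (0, 1)]
post = [(0, 1), (1, 0)]

def reachability_graph(pre, post, m0):
    """Breadth-first construction of the (sequential) reachability graph."""
    seen, edges = {m0}, []
    queue = deque([m0])
    while queue:
        m = queue.popleft()
        for t, (p_in, p_out) in enumerate(zip(pre, post)):
            if all(m[p] >= p_in[p] for p in range(len(m))):  # t is enabled
                m2 = tuple(m[p] - p_in[p] + p_out[p] for p in range(len(m)))
                edges.append((m, t, m2))
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen, edges

markings, edges = reachability_graph(pre, post, (1, 0))
assert markings == {(1, 0), (0, 1)}
```

The concurrent reachability graph of the paper would additionally fire *sets* of simultaneously enabled, conflict-free transitions, which this sequential sketch deliberately omits.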
Hybrid system for computing reachable workspaces for redundant manipulators
Alameldin, Tarek K.; Sobh, Tarek M.
1991-03-01
An efficient computation of 3D workspaces for redundant manipulators is based on a "hybrid" algorithm combining direct kinematics and screw theory. Direct kinematics has low computational cost but needs edge detection algorithms when workspace boundaries are required. Screw theory has exponential computational cost per workspace point but does not need edge detection. Screw theory allows computing workspace points in prespecified directions, while direct kinematics does not. Applications of the algorithm are discussed.
Sampling-based motion planning with reachable volumes: Theoretical foundations
McMahon, Troy
2014-05-01
© 2014 IEEE. We introduce a new concept, reachable volumes, which denotes the set of points that the end effector of a chain or linkage can reach. We show that the reachable volume of a chain is equivalent to the Minkowski sum of the reachable volumes of its links, and give an efficient method for computing reachable volumes. We present a method for generating configurations using reachable volumes that is applicable to various types of robots, including open and closed chain robots, tree-like robots, and complex robots with both loops and branches. We also describe how to apply constraints (both on end effectors and internal joints) using reachable volumes. Unlike previous methods, reachable volumes work for spherical and prismatic joints as well as planar joints. Visualizations of reachable volumes can allow an operator to see what positions the robot can reach and can guide robot design. We present visualizations of reachable volumes for representative robots, including closed chains and graspers, as well as for examples with joint and end effector constraints.
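A minimal sketch of the Minkowski-sum idea, assuming the textbook special case of a planar chain with unrestricted revolute joints (the paper's method is far more general, covering spherical and prismatic joints; the function name is an assumption): the Minkowski sum of circles of the link lengths collapses to an annulus.

```python
def chain_reachable_annulus(link_lengths):
    """Reachable set of a planar chain with unrestricted revolute joints:
    the Minkowski sum of circles of radii l_i is an annulus with outer
    radius sum(l_i) and inner radius max(0, 2*max(l_i) - sum(l_i))."""
    outer = sum(link_lengths)
    inner = max(0.0, 2 * max(link_lengths) - outer)
    return inner, outer

# two equal links can fold back onto the base: the annulus fills the disk
assert chain_reachable_annulus([1.0, 1.0]) == (0.0, 2.0)
# a dominant first link leaves a hole of radius 3 - 1 = 2 around the base
assert chain_reachable_annulus([3.0, 1.0]) == (2.0, 4.0)
```

Sampling a constraint-satisfying end effector position then amounts to drawing a point from the annulus, which is the 2-D analogue of the reachable volume sampling described above.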
Arnaud Gotlieb
2013-02-01
Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging, as it requires dealing with an infinite number of states using standard backward or forward exploration strategies. An approach that we call Constraint-based reachability is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, and array and memory manipulations with the fundamental notion of a constraint over a computational domain. By combining constraint filtering and abstraction techniques, Constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of classical filtering consistencies used in Constraint Programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
Planning with Reachable Distances
Tang, Xinyu
2009-01-01
Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the robot's number of degrees of freedom. In addition to supporting efficient sampling, we show that the RD-space formulation naturally supports planning, and in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1000 links in time comparable to open chain sampling, and we can generate samples for 1000-link multi-loop systems of varying topology in less than a second. © 2009 Springer-Verlag.
Baier, Christel; Hermanns, H.; Katoen, Joost P.; Haverkort, Boudewijn R.H.M.
2005-01-01
A continuous-time Markov decision process (CTMDP) is a generalization of a continuous-time Markov chain in which both probabilistic and nondeterministic choices co-exist. This paper presents an efficient algorithm to compute the maximum (or minimum) probability to reach a set of goal states within a
Constructive Sets in Computable Sets
傅育熙
1997-01-01
The original interpretation of the constructive set theory CZF in Martin-Löf's type theory uses the 'extensional identity types'. It is generally believed that these 'types' do not belong to type theory. In this paper it will be shown that the interpretation goes through without identity types. This paper will also show that the interpretation can be given in an intensional type theory. This reflects the computational nature of the interpretation. This computational aspect is reinforced by an ω-Set model of CZF.
McMahon, Troy
2015-05-01
© 2015 IEEE. Reachable volumes are a new technique that allows one to efficiently restrict sampling to feasible/reachable regions of the planning space even for high degree of freedom and highly constrained problems. However, they have so far only been applied to graph-based sampling-based planners. In this paper we develop the methodology to apply reachable volumes to tree-based planners such as Rapidly-Exploring Random Trees (RRTs). In particular, we propose a reachable volume RRT called RVRRT that can solve high degree of freedom problems and problems with constraints. To do so, we develop a reachable volume stepping function, a reachable volume expand function, and a distance metric based on these operations. We also present a reachable volume local planner to ensure that local paths satisfy constraints for methods such as PRMs. We show experimentally that RVRRTs can solve constrained problems with as many as 64 degrees of freedom and unconstrained problems with as many as 134 degrees of freedom. RVRRTs can solve problems more efficiently than existing methods, requiring fewer nodes and collision detection calls. We also show that it is capable of solving difficult problems that existing methods cannot.
Probabilistic Reachability for Parametric Markov Models
Hahn, Ernst Moritz; Hermanns, Holger; Zhang, Lijun
2011-01-01
Given a parametric Markov model, we consider the problem of computing the rational function expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested to first convert the Markov chain into a finite automaton, from which a regular expression is computed. Afterwards, this expression is evaluated to a closed-form function representing the reachability probability. This paper investigates how this idea can be turned into an effective procedure. It turns out that the bottleneck lies in the growth of the regular expression relative to the number of states (n^{Θ(log n)}). We therefore proceed differently, by tightly intertwining the regular expression computation with its evaluation. This allows us to arrive at an effective method that avoids this blow-up in most practical cases. We give a detailed account of the approach, also extending to parametric...
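The rational function can be cross-checked numerically with exact rational arithmetic: instead of the paper's regular-expression machinery, the sketch below (a toy two-transient-state chain with parameters p and q; all names are illustrative assumptions) solves (I − A)x = b over Fractions for sample parameter valuations and compares against the hand-derived closed form p·q / (1 − p·(1 − q)).

```python
from fractions import Fraction

def eliminate(A, b):
    """Solve (I - A) x = b exactly by Gaussian elimination over Fractions.
    A is the transient-to-transient transition matrix, b the one-step
    probabilities of moving from a transient state into the goal set."""
    n = len(A)
    M = [[Fraction(int(i == j)) - Fraction(A[i][j]) for j in range(n)]
         + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [vr - M[r][col] * vc for vr, vc in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# toy chain: s0 -(p)-> s1, s0 -(1-p)-> fail, s1 -(q)-> goal, s1 -(1-q)-> s0
for p, q in [(Fraction(1, 2), Fraction(1, 2)), (Fraction(1, 3), Fraction(3, 4))]:
    x = eliminate([[0, p], [1 - q, 0]], [0, q])
    # hand-derived closed-form rational function p*q / (1 - p*(1-q))
    assert x[0] == p * q / (1 - p * (1 - q))
```

Performing the same elimination symbolically (on rational functions in p and q rather than Fractions) is exactly where the growth the paper fights would appear.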
Distributed Algorithms for Time Optimal Reachability Analysis
Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand
2016-01-01
Time optimal reachability analysis is a novel model-based technique for solving scheduling and planning problems. After modeling them as reachability problems using timed automata, a real-time model checker can compute the fastest trace to the goal states, which constitutes a time optimal schedule. We propose distributed computing to accelerate time optimal reachability analysis. We develop five distributed state exploration algorithms and implement them in Uppaal, enabling it to exploit the compute resources of a dedicated model-checking cluster. We experimentally evaluate the implemented algorithms with four models in terms of their ability to compute near- or proven-optimal solutions, their scalability, time and memory consumption, and communication overhead. Our results show that the distributed algorithms work much faster than sequential algorithms and have good speedup in general.
Safe landing area determination for a Moon lander by reachability analysis
Arslantaş, Yunus Emre; Oehlschlägel, Thimo; Sagliano, Marco
2016-11-01
In the last decades, developments in space technology paved the way to more challenging missions like asteroid mining, space tourism and human expansion into the Solar System. These missions involve difficult tasks such as guidance schemes for re-entry, landing on celestial bodies and implementation of large-angle maneuvers for spacecraft. There is a need for a safety system to increase the robustness and success of these missions. Reachability analysis meets this requirement by obtaining the set of all achievable states for a dynamical system starting from an initial condition with given admissible control inputs. This paper proposes an algorithm for the approximation of nonconvex reachable sets (RS) by using optimal control. To this end, a subset of the state space is discretized by equidistant points, and for each grid point a distance function is defined. This distance function acts as the objective function of a related optimal control problem (OCP). Each infinite-dimensional OCP is transcribed into a finite-dimensional Nonlinear Programming Problem (NLP) by using Pseudospectral Methods (PSM). Finally, the NLPs are solved using available tools, resulting in approximated reachable sets with information about the states of the dynamical system at these grid points. The algorithm is applied to a generic Moon landing mission. The proposed method computes approximated reachable sets and the attainable safe landing region with information about propellant consumption and time.
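A much cruder sampling stand-in for the grid-plus-distance-function idea (not the PSM/OCP approach of the paper; the system and all names are illustrative assumptions) can be shown on the single integrator dx/dt = u, |u| ≤ 1, whose reachable set from the origin over [0, T] is exactly [−T, T]: sample controls, record terminal states, and mark each grid point whose distance function to the sampled states is small.

```python
import random

def approx_reachable_grid(horizon=1.0, n_grid=21, n_samples=4000, tol=0.05, seed=0):
    """Crude grid approximation of the reachable set of dx/dt = u, |u| <= 1:
    sample constant controls, record the terminal state, and mark each grid
    point whose distance to some sampled state is below tol."""
    rng = random.Random(seed)
    endpoints = [rng.uniform(-1.0, 1.0) * horizon for _ in range(n_samples)]
    grid = [-horizon + 2 * horizon * i / (n_grid - 1) for i in range(n_grid)]
    # distance-function test at each grid point (min over sampled states)
    return {g for g in grid if min(abs(g - x) for x in endpoints) <= tol}

reached = approx_reachable_grid()
# the true reachable set is [-1, 1], so every grid point should be marked
assert len(reached) == 21
```

In the paper, the minimum of the distance function at each grid point is found by solving an OCP rather than by sampling, which is what makes the nonconvex case tractable.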
Bidirectional reachability-based modules
Nortje, R
2011-07-01
The authors introduce an algorithm for MinA extraction in EL based on bidirectional reachability. They obtain a significant reduction in the size of modules extracted, at almost no additional cost compared to extracting standard reachability...
Reachability analysis of a class of Petri nets using place invariants and siphons
Zhi Wu Li
2013-07-01
This paper proposes a novel and computationally efficient approach to the reachability problem using place invariants and strict minimal siphons for a class of Petri nets called pipe-line nets (PLNs). First, in a PLN with an appropriate initial marking, the set of invariant markings and the set of strict minimal siphons are enumerated. Then a sufficient and necessary condition is developed to decide whether a marking is spurious, by analysing the number of tokens in the operation places of any strict minimal siphon and their bounds. Furthermore, an algorithm that generates the reachable markings by removing all the spurious markings from the set of invariant markings is proposed. Finally, experimental results show the efficiency of the proposed method.
Predicatively computable functions on sets
Arai, Toshiyasu
2012-01-01
Inspired by a joint work of A. Beckmann, S. Buss and S. Friedman, we propose a class of set-theoretic functions, the predicatively computable functions. Each function in this class is polynomial time computable when restricted to finite binary strings. Moreover, a fragment of set theory is given in which the Σ1-definable functions are exactly the functions in this class.
Language-Constraint Reachability Learning in Probabilistic Graphs
Taranto, Claudio; Esposito, Floriana
2012-01-01
The probabilistic graphs framework models the uncertainty inherent in real-world domains by means of probabilistic edges whose value quantifies the likelihood of the edge's existence or the strength of the link it represents. The goal of this paper is to provide a learning method to compute the most likely relationship between two nodes in a framework based on probabilistic graphs. In particular, given a probabilistic graph, we adopt the language-constraint reachability method to compute the probability of possible interconnections that may exist between two nodes. Each of these connections may be viewed as a feature, or a factor, between the two nodes, with the corresponding probability as its weight. Each observed link is considered a positive instance for its corresponding link label. Given the training set of observed links, an L2-regularized logistic regression is adopted to learn a model able to predict unobserved link labels. The experiments on a real-world collaborative filtering problem proved tha...
Sparse Dataflow Analysis with Pointers and Reachability
Madsen, Magnus; Møller, Anders
2014-01-01
for a sparse analysis framework that supports pointers and reachability. We present such a framework, which uses static single assignment form for heap addresses and computes def-use information on-the-fly. We also show that essential information about dominating definitions can be maintained efficiently using quadtrees. The framework is presented as a systematic modification of a traditional dataflow analysis algorithm. Our experimental results demonstrate the effectiveness of the technique for a suite of JavaScript programs. By also comparing the performance with an idealized staged approach that computes...
Iterable Forward Reachability Analysis of Monitor-DPNs
Benedikt Nordhoff
2013-09-01
There is a close connection between data-flow analysis and model checking, as observed and studied in the nineties by Steffen and Schmidt. This indicates that automata-based analysis techniques developed in the realm of infinite-state model checking can be applied as data-flow analyzers that interpret complex control structures, which motivates the development of such analysis techniques for ever more complex models. One approach, proposed by Esparza and Knoop, is based on computation of predecessor or successor sets for sets of automata configurations. Our goal is to adapt and exploit this approach for the analysis of multi-threaded Java programs. Specifically, we consider the model of Monitor-DPNs for concurrent programs. Monitor-DPNs precisely model unbounded recursion, dynamic thread creation, and synchronization via well-nested locks with finite abstractions of procedure- and thread-local state. Previous work on this model showed how to compute regular predecessor sets of regular configurations and tree-regular successor sets of a fixed initial configuration. By combining and extending different previously developed techniques, we show how to compute tree-regular successor sets of tree-regular sets. Thereby we obtain an iterable, lock-sensitive forward reachability analysis. We implemented the analysis for Java programs and applied it to information flow control and data race detection.
Reachability Analysis of Probabilistic Systems
D'Argenio, P. R.; Jeanett, B.; Jensen, Henrik Ejersbo
2001-01-01
We report on new strategies for model checking quantitative reachability properties of Markov decision processes by successive refinements. In our approach, properties are analyzed on abstractions rather than directly on the given model. Such abstractions are expected to be significantly smaller than the original model, and may safely refute or accept the required property. Otherwise, the abstraction is refined and the process repeated. As the numerical analysis involved in settling the validity of the property is more costly than the refinement process, the method profits from applying...
Time Optimal Reachability Analysis Using Swarm Verification
Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand
2016-01-01
Time optimal reachability analysis employs model-checking to compute goal states that can be reached from an initial state with a minimal accumulated time duration. The model-checker may produce a corresponding diagnostic trace which can be interpreted as a feasible schedule for many scheduling and planning problems ... search strategies. We develop four swarm algorithms and evaluate them with four models in terms of scalability, and time- and memory consumption. Three of these cooperate by exchanging costs of intermediate solutions to prune the search using a branch-and-bound approach. Our results show that swarm algorithms work much faster than sequential algorithms, and especially two using combinations of random-depth-first and breadth-first show very promising performance.
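The cost-pruning idea the swarm workers share can be sketched on an explicit weighted graph, assuming each edge weight is a duration: any branch whose accumulated time already matches the incumbent solution is cut off. This is only a single-threaded illustration of the branch-and-bound principle, not the swarm algorithms themselves.

```python
import heapq

def time_optimal(edges, start, goal):
    """Best-first search for the minimal accumulated duration from
    `start` to `goal`; prunes branches that cannot beat the incumbent."""
    best = float("inf")
    heap = [(0, start)]
    seen = {}
    while heap:
        t, s = heapq.heappop(heap)
        if t >= best:              # bound: cannot improve the incumbent
            continue
        if s == goal:
            best = t               # new incumbent solution
            continue
        if s in seen and seen[s] <= t:
            continue
        seen[s] = t
        for nxt, d in edges.get(s, []):
            heapq.heappush(heap, (t + d, nxt))
    return best

E = {"i": [("m", 2), ("g", 10)], "m": [("g", 3)]}
```

On this invented graph, the direct edge of duration 10 is pruned once the route through `"m"` establishes an incumbent of 5.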
McMahon, Troy
2014-09-01
© 2014 IEEE. Reachable volumes are a geometric representation of the regions the joints of a robot can reach. They can be used to generate constraint-satisfying samples for problems including complicated linkage robots (e.g. closed chains and graspers). They can also be used to assist robot operators and to help in robot design. We show that reachable volumes have an O(1) complexity in unconstrained problems as well as in many constrained problems. We also show that reachable volumes can be computed in linear time and that reachable volume samples can be generated in linear time in problems without constraints. We experimentally validate reachable volume sampling, both with and without constraints on end effectors and/or internal joints. We show that reachable volume samples are less likely to be invalid due to self-collisions, making reachable volume sampling significantly more efficient for higher dimensional problems. We also show that these samples are easier to connect than others, resulting in better connected roadmaps. We demonstrate that our method can be applied to 262-dof, multi-loop, and tree-like linkages including combinations of planar, prismatic and spherical joints. In contrast, existing methods either cannot be used for these problems or do not produce good quality solutions.
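For the simplest constrained case, a planar two-link arm with free revolute joints, the reachable region has a closed form: an annulus determined by the link lengths. This is only a toy instance of the reachable-volume idea, not the paper's general construction.

```python
def reachable_annulus(l1, l2):
    """Reachable region of a planar 2-link arm: an annulus centred at
    the base, inner radius |l1 - l2|, outer radius l1 + l2."""
    return abs(l1 - l2), l1 + l2

def end_effector_reachable(l1, l2, x, y):
    """Membership test: (x, y) is reachable iff its distance to the
    base falls inside the annulus."""
    r = (x * x + y * y) ** 0.5
    inner, outer = reachable_annulus(l1, l2)
    return inner <= r <= outer
```

With links of length 2 and 1, points at distance 2 from the base are reachable while points at distance 4 are not.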
Edwards, Alistair
2006-01-01
This book is aimed at students who are thinking of studying Computer Science or a related topic at university. Part One is a brief introduction to the topics that make up Computer Science, some of which you would expect to find as course modules in a Computer Science programme. These descriptions should help you to tell the difference between Computer Science as taught in different departments and so help you to choose a course that best suits you. Part Two builds on what you have learned about the nature of Computer Science by giving you guidance in choosing universities and making your appli
Stochastic Reachability Analysis of Hybrid Systems
Bujorianu, Luminita Manuela
2012-01-01
Stochastic reachability analysis (SRA) is a method of analyzing the behavior of control systems which mix discrete and continuous dynamics. For probabilistic discrete systems it has been shown to be a practical verification method but for stochastic hybrid systems it can be rather more. As a verification technique SRA can assess the safety and performance of, for example, autonomous systems, robot and aircraft path planning and multi-agent coordination but it can also be used for the adaptive control of such systems. Stochastic Reachability Analysis of Hybrid Systems is a self-contained and accessible introduction to this novel topic in the analysis and development of stochastic hybrid systems. Beginning with the relevant aspects of Markov models and introducing stochastic hybrid systems, the book then moves on to coverage of reachability analysis for stochastic hybrid systems. Following this build up, the core of the text first formally defines the concept of reachability in the stochastic framework and then...
Reachability modules for the description logic SRIQ
Nortje, R
2013-12-01
In this paper we investigate module extraction for the Description Logic SRIQ. We formulate modules in terms of the reachability problem for directed hypergraphs. Using inseparability relations, we investigate the module-theoretic properties...
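Reachability in directed hypergraphs, which the modules above are formulated in terms of, differs from ordinary graph reachability in that a hyperedge fires only once its entire tail set is reached. A small forward-chaining sketch, with an invented hypergraph:

```python
def hyper_reachable(hyperedges, sources):
    """Forward reachability in a directed hypergraph: a hyperedge
    (tail_set, head) fires only when *every* tail node is reached."""
    reached = set(sources)
    changed = True
    while changed:
        changed = False
        for tail, head in hyperedges:
            if head not in reached and set(tail) <= reached:
                reached.add(head)
                changed = True
    return reached

# "C" needs both "A" and "B"; "E" is in a disconnected component.
H = [(("A",), "B"), (("A", "B"), "C"), (("D",), "E")]
```

From `{"A"}` the edge to `"B"` fires first, which then enables the two-tailed edge to `"C"`.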
RAPID: A Reachable Anytime Planner for Imprecisely-sensed Domains
Brunskill, Emma
2012-01-01
Despite the intractability of generic optimal partially observable Markov decision process planning, there exist important problems that have highly structured models. Previous researchers have used this insight to construct more efficient algorithms for factored domains, and for domains with topological structure in the flat state dynamics model. In our work, motivated by findings from the education community relevant to automated tutoring, we consider problems that exhibit a form of topological structure in the factored dynamics model. Our Reachable Anytime Planner for Imprecisely-sensed Domains (RAPID) leverages this structure to efficiently compute a good initial envelope of reachable states under the optimal MDP policy in time linear in the number of state variables. RAPID performs partially-observable planning over the limited envelope of states, and slowly expands the state space considered as time allows. RAPID performs well on a large tutoring-inspired problem simulation with 122 state variables, cor...
Stochastic observability, reconstructibility, controllability, and reachability
Liu, Andrew R.
2011-01-01
This thesis formulates versions of observability, reconstructibility, controllability, and reachability for stochastic linear and nonlinear systems. The concepts of observability and reconstructibility concern whether the measurements of a system suffice to construct a complete characterization of the system behavior while the concepts of controllability and reachability concern whether the actuation of the system suffices to cause the system to behave according to various user specifications...
Reachability problems for communicating finite state machines
Pachl, Jan
2012-01-01
The paper deals with the verification of reachability properties in a commonly used state transition model of communication protocols, which consists of finite state machines connected by potentially unbounded FIFO channels. Although simple reachability problems are undecidable for general protocols with unbounded channels, they are decidable for the protocols with the recognizable channel property. The decidability question is open for the protocols with the rational channel property.
Efficient Reachability Query Evaluation in Large Spatiotemporal Contact Datasets
Shirani-Mehr, Houtan; Shahabi, Cyrus
2012-01-01
With the advent of reliable positioning technologies and prevalence of location-based services, it is now feasible to accurately study the propagation of items such as infectious viruses, sensitive information pieces, and malwares through a population of moving objects, e.g., individuals, mobile devices, and vehicles. In such application scenarios, an item passes between two objects when the objects are sufficiently close (i.e., when they are, so-called, in contact), and hence once an item is initiated, it can penetrate the object population through the evolving network of contacts among objects, termed contact network. In this paper, for the first time we define and study reachability queries in large (i.e., disk-resident) contact datasets which record the movement of a (potentially large) set of objects moving in a spatial environment over an extended time period. A reachability query verifies whether two objects are "reachable" through the evolving contact network represented by such contact datasets. We p...
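The key property of reachability over a contact network can be sketched directly: an item travels only along contacts whose times are non-decreasing, so reachability is order-sensitive. A minimal in-memory illustration (the paper targets large disk-resident datasets); the contact list is invented.

```python
def reachable_via_contacts(contacts, src, dst):
    """Reachability through an evolving contact network: the item can
    pass from `src` to `dst` only along contacts with non-decreasing
    times.  `contacts` is a list of (time, object_a, object_b) and each
    contact is symmetric."""
    infected = {src: 0}   # object -> earliest time it can hold the item
    for t, a, b in sorted(contacts):
        if a in infected and infected[a] <= t:
            infected.setdefault(b, t)
        if b in infected and infected[b] <= t:
            infected.setdefault(a, t)
    return dst in infected

# "v" meets "w" at time 1, but "u" only meets "v" at time 3.
C = [(3, "u", "v"), (1, "v", "w")]
```

Here `"u"` cannot reach `"w"`: by the time the item reaches `"v"` (time 3), the only v-w contact (time 1) has already passed.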
Computational Topology for Regular Closed Sets
The I-TANGO Project; Peters, T. J.; Bisceglio, J.; Ferguson, D. R.; Hoffmann, C. M.; Maekawa, T.; Patrikalakis, N. M.; Sakkalis, T.; Stewart, N. F.
2004-01-01
The Boolean algebra of regular closed sets is prominent in topology, particularly as a dual for the Stone-Cech compactification. This algebra is also central for the theory of geometric computation, as a representation for combinatorial operations on geometric sets. However, the issue of computational approximation introduces unresolved subtleties that do not occur within "pure" topology. One major effort towards reconciling this mathematical theory with computational practice is our ongoing ...
Robust Reachability of Boolean Control Networks.
Li, Fangfei; Tang, Yang
2016-04-20
Boolean networks serve as a powerful tool in the analysis of genetic regulatory networks, since they emphasize the fundamental principles and establish a natural framework for capturing the dynamics of the regulation of cellular states. In this paper, the robust reachability of Boolean control networks is investigated by means of the semi-tensor product. Necessary and sufficient conditions for the robust reachability of Boolean control networks are provided, in which control inputs depending on disturbances or not are considered, respectively. Besides, the corresponding control algorithms are developed for these two cases. A reduced model of the lac operon in Escherichia coli is presented to show the effectiveness of the presented results.
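Setting the semi-tensor product machinery aside, reachability of a Boolean control network can be illustrated by explicit search over the finite state space: a target state is reachable if some control sequence drives the initial state to it. The two-gene network below is invented purely for the example.

```python
from itertools import product

def bcn_reachable(step, n_state, n_ctrl, x0, target, max_steps=None):
    """Breadth-first reachability for a Boolean control network.
    `step(x, u)` maps a state tuple and a control tuple to the next
    state.  Returns True iff some control sequence reaches `target`."""
    if max_steps is None:
        max_steps = 2 ** n_state          # the state space is finite
    controls = list(product((0, 1), repeat=n_ctrl))
    frontier, seen = {x0}, {x0}
    for _ in range(max_steps):
        frontier = {step(x, u) for x in frontier for u in controls} - seen
        if target in frontier:
            return True
        seen |= frontier
    return target in seen

# Invented toy network: x1' = u AND x2,  x2' = NOT x1.
def step(x, u):
    return (u[0] & x[1], 1 - x[0])
```

From (0, 0) the controller can first let x2 switch on and then use u = 1 to drive the network to (1, 1).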
On the Parallel Computation of Characteristic Set
Wu, Yongwei; Yang, Guangwen; Lin, Dongdai; Huang, Qifeng; Zheng, Weimin
2004-01-01
As one of the efficient approaches for computing various zero decompositions of any set of multivariable polynomials, characteristic set (CS) computation is very compute-intensive. In this paper, our purpose is to build an effective parallel computation model for CS. We first present the parallel algorithm for CS computation. Then the term polynomial complexity grade (PCG) is defined, and our load-balance strategy is put forward based on it. On the analysis of two basic transmission methods, we construct an efficient hybrid method for the data communication of parallel CS computation. Finally, experiments and timing data demonstrate the stability and high performance of our parallel computation model for the CS method.
Reachability via Compositionality in Petri nets
Sobocinski, Paweł; Stephens, Owen
2013-01-01
We introduce a novel technique for checking reachability in Petri nets that relies on a recently introduced compositional algebra of nets. We prove that the technique is correct, and discuss our implementation. We report promising experimental results on some well-known examples.
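For contrast with the compositional technique, the baseline it improves on is explicit marking-space search. A minimal sketch for small bounded nets, with an invented one-transition net; markings are dicts from place to token count.

```python
def pn_reachable(transitions, m0, target, limit=10000):
    """Explicit-state reachability for a Petri net: search over
    markings.  Each transition is a pair (consume, produce) of dicts
    place -> tokens.  Only sound for nets whose reachable marking set
    fits within `limit` states."""
    norm = lambda m: tuple(sorted((p, n) for p, n in m.items() if n > 0))
    goal = norm(target)
    stack, seen = [norm(m0)], set()
    while stack and len(seen) < limit:
        m = stack.pop()
        if m == goal:
            return True
        if m in seen:
            continue
        seen.add(m)
        marking = dict(m)
        for consume, produce in transitions:
            # a transition is enabled if every input place has tokens
            if all(marking.get(p, 0) >= n for p, n in consume.items()):
                nxt = dict(marking)
                for p, n in consume.items():
                    nxt[p] -= n
                for p, n in produce.items():
                    nxt[p] = nxt.get(p, 0) + n
                stack.append(norm(nxt))
    return False

# One transition: consume a token on p, produce two tokens on q.
T = [({"p": 1}, {"q": 2})]
```

Firing the single transition once turns the marking {p: 1} into {q: 2}, and no marking with three tokens on q is ever reachable.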
Reachability Analysis of Sampling Based Planners
Geraerts, R.J.; Overmars, M.H.
2005-01-01
The last decade, sampling based planners like the Probabilistic Roadmap Method have proved to be successful in solving complex motion planning problems. We give a reachability based analysis for these planners which leads to a better understanding of the success of the approach and enhancements of t
Reachability analysis of switched linear discrete singular systems
(author not listed)
2006-01-01
This paper studies the reachability problem of the switched linear discrete singular (SLDS) systems. Under the condition that all subsystems are regular, the reachability of the SLDS systems is characterized based on a peculiar repeatedly introduced switching sequence. The necessary and sufficient conditions are obtained for the reachability of the SLDS systems.
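For the ordinary (non-singular, non-switched) discrete linear system x(k+1) = A x(k) + B u(k), the reachable set is the column span of the controllability matrix [B, AB, ..., A^(n-1)B]; the singular switched case studied above refines this picture with regularity conditions and switching sequences. A sketch of the standard test, using exact rational arithmetic:

```python
from fractions import Fraction

def mat_mul(A, B):
    """Plain dense matrix product over nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M):
    """Rank via Gauss-Jordan elimination with exact fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def reachable_dim(A, B):
    """Dimension of the reachable set of x(k+1) = A x(k) + B u(k):
    the rank of [B, AB, ..., A^(n-1)B]."""
    n = len(A)
    cols, Ak = [row[:] for row in B], B
    for _ in range(n - 1):
        Ak = mat_mul(A, Ak)
        cols = [cols[i] + Ak[i] for i in range(n)]
    return rank(cols)
```

An integrator chain with input on the last state is fully reachable, whereas an identity A with input on one coordinate only reaches a one-dimensional subspace.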
Eternal Domination: Criticality and Reachability
Klostermeyer William F.
2017-02-01
We show that for every minimum eternal dominating set, D, of a graph G and every vertex v ∈ D, there is a sequence of attacks at the vertices of G which can be defended in such a way that an eternal dominating set not containing v is reached. The study of the stronger assertion that such a set can be reached after a single attack is defended leads to the study of graphs which are critical in the sense that deleting any vertex reduces the eternal domination number. Examples of these graphs and tight bounds on connectivity, edge-connectivity and diameter are given. It is also shown that there exist graphs in which deletion of any edge increases the eternal domination number, and graphs in which addition of any edge decreases the eternal domination number.
Boolean computation of optimum hitting sets
Hulme, B.L.; Baca, L.S.; Shiver, A.W.; Worrell, R.B.
1984-04-01
This report presents the results of computational experience in solving weighted hitting set problems by Boolean algebraic methods. The feasible solutions are obtained by Boolean formula manipulations, and the optimum solutions are obtained by comparing the weight sums of the feasible solutions. Both the algebra and the optimization can be accomplished using the SETS language. One application is to physical protection problems. 8 references, 2 tables.
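The optimization step described, comparing weight sums over the feasible solutions, can be sketched as exhaustive search. This is practical only for small universes and is not the SETS-language formula manipulation itself; the instance is invented.

```python
from itertools import combinations

def min_hitting_set(sets, weights):
    """Exhaustive minimum-weight hitting set: the cheapest subset of
    elements that intersects every set in `sets`."""
    universe = sorted(set().union(*sets))
    best, best_w = None, float("inf")
    for k in range(len(universe) + 1):
        for cand in combinations(universe, k):
            # feasibility: every set is hit by some chosen element
            if all(any(e in s for e in cand) for s in sets):
                w = sum(weights[e] for e in cand)
                if w < best_w:
                    best, best_w = set(cand), w
    return best, best_w

S = [{"a", "b"}, {"b", "c"}, {"c", "d"}]
W = {"a": 1, "b": 1, "c": 1, "d": 1}
```

No single element hits all three sets, so the optimum here has weight 2 (for instance {"b", "c"} or {"b", "d"}).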
Does target viewing time influence perceived reachability?
Gabbard, Carl; Ammar, Diala
2007-09-01
This study examined the influence of target viewing time on perceived (estimates of) reachability. Right-handed participants were asked to judge the simulated reachability of midline targets using their dominant limb in viewing conditions of 150 ms, 500 ms, 1 s and 2 s. Responses were compared to actual maximum reach. In reference to percent error, interestingly, the 150 ms condition revealed the least error at peripersonal targets and the most inaccuracy with distal (extrapersonal) targets. This condition was also distinct with a significant overestimation bias -- a common observation in earlier studies. However, with increasing viewing time this bias was reduced. These data provide evidence that 150 ms is effective for estimating reach within one's general peripersonal workspace. However, with judgments distal from that point, more time enhanced accuracy, with 500 ms and 1 s being optimal. Overall results are discussed relative to perceptual effectiveness in programming reaching movements.
Goreac, D
2010-01-01
We aim at characterizing viability, invariance and some reachability properties of controlled piecewise deterministic Markov processes (PDMPs). Using analytical methods from the theory of viscosity solutions, we establish criteria for viability and invariance in terms of the first order normal cone. We also investigate reachability of arbitrary open sets. The method is based on viscosity techniques and duality for some associated linearized problem. The theoretical results are applied to general On/Off systems, Cook's model for haploinsufficiency, and a stochastic model for bacteriophage lambda.
Computational Unified Set Theory and Application
Zhang Jiang; Li Xuewei; He Zhongxiong
2006-01-01
The computational unified set model (CUSM), the latest development of Unified Set theory, is introduced in this paper. The model combines unified set theory, information granules, complex adaptive systems and cognitive science to present a new approach to simulating human cognition, viewed as an evolutionary process of automatic learning from data sets. The information granule, which is the unit of cognition in the CUSM, can be synthesized and created by the basic operators. It can also form a granule network by linking with other granules. By learning from a database, the system can evolve under the pressure of selection. As adaptive results, fuzzy sets, vague sets, rough sets, etc. can emerge spontaneously. The CUSM answers the question of the origin of the uncertainties in the thinking process described by unified set theory: they are due to the emergent properties of a holistic system of multiple cognitive units. The CUSM also creates a dynamic model that can adapt to the environment; as a result, the "closed world" limitation in machine learning may be broken. The paper also discusses applications of the CUSM in rule discovery, problem solving, clustering analysis and data mining. The main features of the model compared with classical approaches to those problems are its adaptability, flexibility and robustness rather than accuracy.
Reachability for Finite-State Process Algebras Using Static Analysis
Skrypnyuk, Nataliya; Nielson, Flemming
2011-01-01
In this work we present an algorithm for solving the reachability problem in finite systems that are modelled with process algebras. Our method uses Static Analysis, in particular, Data Flow Analysis, of the syntax of a process algebraic system with multi-way synchronisation. The results of the Data Flow Analysis are used in order to "cut off" some of the branches in the reachability analysis that are not important for determining whether or not a state is reachable. In this way, it is possible for our reachability algorithm to avoid building large parts of the system altogether and still...
Robust level set method for computer vision
Si, Jia-rui; Li, Xiao-pei; Zhang, Hong-wei
2005-12-01
Level set method provides powerful numerical techniques for analyzing and solving interface evolution problems based on partial differential equations. It is particularly appropriate for image segmentation and other computer vision tasks. However, there exists noise in every image and the noise is the main obstacle to image segmentation. In level set method, the propagation fronts are apt to leak through the gaps at locations of missing or fuzzy boundaries that are caused by noise. The robust level set method proposed in this paper is based on the adaptive Gaussian filter. The fast marching method provides a fast implementation for level set method and the adaptive Gaussian filter can adapt itself to the local characteristics of an image by adjusting its variance. Thus, the different parts of an image can be smoothed in different ways according to the degree of noisiness and the type of edges. Experimental results demonstrate that the adaptive Gaussian filter can greatly reduce the noise without distorting the image and make the level set method more robust and accurate.
Reachability by paths of bounded curvature in a convex polygon
Ahn, Hee-Kap
2012-01-01
Let B be a point robot moving in the plane, whose path is constrained to forward motions with curvature at most 1, and let P be a convex polygon with n vertices. Given a starting configuration (a location and a direction of travel) for B inside P, we characterize the region of all points of P that can be reached by B, and show that it has complexity O(n). We give an O(n2) time algorithm to compute this region. We show that a point is reachable only if it can be reached by a path of type CCSCS, where C denotes a unit circle arc and S denotes a line segment. © 2011 Elsevier B.V.
Reachability cuts for the vehicle routing problem with time windows
Lysgaard, Jens
2004-01-01
This paper introduces a class of cuts, called reachability cuts, for the Vehicle Routing Problem with Time Windows (VRPTW). Reachability cuts are closely related to cuts derived from precedence constraints in the Asymmetric Traveling Salesman Problem with Time Windows and to k-path cuts...
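The reachability information underlying such cuts can be sketched at the arc level: under time windows, customer j can directly follow customer i on a route only if leaving i at its earliest service time still meets j's deadline. The instance below (depot "d" and two customers) is invented.

```python
def reachable_pairs(nodes, travel, tw):
    """Direct-reachability relation under time windows: (i, j) is kept
    only if the earliest service time of i plus the travel time still
    meets the latest service time of j.  tw[i] = (earliest, latest)."""
    return {(i, j) for i in nodes for j in nodes
            if i != j and tw[i][0] + travel[i][j] <= tw[j][1]}

tw = {"d": (0, 100), "1": (0, 10), "2": (50, 60)}
# All pairwise travel times are 5 in this toy instance.
travel = {a: {b: (0 if a == b else 5) for b in tw} for a in tw}
R = reachable_pairs(tw.keys(), travel, tw)
```

Customer "2" can never precede customer "1" here (50 + 5 > 10), which is exactly the kind of precedence information a reachability cut exploits.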
Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems
Tang, Xinyu
2010-01-25
Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).
Periodically-Scheduled Controller Analysis using Hybrid Systems Reachability and Continuization
2015-12-01
...the algorithm is run, and actuator outputs are set. The physical world, on the other hand, evolves continuously. Models of the physical world may be given... An extra clock variable, c, is added to the hybrid automaton that ticks at rate one (ċ = 1). When the clock reaches the period, a transition is... Preliminary Reachability Analysis: although hybrid automata can model real-time scheduled controllers and plants as shown above, an important factor is...
Reachability Analysis of Time Basic Petri Nets: a Time Coverage Approach
Bellettini, Carlo
2011-01-01
We introduce a technique for reachability analysis of Time-Basic (TB) Petri nets, a powerful formalism for real-time systems where time constraints are expressed as intervals, representing possible transition firing times, whose bounds are functions of the marking's time description. The technique consists of building a symbolic reachability graph relying on a sort of time coverage, and overcomes the limitations of the only available analyzer for TB nets, based in turn on a time-bounded inspection of a (possibly infinite) reachability tree. The graph construction algorithm has been automated by a tool-set, briefly described in the paper together with its main functionality and analysis capability. A running example is used throughout the paper to sketch the symbolic graph construction. A use case describing a small real system - that the running example is an excerpt from - has been employed to benchmark the technique and the tool-set. The main outcomes of this test are also presented in the paper. Ongoing work, ...
Oishi, Meeko M.
2006-08-01
This document describes new advances in hybrid reachability techniques accomplished during the course of a one-year Truman Postdoctoral Fellowship. These techniques provide guarantees of safety in complex systems, which is especially important in high-risk, expensive, or safety-critical systems. My work focused on new approaches to two specific problems motivated by real-world issues in complex systems: (1) multi-objective controller synthesis, and (2) control for recovery from error. Regarding the first problem, a novel application of reachability analysis allowed controller synthesis in a single step to achieve (a) safety, (b) stability, and (c) prevent input saturation. By extending the state to include the input parameters, constraints for stability, saturation, and envelope protection are incorporated into a single reachability analysis. Regarding the second problem, a new approach to the problem of recovery provides (a) states from which recovery is possible, and (b) controllers to guide the system during a recovery maneuver from an error state to a safe state in minimal time. Results are computed in both problems on nonlinear models of single longitudinal aircraft dynamics and two-aircraft lateral collision avoidance dynamics.
Minimum-Cost Reachability for Priced Timed Automata
Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas Seidelin
2001-01-01
This paper introduces the model of linearly priced timed automata as an extension of timed automata, with prices on both transitions and locations. For this model we consider the minimum-cost reachability problem: i.e. given a linearly priced timed automaton and a target state, determine the minimum cost of executions from the initial state to the target state. This problem generalizes the minimum-time reachability problem for ordinary timed automata. We prove decidability of this problem by offering an algorithmic solution, which is based on a combination of branch-and-bound techniques and a new notion of priced regions. The latter allows symbolic representation and manipulation of reachable states together with the cost of reaching them.
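On a finite priced graph, which is what the priced-region construction ultimately reduces exploration to, minimum-cost reachability is ordinary shortest-path search. A Dijkstra sketch with an invented graph (the paper's contribution is the symbolic treatment of the continuous clock part, not this discrete core):

```python
import heapq

def min_cost(edges, start, target):
    """Minimum accumulated price to reach `target` in a finite priced
    graph, via Dijkstra's algorithm."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        c, s = heapq.heappop(heap)
        if s == target:
            return c
        if c > dist.get(s, float("inf")):
            continue                      # stale queue entry
        for t, price in edges.get(s, []):
            nc = c + price
            if nc < dist.get(t, float("inf")):
                dist[t] = nc
                heapq.heappush(heap, (nc, t))
    return float("inf")

G = {"s": [("a", 1), ("t", 5)], "a": [("t", 1)]}
```

The cheap two-edge route (price 2) beats the direct edge of price 5.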
A Parametric Modelling Method for Dexterous Finger Reachable Workspaces
Wenzhen Yang
2016-03-01
The well-known algorithms, such as the graphic, analytical or numerical methods, have some defects when modelling the dexterous-finger workspace, which is a significant kinematical feature of dexterous hands and valuable for grasp planning, motion control and mechanical design. A novel modelling method with convenient and parametric performance is introduced to generate the dexterous-finger reachable workspace. This method constructs the geometric topology of the dexterous-finger reachable workspace, and uses a joint feature recognition algorithm to extract the kinematical parameters of the dexterous finger. Compared with graphic, analytical and numerical methods, this parametric modelling method can automatically and conveniently construct more vivid forms and contours of the dexterous-finger reachable workspace. The main contribution of this paper is that a workspace-modelling tool with high interactive efficiency is developed for designers to precisely visualize the dexterous-finger reachable workspace, which is valuable for analysing the flexibility of the dexterous finger.
Computed radiography in an emergency department setting
Andriole, Katherine P.; Gould, Robert G.; Arenson, Ronald L.
1997-05-01
Evaluation of radiologist and non-radiologist physician acceptance of computed radiography (CR) as an alternative to film-based radiography in an emergency department (ED) is performed. All emergency department radiographs are acquired using photostimulable phosphor plates and read by a computed radiography laser reader placed in the former emergency department darkroom. Soft-copy images are simultaneously transmitted to high- and medium-resolution dual-monitor display stations located in the radiology and ED reading rooms, respectively. The on-call radiologist is automatically paged by the Radiology Information System (RIS) upon exam completion to read the new ED imaging study. Patient demographic information, including relevant clinical history, is conveyed to the radiologist via the RIS. A 'wet read' preliminary radiology report is immediately transmitted back to the ED. Radiology and ED physicians are surveyed to ascertain preferences for CR or traditional screen-film, based on system implementation, image viewing and clinical impact issues. Preliminary results indicate a preference for filmless CR among the ED physicians if digital reliability and speed issues are met. This preference appears to be independent of physician level of experience. Inexperienced radiologists-in-training appear to be less comfortable with soft-copy reading for primary diagnosis; however, additional training in soft-copy reading techniques can improve confidence. Image quality issues are most important to the radiologist, while speed and reliability are the major issues for ED physicians. Reasons for CR preference include immediate access to images on display stations, near-zero exam retake rates, and improved response time and communication between radiology and the emergency department clinicians.
Set operads in combinatorics and computer science
Méndez, Miguel A
2015-01-01
This monograph has two main objectives. The first one is to give a self-contained exposition of the relevant facts about set operads, in the context of combinatorial species and its operations. This approach has various advantages: one of them is that the definitions of combinatorial operations on species, product, sum, substitution and derivative, are simple and natural. They were designed as the set-theoretical counterparts of the homonym operations on exponential generating functions, giving an immediate insight into their combinatorial meaning. The second objective is more ambitious. Before formulating it, the authors present a brief historical account of the sources of decomposition theory. For more than forty years decompositions of discrete structures have been studied in different branches of discrete mathematics: combinatorial optimization, network and graph theory, switching design or Boolean functions, simple multi-person games and clutters, etc.
Multi-scale modeling of follicular ovulation as a reachability problem
Echenim, Nki; Sorine, Michel
2007-01-01
During each ovarian cycle, only a definite number of follicles ovulate, while the others undergo a degeneration process called atresia. We have designed a multi-scale mathematical model where ovulation and atresia result from a hormonally controlled selection process. A 2D conservation law describes the age and maturity structuration of the follicular cell population. In this paper, we focus on the operating mode of the control, through the study of the characteristics of the conservation law. We describe in particular the set of microscopic initial conditions leading to the macroscopic phenomenon of either ovulation or atresia, in the framework of backward reachable set theory.
On Reachability for Hybrid Automata over Bounded Time
Brihaye, Thomas; Geeraerts, Gilles; Ouaknine, Joël; Raskin, Jean-François; Worrell, James
2011-01-01
This paper investigates the time-bounded version of the reachability problem for hybrid automata. This problem asks whether a given hybrid automaton can reach a given target location within T time units, where T is a constant rational value. We show that, in contrast to the classical (unbounded) reachability problem, the timed-bounded version is decidable for rectangular hybrid automata provided only non-negative rates are allowed. This class of systems is of practical interest and subsumes, among others, the class of stopwatch automata. We also show that the problem becomes undecidable if either diagonal constraints or both negative and positive rates are allowed.
Mobility Tolerant Firework Routing for Improving Reachability in MANETs
Gen Motoyoshi
2014-03-01
In this paper, we investigate our mobility-assisted and adaptive broadcast routing mechanism, called Mobility Tolerant Firework Routing (MTFR), which utilizes the concept of potentials for routing and improves node reachability, especially in situations with high mobility, by including a broadcast mechanism. We perform detailed evaluations by simulations in a mobile environment and demonstrate the advantages of MTFR over conventional potential-based routing. In particular, we show that MTFR produces better reachability in many respects at the expense of a small additional transmission delay and intermediate traffic overhead, making MTFR a promising and feasible routing protocol for future mobile Internet infrastructures.
The Effects of Handedness and Reachability on Perceived Distance
Linkenauger, Sally A.; Witt, Jessica K.; Stefanucci, Jeanine K.; Bakdash, Jonathan Z.; Proffitt, Dennis R.
2009-01-01
Previous research has suggested that perceived distances are scaled by the action capabilities of the body. The present studies showed that when "reachability" is constrained due to a difficult grasp required to pick up an object, perceived distance to the object increases. Participants estimated the distances to tools with handle…
Reachability analysis for timed automata using max-plus algebra
Lu, Qi; Madsen, Michael; Milata, Martin
2012-01-01
We show that max-plus polyhedra are usable as a data structure in reachability analysis of timed automata. Drawing inspiration from the extensive work that has been done on difference bound matrices, as well as previous work on max-plus polyhedra in other areas, we develop the algorithms needed t...
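The semiring operations underlying max-plus polyhedra can be illustrated in a few lines (an illustrative sketch, not the authors' data structure): addition becomes max, multiplication becomes +, and the max-plus matrix product propagates worst-case delays along paths.

```python
# Illustrative sketch of the max-plus (tropical) semiring, the algebraic
# core of max-plus polyhedra; not the data structure from the paper.

NEG_INF = float("-inf")  # additive identity of the max-plus semiring ("no edge")

def maxplus_matmul(A, B):
    """Max-plus product: C[i][j] = max_k (A[i][k] + B[k][j])."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Two "delay" matrices; an entry is the time added along an edge.
A = [[0, 3], [NEG_INF, 0]]
B = [[0, 1], [2, 0]]
C = maxplus_matmul(A, B)
print(C)  # -> [[5, 3], [2, 0]]
```

Iterating this product computes longest (slowest) paths, which is why the max-plus view fits timing analysis.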
Observabilities and reachabilities of nonlinear DEDS and coloring graphs
Anonymous
2001-01-01
Starting from nonlinear discrete event dynamic systems, with the application background of large-scale digital integrated circuits, a new concept of coloring graphs for such systems is advanced; the necessary and sufficient condition for upper-level observability is given, and the necessary and sufficient condition for the respective reachability is simplified and improved.
Reachability Trees for High-level Petri Nets
Jensen, Kurt; Jensen, Arne M.; Jepsen, Leif Obel;
1986-01-01
the necessary analysis methods. In other papers it is shown how to generalize the concept of place- and transition invariants from place/transition nets to high-level Petri nets. Our present paper contributes to this with a generalization of reachability trees, which is one of the other important analysis...
Reachability for Finite-state Process Algebras Using Horn Clauses
Skrypnyuk, Nataliya; Nielson, Flemming
2013-01-01
In this work we present an algorithm for solving the reachability problem in finite systems that are modelled with process algebras. Our method is based on Static Analysis, in particular, Data Flow Analysis, of the syntax of a process algebraic system with multi-way synchronisation. The results...
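As a hypothetical illustration of the Horn-clause view of reachability (not the paper's Data Flow Analysis of process algebra), plain graph reachability can be written as two Horn clauses and solved by iterating to a least fixpoint:

```python
# Hypothetical illustration (not the paper's algorithm): facts edge(u, v)
# plus the Horn clauses
#   reach(start).            reach(v) <- reach(u), edge(u, v).
# solved by least-fixpoint iteration.

def reachable(edges, start):
    reach = {start}               # unit clause: reach(start)
    changed = True
    while changed:                # apply the rule until nothing new is derivable
        changed = False
        for u, v in edges:
            if u in reach and v not in reach:
                reach.add(v)
                changed = True
    return reach

edges = [("a", "b"), ("b", "c"), ("d", "e")]
print(sorted(reachable(edges, "a")))  # -> ['a', 'b', 'c']
```

The fixpoint is reached when a full pass derives nothing new, mirroring how a Horn-clause solver saturates its fact base.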
Multi-Core BDD Operations for Symbolic Reachability
van Dijk, Tom; Laarman, Alfons; van de Pol, Jan Cornelis; Heljanko, K.; Knottenbelt, W.J.
2012-01-01
This paper presents scalable parallel BDD operations for modern multi-core hardware. We aim at increasing the performance of reachability analysis in the context of model checking. Existing approaches focus on performing multiple independent BDD operations rather than parallelizing the BDD
Winning Concurrent Reachability Games Requires Doubly-Exponential Patience
Hansen, Kristoffer Arnsfelt; Koucký, Michal; Miltersen, Peter Bro
2009-01-01
We exhibit a deterministic concurrent reachability game PURGATORY_n with n non-terminal positions and a binary choice for both players in every position, so that any positional strategy for Player 1 achieving the value of the game within a given ε < 1/2 must use non-zero behavior probabilities tha...
Minimum-Cost Reachability for Priced Timed Automata
Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas Seidelin;
2001-01-01
This paper introduces the model of linearly priced timed automata as an extension of timed automata, with prices on both transitions and locations. For this model we consider the minimum-cost reachability problem: i.e. given a linearly priced timed automaton and a target state, determine the minimum cost of executions from the initial state to the target state. This problem generalizes the minimum-time reachability problem for ordinary timed automata. We prove decidability of this problem by offering an algorithmic solution, which is based on a combination of branch-and-bound techniques and a new notion of priced regions. The latter allows symbolic representation and manipulation of reachable states together with the cost of reaching them.
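The priced-region construction is needed because of continuous time; in the purely discrete analogue, where an automaton degenerates to a finite weighted graph, minimum-cost reachability is ordinary shortest path. A minimal sketch (the graph and prices below are invented for illustration):

```python
# Discrete analogue of minimum-cost reachability: Dijkstra's algorithm on a
# finite priced graph. The continuous-time case from the paper needs priced
# regions instead; this only illustrates the underlying optimization.
import heapq

def min_cost(graph, source, target):
    """graph: {node: [(successor, price), ...]}; cheapest cost or None."""
    pq, best = [(0, source)], {}
    while pq:
        cost, node = heapq.heappop(pq)
        if node in best:
            continue                      # already settled with a cheaper cost
        best[node] = cost
        if node == target:
            return cost
        for succ, price in graph.get(node, []):
            if succ not in best:
                heapq.heappush(pq, (cost + price, succ))
    return None                           # target unreachable

g = {"init": [("a", 4), ("b", 1)], "b": [("a", 1)], "a": [("goal", 2)]}
print(min_cost(g, "init", "goal"))  # -> 4  (init -> b -> a -> goal)
```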
Optimal Conditional Reachability for Multi-Priced Timed Automata
Larsen, Kim Guldstrand; Rasmussen, Jacob Illum
2005-01-01
In this paper, we prove decidability of the optimal conditional reachability problem for multi-priced timed automata, an extension of timed automata with multiple cost variables evolving according to given rates for each location. More precisely, we consider the problem of determining the minimal...
The Cost of Parameterized Reachability in Mobile Ad Hoc Networks
Delzanno, Giorgio; Traverso, Riccardo; Zavattaro, Gianluigi
2012-01-01
We investigate the impact of spontaneous movement in the complexity of verification problems for an automata-based protocol model of networks with selective broadcast communication. We first consider reachability of an error state and show that parameterized verification is decidable with polynomial complexity. We then move to richer queries and show how the complexity changes when considering properties with negation or cardinality constraints.
Delta-Complete Reachability Analysis (Part 1)
2013-12-01
Transition-based deadlock control policy using reachability graph for flexible manufacturing systems
Xiuyan Zhang
2016-02-01
Full Text Available Most existing deadlock prevention policies deal with deadlock problems arising in flexible manufacturing systems modeled with Petri nets by adding control places. Based on the reachability graph analysis, this article proposes a novel deadlock control policy that recovers the system from deadlock and livelock states to legal states and reaches the same number of states as the original plant model by adding control transitions. In order to reduce the structural complexity of the supervisor, a set covering approach is developed to minimize the number of control transitions. Finally, two flexible manufacturing system examples are presented to illustrate the proposed approach.
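The article's set covering step has its own formulation; as a generic stand-in, the classic greedy set-cover heuristic picks, at each step, the candidate control transition that covers the most still-uncovered bad states (the transition names and state sets below are hypothetical):

```python
# Generic greedy set-cover heuristic, a stand-in for the article's set
# covering approach to minimizing the number of control transitions.

def greedy_set_cover(universe, candidates):
    """candidates: {name: set_of_covered_items}; returns chosen names."""
    uncovered, chosen = set(universe), []
    while uncovered:
        # pick the candidate covering the most still-uncovered items
        name = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[name] & uncovered:
            raise ValueError("universe not coverable")
        chosen.append(name)
        uncovered -= candidates[name]
    return chosen

# Hypothetical deadlock states each control transition could recover from:
cover = greedy_set_cover(
    {1, 2, 3, 4, 5},
    {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {4, 5}},
)
print(cover)  # -> ['t1', 't3']
```

Greedy does not always find the minimum cover, but it gives the standard logarithmic approximation guarantee.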
High Interactivity Visualization Software for Large Computational Data Sets Project
National Aeronautics and Space Administration — We propose to develop a collection of computer tools and libraries called SciViz that enable researchers to visualize large scale data sets on HPC resources remotely...
Simple and Faster algorithm for Reachability in a Decremental Directed Graph
Gupta, Manoj
2015-01-01
Consider the problem of maintaining source sink reachability($st$-Reachability), single source reachability(SSR) and strongly connected component(SCC) in an edge decremental directed graph. In particular, we design a randomized algorithm that maintains with high probability: 1) $st$-Reachability in $\\tilde{O}(mn^{4/5})$ total update time. 2) $st$-Reachability in a total update time of $\\tilde{O}(n^{8/3})$ in a dense graph. 3) SSR in a total update time of $\\tilde{O}(m n^{9/10})$. 4) SCC in a ...
Basis Set Exchange: A Community Database for Computational Sciences
Schuchardt, Karen L.; Didier, Brett T.; Elsethagen, Todd O.; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared M.; Li, Jun; Windus, Theresa L.
2007-05-01
Basis sets are one of the most important input data for computational models in the chemistry, materials, biology and other science domains that utilize computational quantum mechanics methods. Providing a shared, web accessible environment where researchers can not only download basis sets in their required format, but browse the data, contribute new basis sets, and ultimately curate and manage the data as a community will facilitate growth of this resource and encourage sharing both data and knowledge. We describe the Basis Set Exchange (BSE), a web portal that provides advanced browsing and download capabilities, facilities for contributing basis set data, and an environment that incorporates tools to foster development and interaction of communities. The BSE leverages and enables continued development of the basis set library originally assembled at the Environmental Molecular Sciences Laboratory.
Basis set exchange: a community database for computational sciences.
Schuchardt, Karen L; Didier, Brett T; Elsethagen, Todd; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared; Li, Jun; Windus, Theresa L
2007-01-01
Basis sets are some of the most important input data for computational models in the chemistry, materials, biology, and other science domains that utilize computational quantum mechanics methods. Providing a shared, Web-accessible environment where researchers can not only download basis sets in their required format but browse the data, contribute new basis sets, and ultimately curate and manage the data as a community will facilitate growth of this resource and encourage sharing both data and knowledge. We describe the Basis Set Exchange (BSE), a Web portal that provides advanced browsing and download capabilities, facilities for contributing basis set data, and an environment that incorporates tools to foster development and interaction of communities. The BSE leverages and enables continued development of the basis set library originally assembled at the Environmental Molecular Sciences Laboratory.
Sparse Dataflow Analysis with Pointers and Reachability
Madsen, Magnus; Møller, Anders
2014-01-01
quadtrees. The framework is presented as a systematic modification of a traditional dataflow analysis algorithm. Our experimental results demonstrate the effectiveness of the technique for a suite of JavaScript programs. By also comparing the performance with an idealized staged approach that computes...
Improved Undecidability Results for Reachability Games on Recursive Timed Automata
Shankara Narayanan Krishna
2014-08-01
Full Text Available We study reachability games on recursive timed automata (RTA), which generalize Alur-Dill timed automata with a recursive procedure invocation mechanism similar to recursive state machines. It is known that deciding the winner in reachability games on RTA is undecidable for automata with two or more clocks, while the problem is decidable for automata with only one clock. Ouaknine and Worrell recently proposed a time-bounded theory of real-time verification, claiming that the restriction to bounded time recovers decidability for several key decision problems related to real-time verification. We revisited games on recursive timed automata under the time-bounded restriction in the hope of recovering decidability. However, we found that the problem remains undecidable for recursive timed automata with three or more clocks. Using similar proof techniques, we characterize a decidability frontier for a generalization of RTA to recursive stopwatch automata.
User-interfaces for hybrid systems: Analysis and design through hybrid reachability
Oishi, Meeko Mitsuko Karen
Hybrid systems combine discrete state dynamics, which model mode switching, with continuous state dynamics, which model the physical processes themselves. Applications of hybrid system theory to automated systems have traditionally assumed that the controller itself is an automaton which runs in parallel with the system under control. We model human interaction with hybrid systems, which involves the user, the automation's discrete mode-logic, and the underlying continuous dynamics of the physical system. Often in safety-critical systems, user-interfaces display a reduced set of information about the entire system, yet must still provide adequate information and must not confuse the user. We present (1) a method of designing a discrete event system abstraction of the hybrid system, in order to verify or design user-interfaces for hybrid human-automation systems, and (2) the relationship between user-interfaces and discrete observability properties. Using a hybrid computational tool for reachability, we find the largest region in which the system can always remain---this is the safe region of operation. By implementing a controller which arises from this computation, we mathematically guarantee that this safe region is invariant. Assigning discrete states to the computed invariant regions, we create a discrete event system from this hybrid system with safety restrictions. This abstraction can then be used in existing interface verification and design methods. A user-interface, modeled as a discrete system, must not only be reduced (extraneous information eliminated) but also "immediately observable". We derive conditions for immediate observability, in which the current state can be constructed from the current output and the last occurring event. Based on finite state machine state-reduction techniques, we synthesize an output for remote user-interfaces which fulfills this property. Aircraft are prime examples of complex, safety-critical systems. In
M. De la Sen
2007-01-01
Full Text Available This paper investigates the properties of reachability, observability, controllability, and constructibility of positive discrete-time linear time-invariant dynamic systems when the sampling instants are chosen aperiodically. Reachability and observability hold if and only if a relevant matrix defining each of those properties is monomial for the set of chosen sampling instants provided that the continuous-time system is positive. Controllability and constructibility hold globally only asymptotically under close conditions to the above ones guaranteeing reachability/observability provided that the matrix of dynamics of the continuous-time system, required to be a Metzler matrix for the system's positivity, is furthermore a stability matrix while they hold in finite time only for regions excluding the zero vector of the first orthant of the state space or output space, respectively. Some related properties can be deduced for continuous-time systems and for piecewise constant discrete-time ones from the above general framework.
A Parametric Modelling Method for Dexterous Finger Reachable Workspaces
2016-01-01
The well-known algorithms, such as the graphic method, analytical method or numerical method, have some defects when modelling the dexterous finger workspace, which is a significant kinematical feature of dexterous hands and valuable for grasp planning, motion control and mechanical design. A novel modelling method with convenient and parametric performances is introduced to generate the dexterous-finger reachable workspace. This method constructs the geometric topology of the dexterous-finge...
Computing Convex Coverage Sets for Multi-Objective Coordination Graphs
D.M. Roijers; S. Whiteson; F.A. Oliehoek
2013-01-01
Many real-world decision problems require making trade-offs between multiple objectives. However, in some cases, the relative importance of the objectives is not known when the problem is solved, precluding the use of single-objective methods. Instead, multi-objective methods, which compute the set
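The convex coverage set in the paper is pruned with linear programs; a simpler, related building block is Pareto pruning, which discards any value vector weakly dominated in every objective. A sketch under that simplification:

```python
# Pareto pruning: a simplified relative of the convex-coverage-set pruning
# used in multi-objective methods (the paper's CCS pruning uses LPs instead).

def pareto_prune(vectors):
    """Keep only vectors not weakly dominated by a different vector."""
    def dominated(u, by):
        return all(b >= a for a, b in zip(u, by)) and by != u
    return [v for v in vectors
            if not any(dominated(v, w) for w in vectors)]

# Hypothetical value vectors for two objectives:
vals = [(3, 1), (2, 2), (1, 3), (2, 1), (0, 0)]
print(pareto_prune(vals))  # -> [(3, 1), (2, 2), (1, 3)]
```

Here (2, 1) falls to (3, 1) and (0, 0) to everything; the survivors form the Pareto front, of which the convex coverage set is a further refinement.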
Sensory emission rates from personal computers and television sets
Wargocki, Pawel; Bako-Biro, Zsolt; Baginska, S.
2003-01-01
Sensory emissions from personal computers (PCs), PC monitors + PC towers, and television sets (TVs) having been in operation for 50, 400 and 600 h were assessed by a panel of 48 subjects. One brand of PC tower and four brands of PC monitors were tested. Within each brand, cathode-ray tube (CRT...
Affective Computing Model for the Set Pair Users on Twitter
Chunying Zhang
2013-01-01
Full Text Available Affective computing is the computation of sentiment: how sentiment is generated and which aspects affect it. Different factors, however, often make the sentiment a user expresses uncertain. Twitter, as a real-time and timely information medium, has become a natural vehicle for users to express sentiment. Addressing the diverse forms in which Twitter information expresses sentiment, this paper constructs an affective computing model based on set pair theory, starting from the differences in how tweets are constituted. It analyzes and computes user sentiment from text, emoticons, pictures, and other angles, distinguishing the positive, negative, and uncertain emotion of the user in a single tweet; consolidates the weights of the various parts of the emotional information; and builds a hierarchical set pair affective computing model for Twitter users, offering more useful data support for the relevant departments and businesses.
Optimal Conditional Reachability for Multi-Priced Timed Automata
Larsen, Kim Guldstrand; Rasmussen, Jacob Illum
2005-01-01
In this paper, we prove decidability of the optimal conditional reachability problem for multi-priced timed automata, an extension of timed automata with multiple cost variables evolving according to given rates for each location. More precisely, we consider the problem of determining the minimal cost of reaching a given target state, with respect to some primary cost variable, while respecting upper bound constraints on the remaining (secondary) cost variables. Decidability is proven by constructing a zone-based algorithm that always terminates while synthesizing the optimal cost with a single secondary cost variable. The approach is then lifted to any number of secondary cost variables.
Reachability-based impact as a measure for insiderness
Probst, Christian W.; Hansen, René Rydhof
2013-01-01
Insider threats pose a difficult problem for many organisations. While organisations in principle would like to judge the risk posed by a specific insider threat, this is in general not possible. This limitation is caused partly by the lack of models for human behaviour, partly by restrictions...... of impact of an insider, and present different realisations of impact. The suggested approach results in readily usable techniques that allow to get a quick overview of potential insider threats based on locations and assets reachable by employees. We present several variations ranging from pure...
Cumulative hierarchies and computability over universes of sets
Domenico Cantone
2008-05-01
Full Text Available Various metamathematical investigations, beginning with Fraenkel's historical proof of the independence of the axiom of choice, called for suitable definitions of hierarchical universes of sets. This led to the discovery of such important cumulative structures as the one singled out by von Neumann (generally taken as the universe of all sets) and Gödel's universe of the so-called constructibles. Variants of those are exploited occasionally in studies concerning the foundations of analysis (according to Abraham Robinson's approach), or concerning non-well-founded sets. We hence offer a systematic presentation of these many structures, partly motivated by their relevance and pervasiveness in mathematics. As we report, numerous properties of hierarchy-related notions, such as rank, have been verified with the assistance of the ÆtnaNova proof-checker. Through SETL and Maple implementations of procedures which effectively handle Ackermann's hereditarily finite sets, we illustrate a particularly significant case among those in which the entities which form a universe of sets can be algorithmically constructed and manipulated; hereby, the fruitful bearing of cumulative set hierarchies on pure mathematics ramifies into the realms of theoretical computer science and algorithmics.
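The Ackermann correspondence mentioned above is concretely computable: the natural number n encodes the hereditarily finite set whose members are the sets encoded by the positions of the 1-bits of n. A sketch (using frozensets for HF sets; independent of the SETL/Maple implementations the abstract refers to):

```python
# Ackermann bijection between natural numbers and hereditarily finite sets:
#   encode(S) = sum over x in S of 2 ** encode(x)
# and decoding reads the 1-bits of n as the encodings of the members.

def decode(n):
    """Natural number -> hereditarily finite set (as nested frozensets)."""
    members, i = set(), 0
    while n:
        if n & 1:
            members.add(decode(i))
        n >>= 1
        i += 1
    return frozenset(members)

def encode(s):
    """Hereditarily finite set -> natural number."""
    return sum(2 ** encode(x) for x in s)

three = decode(3)      # bits 0 and 1 -> {decode(0), decode(1)} = {∅, {∅}}
print(encode(three))   # -> 3 (round trip)
```

The round trip encode(decode(n)) == n holds for every n, which is exactly the bijectivity of Ackermann's coding.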
Data Sets, Ensemble Cloud Computing, and the University Library (Invited)
Plale, B. A.
2013-12-01
The environmental researcher at the public university has new resources at their disposal to aid in research and publishing. Cloud computing provides compute cycles on demand for analysis and modeling scenarios. Cloud computing is attractive for e-Science because of the ease with which cores can be accessed on demand, and because the virtual machine implementation that underlies cloud computing reduces the cost of porting a numeric or analysis code to a new platform. Many libraries at larger universities are developing the e-Science skills to serve as repositories of record for publishable data sets. But these are confusing times for the publication of data sets from environmental research. The large publishers of scientific literature are advocating a process whereby data sets are tightly tied to a publication. In other words, a paper published in the scientific literature that gives results based on data must have an associated, accessible data set that backs up the results. This approach supports reproducibility of results in that publishers maintain a repository for the papers they publish, and the data sets that the papers used. Does such a solution, which maps one data set (or subset) to one paper, fit the needs of the environmental researcher who, among other things, uses complex models, mines longitudinal databases, and generates observational results? The second school of thought has emerged out of NSF-, NOAA-, and NASA-funded efforts over time: data sets exist coherent at a location, as occurs at the National Snow and Ice Data Center (NSIDC). But when a collection is coherent, reproducibility of individual results is more challenging. We argue for a third, complementary option: the university repository as a location for data sets produced as a result of university-based research. This location for a repository relies on the expertise developing in university libraries across the country, and leverages tools such as are being developed
An Abstract Reachability Approach by Combining HOL Induction and Multiway Decision Graphs
Sa'ed Abed; Otmane Ait Mohamed; Ghiath Al-Sammane
2009-01-01
In this paper, we provide the necessary infrastructure to define an abstract state exploration in the HOL theorem prover. Our infrastructure is based on a deep embedding of the Multiway Decision Graphs (MDGs) theory in HOL. MDGs generalize Reduced Ordered Binary Decision Diagrams (ROBDDs) to represent and manipulate a subset of first-order logic formulae. The MDGs embedding is based on the logical formulation of an MDG as Directed Formulae (DF). Then, the MDGs operations are defined and the correctness proof of each operation is provided. The MDG reachability algorithm is then defined as a conversion that uses our MDG theory within HOL. Finally, a set of experiments on benchmark circuits was conducted to demonstrate the applicability and to measure the performance of our approach.
Extensions of Clarke's proximal characterization for reachable mappings of differential inclusions
Donchev, T.; Dontchev, A. L.
2008-12-01
In this paper we show that Clarke's proximal characterization for reachable mappings of Lipschitz continuous differential inclusions is valid for a larger class of continuous and locally one-sided Kamke continuous inclusions. We also give a new proximal characterization for reachable mappings of upper semi-continuous differential inclusions.
Kolmogorov complexities Kmax, Kmin on computable partially ordered sets
Ferbus-Zanda, Marie
2008-01-01
We introduce a machine-free mathematical framework to get a natural formalization of some general notions of infinite computation in the context of Kolmogorov complexity. Namely, the classes Max^{X\to D}_{PR} and Max^{X\to D}_{Rec} of functions X \to D which are pointwise maximum of partial or total computable sequences of functions where D = (D,_{ct} K^{0',D}. We characterize the orders leading to each case. We also show that K^D_{min}, K^D_{max} cannot be both much smaller than K^D at any point. These results are proved in a more general setting with two orders on D, one extending the other.
Computational Study on a PTAS for Planar Dominating Set Problem
Qian-Ping Gu
2013-01-01
Full Text Available The dominating set problem is a core NP-hard problem in combinatorial optimization and graph theory, and has many important applications. Baker [JACM 41, 1994] introduces a k-outer planar graph decomposition-based framework for designing polynomial time approximation schemes (PTAS) for a class of NP-hard problems in planar graphs. It is mentioned that the framework can be applied to obtain an O(2^{ck} n) time, where c is a constant, (1+1/k)-approximation algorithm for the planar dominating set problem. We show that the approximation ratio achieved by the mentioned application of the framework is not bounded by any constant for the planar dominating set problem. We modify the application of the framework to give a PTAS for the planar dominating set problem. With k-outer planar graph decompositions, the modified PTAS has an approximation ratio (1 + 2/k). Using 2k-outer planar graph decompositions, the modified PTAS achieves the approximation ratio (1 + 1/k) in O(2^{2ck} n) time. We report a computational study on the modified PTAS. Our results show that the modified PTAS is practical.
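For contrast with the PTAS discussed above (and not part of it), the simplest baseline is the greedy heuristic, which repeatedly picks the vertex dominating the most still-undominated vertices; its guarantee is only logarithmic in general:

```python
# Greedy dominating-set heuristic (baseline only; not the Baker-style PTAS).
# A vertex dominates itself and its neighbours.

def greedy_dominating_set(adj):
    """adj: {v: set_of_neighbours}; returns a dominating set of vertices."""
    undominated, dom = set(adj), set()
    while undominated:
        # pick the vertex whose closed neighbourhood covers the most
        # still-undominated vertices
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dom.add(v)
        undominated -= {v} | adj[v]
    return dom

# A 5-cycle: every vertex dominates itself and two neighbours.
cycle = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
ds = greedy_dominating_set(cycle)
print(ds)  # a valid dominating set; of size 2 on this graph
```

On the 5-cycle the greedy choice happens to be optimal (two vertices suffice), but on general planar graphs only the PTAS gives a (1 + 1/k) guarantee.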
Parallel Computation of the Topology of Level Sets
Pascucci, V; Cole-McLaughlin, K
2004-12-16
This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to preprocess the domain mesh to allow optimal computation of isosurfaces with minimal overhead storage. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. Data exploration time is reduced since the user understands the evolution of level set components with changing isovalue. The Augmented Contour Tree provides even more accurate information, segmenting the range space of the scalar field into portions of invariant topology. The exploration time for a single isosurface is also improved since its genus is known in advance. Our first new algorithm augments any given Contour Tree with the Betti numbers of all possible corresponding isocontours in linear time with the size of the tree. Moreover we show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus, we improve the time complexity of our previous approach [10] from O(m log m) to O(n log n + m), where m is the number of cells and n is the number of vertices in the domain of F. Our second contribution is a new divide-and-conquer algorithm that computes the Augmented Contour Tree with improved efficiency. The approach computes the output Contour Tree by merging two intermediate Contour Trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an independent function that computes the tree for a single cell. We have implemented this function for the trilinear interpolant and plan to replace it with higher order interpolants when needed. The time complexity is O(n + t log n), where t is the number of critical points of F. For the first time
A DNC function that computes no effectively bi-immune set
Beros, Achilles A.
2013-01-01
In Diagonally Non-Computable Functions and Bi-Immunity, Carl Jockusch and Andrew Lewis proved that every DNC function computes a bi-immune set. They asked whether every DNC function computes an effectively bi-immune set. We construct a DNC function that computes no effectively bi-immune set, thereby answering their question in the negative.
On divisible weighted Dynkin diagrams and reachable elements
Panyushev, Dmitri I
2010-01-01
Let D(e) denote the weighted Dynkin diagram of a nilpotent element $e$ in complex simple Lie algebra $\\g$. We say that D(e) is divisible if D(e)/2 is again a weighted Dynkin diagram. (That is, a necessary condition for divisibility is that $e$ is even.) The corresponding pair of nilpotent orbits is said to be friendly. In this note, we classify the friendly pairs and describe some of their properties. We also observe that any subalgebra sl(3) in $\\g$ determines a friendly pair. Such pairs are called A2-pairs. It turns out that the centraliser of the lower orbit in an A2-pair has some remarkable properties. Let $Gx$ be such an orbit and $h$ a characteristic of $x$. Then $h$ determines the Z-grading of the centraliser $z=z(x)$. We prove that $z$ is generated by the Levi subalgebra $z(0)$ and two elements in $z(1)$. In particular, (1) the nilpotent radical of $z$ is generated by $z(1)$ and (2) $x\\in [z,z]$. The nilpotent elements having the last property are called reachable.
Liveness and Reachability Analysis of BPMN Process Models
Anass Rachdi
2016-06-01
Full Text Available Business processes are usually defined by business experts, who require intuitive and informal graphical notations such as BPMN (Business Process Model and Notation) for documenting and communicating their organization's activities and behavior. However, BPMN has not been provided with a formal semantics, which limits the analysis of BPMN models to solely informal techniques such as simulation. In order to address this limitation and use formal verification, it is necessary to define a certain "mapping" between BPMN and a formal language such as Communicating Sequential Processes (CSP) or Petri Nets (PN). This paper proposes a method for the verification of BPMN models by defining a formal semantics of BPMN in terms of a mapping to Time Petri Nets (TPN), which are equipped with very efficient analytical techniques. After the translation of BPMN models to TPN, verification is done to ensure that some functional properties are satisfied by the model under investigation, namely liveness and reachability properties. The main advantage of our approach over existing ones is that it takes into account the time component in modeling business process models. An example is used throughout the paper to illustrate the proposed method.
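Setting aside the time component that TPNs add, the reachability check at the heart of such verification can be sketched as an exploration of the marking graph of an ordinary Petri net (the three-place workflow below is invented for illustration):

```python
# Untimed Petri net reachability via breadth-first exploration of the
# marking graph; the paper's Time Petri Nets add timing on top of this.
from collections import deque

def reachable_markings(places, transitions, initial):
    """transitions: list of (consume, produce) dicts over place names."""
    seen, queue = {initial}, deque([initial])
    while queue:
        m = dict(zip(places, queue.popleft()))
        for consume, produce in transitions:
            if all(m.get(p, 0) >= k for p, k in consume.items()):  # enabled?
                m2 = dict(m)
                for p, k in consume.items():
                    m2[p] -= k
                for p, k in produce.items():
                    m2[p] = m2.get(p, 0) + k
                mark = tuple(m2[p] for p in places)
                if mark not in seen:
                    seen.add(mark)
                    queue.append(mark)
    return seen

# start --t1--> task --t2--> end  (a linear three-place workflow)
places = ("start", "task", "end")
ts = [({"start": 1}, {"task": 1}), ({"task": 1}, {"end": 1})]
marks = reachable_markings(places, ts, (1, 0, 0))
print((0, 0, 1) in marks)  # -> True: the "end" marking is reachable
```

Reachability of the final marking corresponds to the process being able to complete; liveness asks the stronger question that every transition can always eventually fire again.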
Mobile device-to-device distributed computing using data sets
Remédios, Diogo; Teófilo, António; Paulino, Hervé; Lourenço, João
2015-01-01
The rapidly increasing computing power, available storage and communication capabilities of mobile devices make it possible to start processing and storing data locally, rather than offloading it to remote servers, allowing scenarios of mobile clouds without infrastructure dependency. We can now aim at connecting neighboring mobile devices, creating a local mobile cloud that provides storage and computing services on locally generated data. In this paper, we describe an early overview of a dis...
Computation of the Metric Average of 2D Sets with Piecewise Linear Boundaries
Shay Kels
2010-07-01
Full Text Available The metric average is a binary operation between sets in R^n which is used in the approximation of set-valued functions. We introduce an algorithm that applies tools of computational geometry to the computation of the metric average of 2D sets with piecewise linear boundaries.
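The paper's algorithm handles piecewise linear boundaries; as a much cruder sketch of the same operation, the defining formula can be applied directly to finite point sets, averaging each point with its nearest point in the other set:

```python
# Naive metric t-average for finite point sets in the plane (an assumption-
# laden stand-in for the paper's computational-geometry algorithm):
#   A (+)_t B = { t*a + (1-t)*proj_B(a) } U { t*proj_A(b) + (1-t)*b }
import math

def nearest(p, pts):
    return min(pts, key=lambda q: math.dist(p, q))

def metric_average(A, B, t):
    """Metric t-average of finite point sets A, B."""
    def avg(p, q):
        return tuple(t * x + (1 - t) * y for x, y in zip(p, q))
    return ({avg(a, nearest(a, B)) for a in A}
            | {avg(nearest(b, A), b) for b in B})

A = [(0.0, 0.0), (2.0, 0.0)]
B = [(0.0, 1.0)]
print(sorted(metric_average(A, B, 0.5)))  # -> [(0.0, 0.5), (1.0, 0.5)]
```

At t = 1 the operation returns A and at t = 0 it returns B, the boundary property that makes the metric average useful for interpolating set-valued functions.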
Eric Psota
2010-01-01
Full Text Available The error mechanisms of iterative message-passing decoders for low-density parity-check codes are studied. A tutorial review is given of the various graphical structures, including trapping sets, stopping sets, and absorbing sets that are frequently used to characterize the errors observed in simulations of iterative decoding of low-density parity-check codes. The connections between trapping sets and deviations on computation trees are explored in depth using the notion of problematic trapping sets in order to bridge the experimental and analytic approaches to these error mechanisms. A new iterative algorithm for finding low-weight problematic trapping sets is presented and shown to be capable of identifying many trapping sets that are frequently observed during iterative decoding of low-density parity-check codes on the additive white Gaussian noise channel. Finally, a new method is given for characterizing the weight of deviations that result from problematic trapping sets.
High Interactivity Visualization Software for Large Computational Data Sets Project
National Aeronautics and Space Administration — Existing scientific visualization tools have specific limitations for large scale scientific data sets. Of these four limitations can be seen as paramount: (i)...
Defining Effectiveness Using Finite Sets A Study on Computability
Macedo, Hugo Daniel dos Santos; Haeusler, Edward H.; Garcia, Alex
2016-01-01
This paper studies effectiveness in the domain of computability. In the context of model-theoretical approaches to effectiveness, where a function is considered effective if there is a model containing a representation of such function, our definition relies on a model provided by functions betwe...
A Memory and Computation Efficient Sparse Level-Set Method
Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.
2011-01-01
Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the le
Computing autocatalytic sets to unravel inconsistencies in metabolic network reconstructions
Schmidt, R.; Waschina, S.; Boettger-Schmidt, D.
2015-01-01
MOTIVATION: Genome-scale metabolic network reconstructions have been established as a powerful tool for the prediction of cellular phenotypes and metabolic capabilities of organisms. In recent years, the number of network reconstructions has been constantly increasing, mostly because of the availability of novel (semi-)automated procedures, which enabled the reconstruction of metabolic models based on individual genomes and their annotation. The resulting models are widely used in numerous applications. However, the accuracy and predictive power of network reconstructions are commonly limited by inherent inconsistencies and gaps. RESULTS: Here we present a novel method to validate metabolic network reconstructions based on the concept of autocatalytic sets. Autocatalytic sets correspond to collections of metabolites that, besides enzymes and a growth medium, are required to produce all biomass...
Computing Preferred Extensions for Argumentation Systems with Sets of Attacking Arguments
Nielsen, Søren Holbech; Parsons, Simon
2006-01-01
The hitherto most abstract, and hence general, argumentation system is the one described by Dung in a paper from 1995. This framework does not allow for joint attacks on arguments, but in a recent paper we adapted it to support such attacks, and proved that this adapted framework enjoyed the same formal properties as that of Dung. One problem posed by Dung's original framework, which was neglected for some time, is how to compute preferred extensions of the argumentation systems. However, in 2001, in a paper by Doutre and Mengin, a procedure was given for enumerating preferred extensions...
A Joint Criterion for Reachability and Observability of Nonuniformly Sampled Discrete Systems
Fúster-Sabater, Amparo
2010-01-01
A joint characterization of reachability (controllability) and observability (constructibility) for linear SISO nonuniformly sampled discrete systems is presented. The work generalizes to the nonuniform sampling the criterion known for the uniform sampling. Emphasis is on the nonuniform sampling sequence, which is believed to be an additional element for analysis and handling of discrete systems.
On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems
Junge, Oliver; Kevrekidis, Ioannis G.
2017-06-01
We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.
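The variational idea, minimizing the distance between a finite point set and its image under the dynamics, can be sketched in one dimension with a logistic map and a crude random search. The map, point count, and optimizer below are illustrative choices, not the paper's (far more general) setting:

```python
import random

def f(x):
    return 2.5 * x * (1.0 - x)   # logistic map as a toy dynamical system

def objective(pts):
    # sum of squared distances from each image f(p) to its nearest point
    return sum(min((f(p) - q) ** 2 for q in pts) for p in pts)

random.seed(0)
pts = [0.1, 0.9]                 # initial 2-point candidate set
best = objective(pts)
for _ in range(20000):           # naive random-search minimization
    i = random.randrange(len(pts))
    cand = list(pts)
    cand[i] += random.uniform(-0.05, 0.05)
    val = objective(cand)
    if val < best:
        pts, best = cand, val
# pts has drifted toward an invariant configuration of f
# (the fixed points of this map are 0 and 0.6)
```

A zero objective characterizes an invariant set of the sampled points, which is exactly the property the paper's minimization targets.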
Support-Intuitionistic Fuzzy Set: A New Concept for Soft Computing
Xuan Thao Nguyen
2015-03-01
Today, soft computing is widely used in solving real-world problems, such as problems in economics, finance and banking. Many new theories and tools have been proposed or improved to make soft computing more efficient, among them fuzzy set theory (L. Zadeh, 1965) and intuitionistic fuzzy sets (K. Atanassov, 1986). In this paper, we introduce the new notion of a support-intuitionistic fuzzy (SIF) set, which combines an intuitionistic fuzzy set with a fuzzy set; a SIF set is thus a direct extension of fuzzy sets and of Atanassov's intuitionistic fuzzy sets. We then define some operators on support-intuitionistic fuzzy sets and investigate some properties of these operators.
Granularity of Knowledge Computed by Genetic Algorithms Based on Rough Sets Theory
Wenyuan Yang; Xiaoping Ye; Yong Tang; Pingping Wei
2006-01-01
Rough set philosophy hinges on the granularity of data, which is used to build all its basic concepts, like approximations, dependencies and reduction. Genetic algorithms provide a general framework to optimize solutions of complex systems without depending on the problem domain, and are robust for many kinds of problems. The paper combines genetic algorithms and rough set theory to compute the granularity of knowledge, illustrated on an example information table. The combination enables us to compute the granularity of knowledge effectively, and is also useful for automatic computing and information processing.
A secure multi-party computation solution to intersection problems of sets and rectangles
LI Shundong; DAI Yiqi; WANG Daoshun; LUO Ping
2006-01-01
Secure multi-party computation (SMC) is a research focus in the international cryptographic community. At present, there is no SMC solution to the intersection problem of sets. In this paper, we first propose an SMC solution to this problem. Applying the Cantor encoding method to computational geometry problems, and based on the solution to the set-intersection problem, we further propose solutions to the points-inclusion problem and the intersection problem of rectangles, and prove their privacy-preserving property with the widely accepted simulation paradigm. Compared with the known solutions, these new solutions have lower computational and communication complexity, and have an obvious superiority in both respects.
Which Setting to Choose: Comparison of Whole-Class vs. Small-Group Computer Simulation Use
Smetana, Lara K.; Bell, Randy L.
2014-01-01
Studies considering whole-class use of computer simulations are limited, despite the increasing interest in this mode of use. The current study explored how a collection of computer simulations was integrated into both whole-class and small-group instructional settings during a high school chemistry unit on atomic structure. Participants included…
Robust fault detection of linear systems using a computationally efficient set-membership method
Tabatabaeipour, Mojtaba; Bak, Thomas
2014-01-01
In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past...
King, James M.; And Others
The materials described here represent the conversion of a highly popular student workbook "Sets, Probability and Statistics: The Mathematics of Life Insurance" into a computer program. The program is designed to familiarize students with the concepts of sets, probability, and statistics, and to provide practice using real life examples. It also…
Fang Gensun; Ye Peixin
2005-01-01
The order of computational complexity of the bounded linear functional approximation problem is determined for the generalized Sobolev class W_p^Λ(I^d) and the Nikolskii class H_∞^k(I^d) in the worst-case (deterministic), stochastic and average-case settings, from which conclusions are drawn about the bounded linear functional approximation problem for these classes in the stochastic and average-case settings.
Evolution of Autocatalytic Sets in Computational Models of Chemical Reaction Networks
Hordijk, Wim
2016-06-01
Several computational models of chemical reaction networks have been presented in the literature in the past, showing the appearance and (potential) evolution of autocatalytic sets. However, the notion of autocatalytic sets has been defined differently in different modeling contexts, each one having some shortcoming or limitation. Here, we review four such models and definitions, and then formally describe and analyze them in the context of a mathematical framework for studying autocatalytic sets known as RAF theory. The main results are that: (1) RAF theory can capture the various previous definitions of autocatalytic sets and is therefore more complete and general, (2) the formal framework can be used to efficiently detect and analyze autocatalytic sets in all of these different computational models, (3) autocatalytic (RAF) sets are indeed likely to appear and evolve in such models, and (4) this could have important implications for a possible metabolism-first scenario for the origin of life.
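The RAF-detection idea, repeatedly discarding reactions whose reactants or catalysts are not reachable from the food set, admits a compact fixpoint sketch. The toy network and names below are illustrative, not the RAF-theory reference implementation:

```python
food = {"a", "b"}
# reaction: (reactants, products, catalysts) -- toy data for illustration
reactions = {
    "r1": ({"a", "b"}, {"c"}, {"c"}),   # catalyzed by its own product
    "r2": ({"c"}, {"d"}, {"d"}),
    "r3": ({"x"}, {"y"}, {"c"}),        # reactant x is never producible
}

def closure(food, rxns):
    # molecules producible from the food set (catalysis ignored here,
    # a simplification relative to full RAF theory)
    avail, changed = set(food), True
    while changed:
        changed = False
        for reac, prod, _ in rxns.values():
            if reac <= avail and not prod <= avail:
                avail |= prod
                changed = True
    return avail

def max_raf(food, reactions):
    # fixpoint: drop reactions whose reactants or catalysts are unreachable
    rxns = dict(reactions)
    while True:
        avail = closure(food, rxns)
        keep = {k: v for k, v in rxns.items()
                if v[0] <= avail and v[2] & avail}
        if keep == rxns:
            return rxns
        rxns = keep

raf = max_raf(food, reactions)   # r1 and r2 survive; r3 is pruned
```

Because each pass only removes reactions, the iteration terminates, and the surviving set is self-sustaining in the sense the abstract describes.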
Computing steerable principal components of a large set of images and their rotations.
Ponce, Colin; Singer, Amit
2011-11-01
We present here an efficient algorithm to compute the Principal Component Analysis (PCA) of a large image set consisting of images and, for each image, the set of its uniform rotations in the plane. We do this by pointing out the block circulant structure of the covariance matrix and utilizing that structure to compute its eigenvectors. We also demonstrate the advantages of this algorithm over similar ones with numerical experiments. Although it is useful in many settings, we illustrate the specific application of the algorithm to the problem of cryo-electron microscopy.
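The structural fact exploited here, that a circulant matrix is diagonalized by the discrete Fourier basis, can be checked directly. This sketch uses a pure-Python DFT and a symmetric first column (so the eigenvalues are real); it is illustrative, not the paper's steerable-PCA algorithm:

```python
import cmath

c = [4.0, 1.0, 0.5, 1.0]    # first column of the circulant (chosen symmetric)
n = len(c)
C = [[c[(i - j) % n] for j in range(n)] for i in range(n)]

def dft(x):
    m = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / m) for j in range(m))
            for k in range(m)]

eigvals = dft(c)             # eigenvalues are the DFT of the first column

# verify C v_k = lambda_k v_k for every Fourier mode v_k
for k in range(n):
    v = [cmath.exp(-2j * cmath.pi * i * k / n) for i in range(n)]
    Cv = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
    assert all(abs(Cv[i] - eigvals[k] * v[i]) < 1e-9 for i in range(n))
```

For an n-by-n circulant block this replaces a dense O(n^3) eigensolve with an O(n log n) FFT, which is what makes the eigendecomposition of the block circulant covariance tractable for large image sets.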
Impact Of Various Factors On Probability Of Reachability In Manet: A Survey
Chander Kuma
2011-10-01
The Probability of Reachability (POR) is defined as the fraction of possible reachable routes to all possible routes between all different sources and all different destinations. In a network like a Mobile Ad-hoc Network (MANET), an adequate level of POR is desirable for its smooth functioning. Its value depends upon various factors such as Transmission Range (T), Number of Nodes (N), node mobility, channel fading, and the shape and size of the region where the ad-hoc network is to be deployed. To find the impact of N, T, size and shape on the value of POR, a shortest path routing algorithm was implemented in MATLAB and the effect of the above parameters was studied. We observe a significant impact on POR values of varying not only N and T but also the size and shape of the region.
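A Monte-Carlo sketch of the POR quantity: drop nodes uniformly in a square region, connect pairs within the transmission range, and count reachable ordered pairs via BFS. The function and parameters are invented for illustration; the survey's MATLAB study is analogous but not this code:

```python
import random
from collections import deque

def por_estimate(n_nodes, t_range, size, seed=1):
    # place N nodes uniformly in a size x size square, link pairs within
    # transmission range T, and return the fraction of reachable pairs
    rng = random.Random(seed)
    pts = [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n_nodes)]
    adj = [[j for j in range(n_nodes) if j != i and
            (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2
            <= t_range ** 2]
           for i in range(n_nodes)]
    reached = 0
    for s in range(n_nodes):            # BFS from every source
        seen, q = {s}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        reached += len(seen) - 1
    return reached / (n_nodes * (n_nodes - 1))
```

With a fixed node placement, enlarging T can only add links, so the estimate is monotone in the transmission range, matching the qualitative trend the survey reports.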
The RISC (Reduced Instruction Set Computer) Architecture and Computer Performance Evaluation.
1986-03-01
Approved for public release; distribution is unlimited.
Delta-Complete Analysis for Bounded Reachability of Hybrid Systems
2014-07-16
can occur in realistic hybrid systems, such as polynomials, trigonometric functions, and solutions of Lipschitz-continuous ODEs. The goal of this... systems are Type 2 computable, such as polynomials, exponentiation, logarithm, trigonometric functions, and solution functions of Lipschitz-continuous... comes from the need of solving logic formulas over the real numbers with nonlinear functions, which is notoriously hard. Recently, we have defined the δ
Approximating the Value of a Concurrent Reachability Game in the Polynomial Time Hierarchy
Frederiksen, Søren Kristoffer Stiil; Miltersen, Peter Bro
2013-01-01
We show that the value of a finite-state concurrent reachability game can be approximated to arbitrary precision in TFNP[NP], that is, in the polynomial time hierarchy. Previously, no better bound than PSPACE was known for this problem. The proof is based on formulating a variant of the state reduction algorithm for Markov chains using arbitrary precision floating point arithmetic and giving a rigorous error analysis of the algorithm.
Theory and computation of disturbance invariant sets for discrete-time linear systems
Kolmanovsky Ilya
1998-01-01
This paper considers the characterization and computation of invariant sets for discrete-time, time-invariant, linear systems with disturbance inputs whose values are confined to a specified compact set but are otherwise unknown. The emphasis is on determining maximal disturbance-invariant sets X that belong to a specified subset Γ of the state space. Such d-invariant sets have important applications in control problems where there are pointwise-in-time state constraints of the form x(t) ∈ Γ. One purpose of the paper is to unite and extend in a rigorous way disparate results from the prior literature. In addition there are entirely new results. Specific contributions include: exploitation of the Pontryagin set difference to clarify conceptual matters and simplify mathematical developments, special properties of maximal invariant sets and conditions for their finite determination, algorithms for generating concrete representations of maximal invariant sets, practical computational questions, extension of the main results to general Lyapunov stable systems, and applications of the computational techniques to the bounding of state and output response. Results on Lyapunov stable systems are applied to the implementation of a logic-based, nonlinear multimode regulator. For plants with disturbance inputs and state-control constraints it enlarges the constraint-admissible domain of attraction. Numerical examples illustrate the various theoretical and computational results.
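For a scalar instance the maximal disturbance-invariant set iteration, with the Pontryagin set difference specialized to intervals, reduces to a few lines. This is a sketch of the idea under a 1-D assumption, not the paper's general polytopic algorithms:

```python
def max_invariant_interval(a, w_max, x_max, tol=1e-12):
    # maximal disturbance-invariant subset [-b, b] of [-x_max, x_max] for
    # x+ = a*x + w with |w| <= w_max; the Pontryagin difference of the
    # intervals [-b, b] and [-w_max, w_max] is [-(b - w_max), b - w_max]
    b = x_max
    while True:
        shrunk = b - w_max                    # Pontryagin set difference
        if shrunk < 0:
            return None                       # no nonempty invariant set
        new_b = min(b, shrunk / abs(a)) if a != 0 else b
        if abs(new_b - b) <= tol:
            return new_b
        b = new_b
```

For a = 0.5, w_max = 0.25, the whole constraint set [-1, 1] is already invariant (|0.5·(±1)| + 0.25 ≤ 1), while for the unstable a = 2 the iteration shrinks to the empty set, mirroring the finite-determination discussion in the paper.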
Model Predictive Control considering Reachable Range of Wheels for Leg / Wheel Mobile Robots
Suzuki, Naito; Nonaka, Kenichiro; Sekiguchi, Kazuma
2016-09-01
Obstacle avoidance is one of the important tasks for mobile robots. In this paper, we study obstacle avoidance control for mobile robots equipped with four legs comprised of a three-DoF SCARA leg/wheel mechanism, which enables the robot to change its shape to adapt to environments. Our previous method achieves obstacle avoidance by model predictive control (MPC) considering obstacle size and lateral wheel positions. However, this method does not ensure the existence of joint angles which achieve the reference wheel positions calculated by MPC. In this study, we propose a model predictive control considering the reachable ranges of the wheel positions, combining multiple linear constraints, where each reachable range is approximated as a convex trapezoid. We thus formulate the MPC as a quadratic program with linear constraints for the nonlinear problem of longitudinal and lateral wheel position control. The optimization of the MPC calculates the reference wheel positions, while each joint angle is determined by inverse kinematics. By considering reachable ranges explicitly, the optimal joint angles are calculated, which enables the wheels to reach the reference wheel positions. We verify the advantages by comparing the proposed method with the previous method through numerical simulations.
Longitudinal evaluation of upper extremity reachable workspace in ALS by Kinect sensor.
de Bie, Evan; Oskarsson, Bjorn; Joyce, Nanette C; Nicorici, Alina; Kurillo, Gregorij; Han, Jay J
2017-02-01
Our objective was to evaluate longitudinal changes in Microsoft Kinect measured upper extremity reachable workspace relative surface area (RSA) versus the revised Amyotrophic Lateral Sclerosis Functional Rating Scale (ALSFRS-R), ALSFRS-R upper extremity sub-scale and Forced Vital Capacity (FVC) in a cohort of patients diagnosed with amyotrophic lateral sclerosis (ALS). Ten patients diagnosed with ALS (ages 52-76 years, ALSFRS-R: 8-41 at entry) were tested using single 3D depth sensor, Microsoft Kinect, to measure reachable workspace RSA across five visits spanning one year. Changes in RSA, ALSFRS-R, ALSFRS-R upper extremity sub-scale, and FVC were assessed using a linear mixed model. Results showed that upper lateral quadrant RSA declined significantly in one year by approximately 19% (p <0.01) while all other quadrants and total RSA did not change significantly in this time-period. Simultaneously, ALSFRS-R upper extremity sub-scale worsened significantly by 25% (p <0.01). In conclusion, upper extremity reachable workspace RSA as a novel ALS outcome measure is capable of objectively quantifying declines in upper extremity ability over time in patients with ALS with more granularity than other common outcome measures. RSA may serve as a clinical endpoint for the evaluation of upper extremity targeted therapeutics.
Controllability, Observability, Reachability, and Stability of Dynamic Linear Systems
Jackson, Billy J; Gravagne, Ian A; Marks, Robert J
2009-01-01
We develop a linear systems theory that coincides with the existing theories for continuous and discrete dynamical systems, but that also extends to linear systems defined on nonuniform time domains. The approach here is based on generalized Laplace transform methods (e.g. shifts and convolution) from our recent work [DaGrJaMaRa]. We study controllability in terms of the controllability Gramian and various rank conditions (including Kalman's) in both the time invariant and time varying settings and compare the results. We also explore observability in terms of both Gramian and rank conditions as well as realizability results. We conclude by applying this systems theory to connect exponential and BIBO stability problems in this general setting. Numerous examples are included to show the utility of these results.
Secure multi-party computation solution to Yao's millionaires' problem based on set-inclusion
LI Shundong; DAI Yiqi; YOU Qiyou
2005-01-01
Secure multi-party computation is a focus of international cryptography in recent years. Protocols for Yao's millionaires' problem have become an important building block of many secure multi-party computation protocols. Their efficiency is crucial to the efficiency of many secure multi-party computation protocols. Unfortunately, known protocols for Yao's millionaires' problem have high computational complexity or communication complexity. In this study, based on 1-out-of-m oblivious transfer and the set-inclusion problem, we propose a new protocol to solve this problem. This new protocol is very efficient in terms of both computational and communication complexities. Its privacy-preserving property is also proved by the simulation paradigm which is generally accepted in the study of secure multi-party computation. We also compare the information leakage of our new protocol and the known protocols.
A method of combining SE-tree to compute all minimal hitting sets
Anonymous
2006-01-01
In model-based diagnosis, the candidate diagnostic results are generally characterized by all minimal hitting sets for the collection of all conflict sets. In this paper, a new method is proposed to judge a hitting set by the number of conflict sets corresponding to components, and the computing procedure is formalized by combining a revised SE-tree (set enumeration tree) with closed nodes to generate all minimal hitting sets. Results show that because closed nodes are added into the SE-tree, the search efficiency is highly improved. Furthermore, the proposed method is easy to understand and implement. Compared with other effective algorithms with completeness in some experimental tests, the diagnosis efficiency of our proposed method is higher, particularly for single- and double-fault diagnosis.
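The role of minimal hitting sets in diagnosis can be illustrated with a brute-force enumerator; the paper's SE-tree with closed nodes prunes exactly this kind of search. Names and data below are illustrative:

```python
from itertools import combinations

def minimal_hitting_sets(conflict_sets):
    # enumerate candidates by increasing size; keep sets that intersect
    # every conflict set and contain no previously found (smaller) hit
    universe = sorted(set().union(*conflict_sets))
    hits = []
    for r in range(1, len(universe) + 1):
        for cand in combinations(universe, r):
            s = set(cand)
            if all(s & c for c in conflict_sets) and \
               not any(h <= s for h in hits):
                hits.append(s)
    return hits

# conflict sets from a toy diagnosis problem (illustrative only)
conflicts = [{"A", "B"}, {"B", "C"}]
mhs = minimal_hitting_sets(conflicts)       # [{"B"}, {"A", "C"}]
```

Each minimal hitting set is a candidate diagnosis: a smallest set of components whose failure explains every observed conflict.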
Level set discrete element method for three-dimensional computations with triaxial case study
Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.
2016-06-01
In this paper, we outline the level set discrete element method (LS-DEM) which is a discrete element method variant able to simulate systems of particles with arbitrary shape using level set functions as a geometric basis. This unique formulation allows seamless interfacing with level set-based characterization methods as well as computational ease in contact calculations. We then apply LS-DEM to simulate two virtual triaxial specimens generated from XRCT images of experiments and demonstrate LS-DEM's ability to quantitatively capture and predict stress-strain and volume-strain behavior observed in the experiments.
Fast computation of categorical richness on raster data sets and related problems
de Berg, Mark; Tsirogiannis, Constantinos; Wilkinson, Bryan T.
2015-01-01
In many scientific fields, it is common to encounter raster data sets consisting of categorical data, such as soil type or land usage of a terrain. A problem that arises in the presence of such data is the following: given a raster G of n cells storing categorical data, compute for every cell c...
Policy Analysis: A Tool for Setting District Computer Use Policy. Paper and Report Series No. 97.
Gray, Peter J.
This report explores the use of policy analysis as a tool for setting computer use policy in a school district by discussing the steps in the policy formation and implementation processes and outlining how policy analysis methods can contribute to the creation of effective policy. Factors related to the adoption and implementation of innovations…
Fast Deterministic Distributed Maximal Independent Set Computation on Growth-Bounded Graphs
Kuhn, Fabian; Moscibroda, Thomas; Nieberg, Tim; Wattenhofer, Roger; Fraigniaud, Pierre
2005-01-01
The distributed complexity of computing a maximal independent set in a graph is of both practical and theoretical importance. While there exists an elegant O(log n) time randomized algorithm for general graphs, no deterministic polylogarithmic algorithm is known. In this paper, we study the problem
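The elegant randomized algorithm alluded to is Luby's: in each round every active node draws a random value, local minima join the MIS, and they are removed together with their neighbors. A sequential simulation on a toy graph (a sketch, not a distributed implementation):

```python
import random

def luby_mis(adj, seed=0):
    # one simulated round per loop iteration of Luby's randomized MIS
    rng = random.Random(seed)
    active, mis = set(adj), set()
    while active:
        r = {v: rng.random() for v in active}
        winners = {v for v in active
                   if all(r[v] < r[u] for u in adj[v] if u in active)}
        mis |= winners                  # local minima join the MIS
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & active  # neighbors of winners drop out
        active -= removed
    return mis

adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}   # 5-cycle
mis = luby_mis(adj)
```

Termination is guaranteed because the globally smallest active value always wins its neighborhood, and the output is independent (no two members adjacent) and maximal (every non-member has a member neighbor).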
Iris, Cagatay; Pacino, Dario; Røpke, Stefan;
2015-01-01
. To improve the performance of the set partitioning formulations, a number of variable reduction techniques are proposed. Furthermore, we analyze the effects of different discretization schemes and the impact of using a time-variant/invariant quay crane allocation policy. Computational experiments show...
A fast method for computing the centroid of a type-2 fuzzy set.
Wu, Hsin-Jung; Su, Yao-Lung; Lee, Shie-Jue
2012-06-01
Type reduction does the work of computing the centroid of a type-2 fuzzy set. The result is a type-1 fuzzy set from which a corresponding crisp number can then be obtained through defuzzification. Type reduction is one of the major operations involved in type-2 fuzzy inference. Therefore, making type reduction efficient is a significant task in the application of type-2 fuzzy systems. Liu introduced a horizontal slice representation, called the α-plane representation, and proposed a type-reduction method for a type-2 fuzzy set. By exploring some useful properties of the α-plane representation and of the type reduction for interval type-2 fuzzy sets, a fast method is developed for computing the centroid of a type-2 fuzzy set. The number of computations and comparisons involved is greatly reduced. Convergence in each iteration can then speed up, and type reduction can be done much more efficiently. The effectiveness of the proposed method is analyzed mathematically and demonstrated by experimental results.
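The centroid [c_l, c_r] of an interval type-2 fuzzy set can be found by enumerating switch points: c_l uses upper memberships left of the switch and lower ones to its right, and c_r is the mirror image. The Karnik-Mendel style iterations that the paper accelerates converge to the same optima; the toy set below is illustrative, not the paper's α-plane method:

```python
def centroid_interval_type2(xs, lower, upper):
    # exhaustive switch-point search over all n+1 possible switch positions
    n = len(xs)
    def wavg(w):
        return sum(x * wi for x, wi in zip(xs, w)) / sum(w)
    cl = min(wavg([upper[i] if i <= k else lower[i] for i in range(n)])
             for k in range(-1, n))
    cr = max(wavg([lower[i] if i <= k else upper[i] for i in range(n)])
             for k in range(-1, n))
    return cl, cr

xs = [0.0, 1.0, 2.0]                 # primary-domain samples (toy data)
lower = [0.2, 0.5, 0.2]              # lower membership grades
upper = [0.4, 1.0, 0.4]              # upper membership grades
cl, cr = centroid_interval_type2(xs, lower, upper)
```

For this symmetric set the interval [c_l, c_r] is centered on 1.0; defuzzification would then take its midpoint as the crisp output.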
Liu, Hongbing; Liu, Chunhua; Wu, Chang-an
2014-01-01
Granular computing classification algorithms are proposed based on distance measures between two granules from the view of set. Firstly, granules are represented as the forms of hyperdiamond, hypersphere, hypercube, and hyperbox. Secondly, the distance measure between two granules is defined from the view of set, and the union operator between two granules is formed to obtain the granule set including the granules with different granularity. Thirdly the threshold of granularity determines the union between two granules and is used to form the granular computing classification algorithms based on distance measures (DGrC). The benchmark datasets in UCI Machine Learning Repository are used to verify the performance of DGrC, and experimental results show that DGrC improved the testing accuracies.
A Survey of Reachability Trees of Unbounded Petri Nets
干梦迪; 王寿光; 周孟初; 李俊; 李月
2015-01-01
In recent years both industry and academia have paid much attention to the theory and applications of Petri nets. Reachability is one of the most basic properties of a Petri net system, and many other properties can be analyzed via it. Studying the reachability of unbounded Petri nets by means of equivalent finite reachability trees remains an open problem. This research dates back 40 years, but owing to the complexity and difficulty of the problem itself, only in the last 20 years have domestic and foreign scholars, through sustained effort, gradually achieved some milestone results and partial breakthroughs. This paper reviews the contributions made toward a full solution over the past 40 years, focusing on four pioneering constructions: the finite reachability tree (FRT), the augmented reachability tree (ART), the modified reachability tree (MRT) and the new modified reachability tree (NMRT). It concludes with a discussion of directions for future research on the reachability problem of unbounded Petri nets.
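The common idea behind these finite reachability trees is Karp-Miller style acceleration: when a new marking strictly dominates one of its ancestors, the strictly growing places are set to an unbounded marker ω, which forces the tree to be finite. A minimal sketch for a one-transition unbounded net (illustrative only, not any of the surveyed constructions):

```python
OMEGA = float("inf")

def km_tree(pre, post, m0):
    # Karp-Miller style tree: accelerate to omega when a successor strictly
    # dominates a marking on its ancestor path, guaranteeing finiteness
    tree, seen = [], set()
    stack = [(tuple(m0), ())]          # (marking, ancestor chain)
    while stack:
        m, anc = stack.pop()
        if m in seen:
            continue
        seen.add(m)
        tree.append(m)
        for t in range(len(pre)):      # try every enabled transition
            if all(m[p] >= pre[t][p] for p in range(len(m))):
                new = [m[p] - pre[t][p] + post[t][p] for p in range(len(m))]
                for a in anc + (m,):   # omega-acceleration vs. ancestors
                    if all(x >= y for x, y in zip(new, a)) and \
                       any(x > y for x, y in zip(new, a)):
                        new = [OMEGA if x > y else x
                               for x, y in zip(new, a)]
                stack.append((tuple(new), anc + (m,)))
    return tree

# net: place p0 holds a control token, transition t0 pumps tokens into p1
pre = [(1, 0)]
post = [(1, 1)]
tree = km_tree(pre, post, (1, 0))      # finite despite the unbounded net
```

The place p1 is unbounded, yet the tree has only two nodes, (1, 0) and (1, ω); the ω abstraction is also why these trees cannot decide exact reachability, which is the open problem the survey discusses.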
A Forward Reachability Algorithm for Bounded Timed-Arc Petri Nets
David, Alexandre; Jacobsen, Lasse; Jacobsen, Morten
2012-01-01
Timed-arc Petri nets (TAPN) are a well-known time extension of the Petri net model and several translations to networks of timed automata have been proposed for this model. We present a direct, DBM-based algorithm for forward reachability analysis of bounded TAPNs extended with transport arcs... in the presence of monotonicity-breaking features like age invariants and inhibitor arcs. We implement the algorithm within the model-checker TAPAAL and the experimental results document an encouraging performance compared to verification approaches that translate TAPN models to UPPAAL timed automata.
Snowden, Jonathan M.; Rose, Sherri; Mortimer, Kathleen M.
2011-01-01
The growing body of work in the epidemiology literature focused on G-computation includes theoretical explanations of the method but very few simulations or examples of application. The small number of G-computation analyses in the epidemiology literature relative to other causal inference approaches may be partially due to a lack of didactic explanations of the method targeted toward an epidemiology audience. The authors provide a step-by-step demonstration of G-computation that is intended to familiarize the reader with this procedure. The authors simulate a data set and then demonstrate both G-computation and traditional regression to draw connections and illustrate contrasts between their implementation and interpretation relative to the truth of the simulation protocol. A marginal structural model is used for effect estimation in the G-computation example. The authors conclude by answering a series of questions to emphasize the key characteristics of causal inference techniques and the G-computation procedure in particular. PMID:21415029
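The step-by-step demonstration described can be mirrored on simulated data: simulate a confounded exposure, fit an outcome model, predict every subject's outcome under each exposure level, and average. The simulation parameters below are invented for illustration (the true effect is set to 2.0), and the "model" is a saturated cell mean rather than a regression:

```python
import random

random.seed(0)
# simulate: confounder W, exposure A depends on W, true effect of A is 2.0
data = []
for _ in range(20000):
    w = random.random() < 0.5
    a = random.random() < (0.7 if w else 0.3)
    y = 2.0 * a + 1.0 * w + random.gauss(0.0, 0.1)
    data.append((w, a, y))

# step 1: outcome model -- here a saturated mean within each (W, A) cell
cells = {}
for w, a, y in data:
    cells.setdefault((w, a), []).append(y)
mean = {k: sum(v) / len(v) for k, v in cells.items()}

# step 2: predict every subject's outcome under A=1 and A=0, then average
g1 = sum(mean[(w, True)] for w, _, _ in data) / len(data)
g0 = sum(mean[(w, False)] for w, _, _ in data) / len(data)
ate = g1 - g0                         # G-computation effect estimate

# contrast: the crude (confounded) difference in means is biased upward
ys1 = [y for _, a, y in data if a]
ys0 = [y for _, a, y in data if not a]
crude = sum(ys1) / len(ys1) - sum(ys0) / len(ys0)
```

The G-computation estimate recovers the true effect of 2.0, while the crude contrast is pulled toward 2.4 because W both raises Y and makes exposure more likely, illustrating the comparison the authors draw against traditional regression on observed groups.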
An Explicit Universal Gate-set for Exchange-Only Quantum Computation
Hsieh, M; Myrgren, S; Whaley, K B
2003-01-01
A single physical interaction might not be universal for quantum computation in general. It has been shown, however, that in some cases it can generate universal quantum computation over a subspace. For example, by encoding logical qubits into arrays of multiple physical qubits, a single isotropic or anisotropic exchange interaction can generate a universal logical gate-set. Recently, encoded universality for the exchange interaction was explicitly demonstrated on three-qubit arrays, the smallest nontrivial encoding. We now present the exact specification of a discrete universal logical gate-set on four-qubit arrays. We show how to implement the single qubit operations exactly with at most 3 nearest neighbor exchange operations and how to generate the encoded controlled-not with 29 parallel nearest neighbor exchange interactions or 54 serial gates, obtained from extensive numerical optimization using genetic algorithms and Nelder-Mead searches. Our gate-sequences are immediately applicable to implementations ...
A comparison between computer-controlled and set work rate exercise based on target heart rate
Pratt, Wanda M.; Siconolfi, Steven F.; Webster, Laurie; Hayes, Judith C.; Mazzocca, Augustus D.; Harris, Bernard A., Jr.
1991-01-01
Two methods are compared for observing the heart rate (HR), metabolic equivalents, and time in target HR zone (defined as the target HR ± 5 bpm) during 20 min of exercise at a prescribed intensity of the maximum working capacity. In one method, called set-work rate exercise, the information from a graded exercise test is used to select a target HR and to calculate a corresponding constant work rate that should induce the desired HR. In the other method, the work rate is controlled by a computer algorithm to achieve and maintain a prescribed target HR. It is shown that computer-controlled exercise is an effective alternative to the traditional set work rate exercise, particularly when tight control of cardiovascular responses is necessary.
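The two modes can be contrasted with a toy simulation: a first-order heart-rate model plus a proportional controller on the work rate. The model, gains, and numbers below are invented for illustration (the study used actual exercise hardware and human subjects):

```python
def simulate(target_hr, minutes=20, controlled=True):
    # toy first-order heart-rate response: HR relaxes toward a steady state
    # that is affine in work rate (assumed model, not real physiology)
    hr, work = 70.0, 100.0
    in_zone = 0
    for _ in range(minutes * 60):            # 1-second time steps
        steady = 70.0 + 0.5 * work           # assumed HR vs work-rate line
        hr += (steady - hr) / 30.0           # ~30 s time constant
        if controlled:
            work += 0.02 * (target_hr - hr)  # proportional computer control
        if abs(hr - target_hr) <= 5.0:       # target zone = target +/- 5 bpm
            in_zone += 1
    return in_zone / (minutes * 60.0)

f_on = simulate(140.0, controlled=True)      # computer-controlled work rate
f_off = simulate(140.0, controlled=False)    # fixed, mis-set work rate
```

When the constant work rate is mis-prescribed, the fixed-rate session never reaches the target zone, while the feedback-controlled session spends most of the 20 minutes in it, the qualitative advantage the study reports for tight cardiovascular control.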
EVOLVE : a Bridge between Probability, Set Oriented Numerics and Evolutionary Computation
Tantar, Alexandru-Adrian; Bouvry, Pascal; Moral, Pierre; Legrand, Pierrick; Coello, Carlos; Schütze, Oliver; EVOLVE 2011
2013-01-01
The aim of this book is to provide a strong theoretical support for understanding and analyzing the behavior of evolutionary algorithms, as well as for creating a bridge between probability, set-oriented numerics and evolutionary computation. The volume encloses a collection of contributions that were presented at the EVOLVE 2011 international workshop, held in Luxembourg, May 25-27, 2011, coming from invited speakers and also from selected regular submissions. The aim of EVOLVE is to unify the perspectives offered by probability, set oriented numerics and evolutionary computation. EVOLVE focuses on challenging aspects that arise at the passage from theory to new paradigms and practice, elaborating on the foundations of evolutionary algorithms and theory-inspired methods merged with cutting-edge techniques that ensure performance guarantee factors. EVOLVE is also intended to foster a growing interest for robust and efficient methods with a sound theoretical background. The chapters enclose challenging theoret...
Iachini, Tina; Ruggiero, Gennaro; Ruotolo, Francesco; Schiano di Cola, Armando; Senese, Vincenzo Paolo
2015-09-01
Although the effects of several personality factors on interpersonal space (i.e. social space within personal comfort area) are well documented, it is not clear whether they also extend to peripersonal space (i.e. reaching space). Indeed, no study has directly compared these spaces in relation to personality and anxiety factors even though such a comparison would help to clarify to what extent they share similar mechanisms and characteristics. The aim of the present paper was to investigate whether personality dimensions and anxiety levels are associated with reaching and comfort distances. Seventy university students (35 females) were administered the Big Five Questionnaire and the State-Trait Anxiety Inventory; afterwards, they had to provide reachability- and comfort-distance judgments towards human confederates while standing still (passive) or walking towards them (active). The correlation analyses showed that both spaces were positively related to anxiety and negatively correlated with the Dynamism in the active condition. Moreover, in the passive condition higher Emotional Stability was related to shorter comfort distance, while higher cognitive Openness was associated with shorter reachability distance. The implications of these results are discussed.
Atul Sharma
2015-05-01
Functions and conservation as well as subsidiary equations in Level Set Method (LSM) are presented. After the mathematical formulation, improvements in the numerical methodology for LSM are reviewed here for advection schemes, reinitialization methods, hybrid methods, adaptive-grid LSM, dual-resolution LSM, sharp-interface LSM, conservative LSM, parallel computing and extension from two to multi fluid/phase as well as to various types of two-phase flow. In the second part of this article, LSM method based Computational Multi-Fluid Dynamics (CMFD) applications and analysis are reviewed for four different types of multi-phase flow: separated and parallel internal flow, drop/bubble dynamics during jet break-up, drop impact dynamics on a solid or liquid surface and boiling. In the last twenty years, LSM has established itself as a method which is easy to program and is accurate as well as computationally-efficient.
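Every LSM variant the review surveys builds on an advection step for the level-set function. As a minimal illustration only, far simpler than the high-order and reinitialized schemes discussed above, here is a first-order upwind scheme transporting a 1-D level-set function at constant velocity; all parameter values are illustrative.

```python
def advect_level_set(phi, vel, dx, dt, steps):
    """First-order upwind advection of a 1-D level-set function phi
    by a constant velocity vel. Boundary cells are held fixed."""
    phi = list(phi)
    n = len(phi)
    for _ in range(steps):
        new = phi[:]
        for i in range(1, n - 1):
            if vel > 0:
                dphi = (phi[i] - phi[i - 1]) / dx   # backward difference
            else:
                dphi = (phi[i + 1] - phi[i]) / dx   # forward difference
            new[i] = phi[i] - vel * dt * dphi
        phi = new
    return phi
```

Starting from a signed-distance function whose zero crossing marks the interface, the zero crossing moves with the velocity field, which is exactly the mechanism LSM uses to track two-phase interfaces.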
3D-CT vascular setting protocol using computer graphics for the evaluation of maxillofacial lesions
CAVALCANTI Marcelo de Gusmão Paraiso
2001-01-01
In this paper we present the aspect of a mandibular giant cell granuloma in spiral computed tomography-based three-dimensional (3D-CT) reconstructed images using computer graphics, and demonstrate the importance of the vascular protocol in permitting better diagnosis, visualization and determination of the dimensions of the lesion. We analyzed 21 patients with maxillofacial lesions of neoplastic and proliferative origins. Two oral and maxillofacial radiologists analyzed the images. The usefulness of interactive 3D images reconstructed by means of computer graphics, especially using a vascular setting protocol for qualitative and quantitative analyses for the diagnosis, determination of the extent of lesions, treatment planning and follow-up, was demonstrated. The technique is an important adjunct to the evaluation of lesions in relation to axial CT slices and 3D-CT bone images.
Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments
Lane, Peter C. R.; Gobet, Fernand
2013-03-01
Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the `speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
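The core of any non-dominated sorting genetic algorithm is the partition of candidate parameter sets into successive Pareto fronts by fit quality. A minimal sketch of that partition follows (minimisation convention; the speciation machinery of the paper's proposed algorithm is omitted):

```python
def dominates(a, b):
    """True if a dominates b (minimisation): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Partition points (tuples of objective values) into successive
    non-dominated fronts, NSGA-II style."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

The first front contains the parameter sets no other set beats on every dataset simultaneously, which is precisely the notion of "acceptable model parameters across different datasets" the abstract appeals to.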
Craske, Michelle G; Rose, Raphael D; Lang, Ariel; Welch, Stacy Shaw; Campbell-Sills, Laura; Sullivan, Greer; Sherbourne, Cathy; Bystritsky, Alexander; Stein, Murray B; Roy-Byrne, Peter P
2009-01-01
This article describes a computer-assisted cognitive behavioral therapy (CBT) program designed to support the delivery of evidenced-based CBT for the four most commonly occurring anxiety disorders (panic disorder, posttraumatic stress disorder, generalized anxiety disorder, and social anxiety disorder) in primary-care settings. The purpose of the current report is to (1) present the structure and format of the computer-assisted CBT program, and (2) to present evidence for acceptance of the program by clinicians and the effectiveness of the program for patients. Thirteen clinicians using the computer-assisted CBT program with patients in our ongoing Coordinated Anxiety Learning and Management study provided Likert-scale ratings and open-ended responses about the program. Rating scale data from 261 patients who completed at least one CBT session were also collected. Overall, the program was highly rated and modally described as very helpful. Results indicate that the patients fully participated (i.e., attendance and homework compliance), understood the program material, and acquired CBT skills. In addition, significant and substantial improvements occurred to the same degree in randomly audited subsets of each of the four primary anxiety disorders (N=74), in terms of self ratings of anxiety, depression, and expectations for improvement. Computer-assisted CBT programs provide a practice-based system for disseminating evidence-based mental health treatment in primary-care settings while maintaining treatment fidelity, even in the hands of novice clinicians. (c) 2009 Wiley-Liss, Inc.
Accurate Computation of Periodic Regions' Centers in the General M-Set with Integer Index Number
Wang Xingyuan
2010-01-01
This paper presents two methods for accurately computing the centers of periodic regions. One method is suited to general M-sets with integer index number, the other to general M-sets with negative integer index number. Both methods improve the precision of the computation by transforming the polynomial equations that determine the periodic regions' centers. We primarily discuss the general M-sets with negative integer index, and analyze the relationship between the number of periodic regions' centers on the principal symmetric axis and in the principal symmetric interior. By applying Newton's method to the transformed polynomial equation that determines the periodic regions' centers, we can obtain the centers' coordinates with at least 48 significant digits after the decimal point in both the real and imaginary parts. In this paper, we list some centers' coordinates of the general M-sets' k-periodic regions (k = 3, 4, 5, 6) for the index numbers α = −25, −24, …, −1, all of which have high numerical accuracy.
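For orientation, the classical index-2 case works the same way: period-k centers are roots of Q_k(c) = f_c^k(0) with f_c(z) = z² + c, and Newton's method needs dQ_k/dc, which can be iterated alongside z. The sketch below covers only the standard Mandelbrot set, not the paper's general M-sets or its transformed equations, and works in ordinary double precision rather than the 48-digit arithmetic reported above.

```python
def Q_and_dQ(c, k):
    """Evaluate Q_k(c) = f_c^k(0) and its derivative dQ_k/dc
    for f_c(z) = z*z + c, by iterating both recurrences together."""
    z, dz = 0j, 0j
    for _ in range(k):
        z, dz = z * z + c, 2 * z * dz + 1
    return z, dz

def center(c0, k, tol=1e-14, itmax=100):
    """Newton's method on Q_k(c) = 0 starting from c0: each converged
    root is the center of a period-k component."""
    c = c0
    for _ in range(itmax):
        q, dq = Q_and_dQ(c, k)
        step = q / dq
        c -= step
        if abs(step) < tol:
            break
    return c
```

For example, the period-1 center is c = 0 and the period-2 center is c = −1; starting Newton near a component reliably lands on its center.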
Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra
2006-01-01
Ubiquitous computing requires ubiquitous access to information and knowledge. With the release of openEHR Version 1.0 there is a common model available to solve some of the problems related to accessing information and knowledge by improving semantic interoperability between clinical systems. Considerable work has been undertaken by various bodies to standardise Clinical Data Sets. Notwithstanding their value, several problems remain unsolved with Clinical Data Sets without the use of a common model underpinning them. This paper outlines these problems, such as incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution based on openEHR archetypes is motivated and an approach to transform existing Clinical Data Sets into archetypes is presented. To avoid significant overlaps and unnecessary effort, archetype development needs to be coordinated nationwide and beyond, and also across the various health professions, in a formalized process.
Process Auxiliary Decision-making Based on Rough Sets and Regulation Distance Computing
FANG Hui; TAN Jianrong; YIN Guofu; LI Zhongkai
2009-01-01
Computer aided process planning (CAPP) is an important part of computer integrated manufacturing (CIM), and making it intelligent is the direction in which CAPP is developing. Process planning is empirical and time-consuming to finalize, and the same technical aim can often be achieved by different process schemes, so intelligent process decision making has always been a difficult point of CAPP and CIM. For the purpose of intelligent aided process decision making and reuse of process resources, this paper proposes a decision making method based on rough sets (RS) and regulation distance computing. The main contents and methods of process planning decision making are analyzed for an agile-response manufacturing environment, the concept of a process knowledge granule is introduced, and methods for partitioning process knowledge granules and analyzing their granularity are put forward. Based on RS theory, combined with identification of the importance of process attributes, the paper puts forward a computing model for the regulation distance between process schemes under the same attribute conditions, and introduces a conflict resolution strategy to obtain a process scheme that fits the actual state of the enterprise's manufacturing resources. This realizes conflict resolution for process resources and rapid mining and reuse of the enterprise's existing process knowledge, advancing process decision making and improving the rationality and agile-response capability of process planning.
Blocksome, Michael A.
2011-12-20
Methods, apparatus, and products are disclosed for determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation that includes, for each compute node in the set: initializing a barrier counter with no counter underflow interrupt; configuring, upon entering the barrier operation, the barrier counter with a value in dependence upon a number of compute nodes in the set; broadcasting, by a DMA engine on the compute node to each of the other compute nodes upon entering the barrier operation, a barrier control packet; receiving, by the DMA engine from each of the other compute nodes, a barrier control packet; modifying, by the DMA engine, the value for the barrier counter in dependence upon each of the received barrier control packets; exiting the barrier operation if the value for the barrier counter matches the exit value.
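The counting logic of such a barrier can be mocked up in software: each node tracks how many control packets it still expects and may exit once it has both entered the barrier and heard from every peer. The sketch below is schematic only; plain method calls stand in for the DMA engines and network transport of the patented design, and all names are invented.

```python
class Node:
    """Toy model of one compute node in a counting barrier."""

    def __init__(self, node_id, world):
        self.id = node_id
        self.world = world        # shared list of all participating nodes
        self.counter = None       # barrier counter, lazily initialized
        self.in_barrier = False

    def enter_barrier(self):
        # Configure the counter with the number of *other* nodes we must
        # hear from, unless early packets already initialized it.
        if self.counter is None:
            self.counter = len(self.world) - 1
        self.in_barrier = True
        for peer in self.world:   # broadcast a barrier control packet
            if peer is not self:
                peer.receive_packet()

    def receive_packet(self):
        # Packets may arrive before this node enters the barrier;
        # they must still be counted.
        if self.counter is None:
            self.counter = len(self.world) - 1
        self.counter -= 1

    def may_exit(self):
        # Exit condition: we entered, and the counter reached its exit value.
        return self.in_barrier and self.counter == 0
```

Note the key subtlety the claim language covers: a node can receive control packets before it enters the barrier, so the counter must tolerate out-of-order arrivals rather than assume entry happens first.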
Stephane Grade
2015-06-01
The perception of reachability (i.e., whether an object is within reach) relies on body representations and action simulation. Similarly, egocentric distance estimation (i.e., the perception of the distance of an object from the self) is thought to be partly derived from embodied action simulation. Although motor simulation is important for both, it is unclear whether the cognitive processes underlying these behaviors rely on the same motor processes. To investigate this, we measured the impact of a motor interference dual-task paradigm on reachability judgment, egocentric distance estimation, and allocentric length estimation (i.e., how distant two stimuli are from each other independent of the self), used as a control task. Participants were required to make concurrent actions with either hand actions of foam ball grip squeezing or arm actions of weight lifting, or no concurrent actions. Results showed that concurrent squeeze actions significantly slowed response speed in the reachability judgment and egocentric distance estimation tasks, but that there was no impact of the concurrent actions on allocentric length estimation. Together, these results suggest that reachability and distance perception, both egocentric perspective tasks, in contrast to the allocentric perspective task, involve action simulation cognitive processes. The results are discussed in terms of the implication of action simulation when evaluating the position of a target relative to the observer's body, supporting an embodied view of spatial cognition.
Zhang, Yi-Qing; Cui, Jing; Zhang, Shu-Min; Zhang, Qi; Li, Xiang
2016-02-01
Modelling temporal networks of human face-to-face contacts is vital both for understanding the spread of airborne pathogens and the word-of-mouth spreading of information. Although many efforts have been devoted to modelling these temporal networks, two important social features, public activity and individual reachability, have been ignored in these models. Here we present a simple model that captures these two features and other typical properties of empirical face-to-face contact networks. The model describes agents which are characterized by an attractiveness that slows down the motion of nearby people, have an event-triggered active probability, and perform an activity-dependent biased random walk in a square box with periodic boundary. The model quantitatively reproduces two empirical temporal networks of human face-to-face contacts, as testified by their network properties and the epidemic spread dynamics on them.
Towards Symbolic Model-Based Mutation Testing: Combining Reachability and Refinement Checking
Aichernig, Bernhard K; 10.4204/EPTCS.80.7
2012-01-01
Model-based mutation testing uses altered test models to derive test cases that are able to reveal whether a modelled fault has been implemented. This requires conformance checking between the original and the mutated model. This paper presents an approach for symbolic conformance checking of action systems, which are well-suited to specify reactive systems. We also consider nondeterminism in our models. Hence, we do not check for equivalence, but for refinement. We encode the transition relation as well as the conformance relation as a constraint satisfaction problem and use a constraint solver in our reachability and refinement checking algorithms. Explicit conformance checking techniques often face state space explosion. First experimental evaluations show that our approach has potential to outperform explicit conformance checkers.
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M; Chughtai, Aamer; Patel, Smita; Wei, Jun; Cascade, Philip N; Kazerooni, Ella A
2009-08-01
The authors are developing a computer-aided detection system for pulmonary emboli (PE) in computed tomographic pulmonary angiography (CTPA) scans. The pulmonary vessel tree is extracted using a 3D expectation-maximization segmentation method based on the analysis of eigenvalues of Hessian matrices at multiple scales. A parallel multiprescreening method is applied to the segmented vessels to identify volumes of interest (VOIs) that contain suspicious PE. A linear discriminant analysis (LDA) classifier with feature selection is designed to reduce false positives (FPs). Features that characterize the contrast, gray level, and size of PE are extracted as input predictor variables to the LDA classifier. With the IRB approval, 59 CTPA PE cases were collected retrospectively from the patient files (UM cases). With access permission, 69 CTPA PE cases were randomly selected from the data set of the prospective investigation of pulmonary embolism diagnosis (PIOPED) II clinical trial. Extensive lung parenchymal or pleural diseases were present in 22/59 UM and 26/69 PIOPED cases. Experienced thoracic radiologists manually marked 595 and 800 PE as the reference standards in the UM and PIOPED data sets, respectively. PE occlusion of arteries ranged from 5% to 100%, with PE located from the main pulmonary artery to the subsegmental artery levels. Of the 595 PE identified in the UM cases, 245 and 350 PE were located in the subsegmental arteries and the more proximal arteries, respectively. The detection performance was assessed by free response ROC (FROC) analysis. The FROC analysis indicated that the PE detection system could achieve an overall sensitivity of 80% at 18.9 FPs/case for the PIOPED cases when the LDA classifier was trained with the UM cases. The test sensitivity with the UM cases was 80% at 22.6 FPs/case when the LDA classifier was trained with the PIOPED cases. The detection performance depended on the arterial level where the PE was located and on the
Study of movement coordination in human ensembles via a novel computer-based set-up
Alderisio, Francesco; Fiore, Gianfranco; di Bernardo, Mario
2016-01-01
Movement coordination in human ensembles has been studied little in the current literature. In the existing experimental works, situations where all subjects are connected with each other through direct visual and auditory coupling, and social interaction affects their coordination, have been investigated. Here, we study coordination in human ensembles via a novel computer-based set-up that enables individuals to coordinate each other's motion from a distance so as to minimize the influence of social interaction. The proposed platform makes it possible to implement different visual interaction patterns among the players, so that participants take into consideration the motion of a designated subset of the others. This allows the evaluation of the exclusive effects on coordination of the structure of interconnections among the players and their own dynamics. Our set-up enables also the deployment of virtual players to investigate dyadic interaction between a human and a virtual agent, as well as group synchron...
The use of computer simulations in whole-class versus small-group settings
Smetana, Lara Kathleen
This study explored the use of computer simulations in a whole-class as compared to small-group setting. Specific consideration was given to the nature and impact of classroom conversations and interactions when computer simulations were incorporated into a high school chemistry course. This investigation fills a need for qualitative research that focuses on the social dimensions of actual classrooms. Participants included a novice chemistry teacher experienced in the use of educational technologies and two honors chemistry classes. The study was conducted in a rural school in the south-Atlantic United States at the end of the fall 2007 semester. The study took place during one instructional unit on atomic structure. Data collection allowed for triangulation of evidence from a variety of sources: approximately 24 hours of video- and audio-taped classroom observations, supplemented with the researcher's field notes and analytic journal; miscellaneous classroom artifacts such as class notes, worksheets, and assignments; open-ended pre- and post-assessments; student exit interviews; teacher entrance, exit and informal interviews. Four web-based simulations were used, three of which were from the ExploreLearning collection. Assessments were analyzed using descriptive statistics and classroom observations, artifacts and interviews were analyzed using Erickson's (1986) guidelines for analytic induction. Conversational analysis was guided by methods outlined by Erickson (1982). Findings indicated (a) the teacher effectively incorporated simulations in both settings; (b) students in both groups significantly improved their understanding of the chemistry concepts; (c) there was no statistically significant difference between groups' achievement; (d) there was more frequent exploratory talk in the whole-class group; (e) there were more frequent and meaningful teacher-student interactions in the whole-class group; (f) additional learning experiences not measured on the assessment
Huygens Flavia
2007-08-01
Background: Single nucleotide polymorphisms (SNPs) and genes that exhibit presence/absence variation have provided informative marker sets for bacterial and viral genotyping. Identification of marker sets optimised for these purposes has been based on maximal generalized discriminatory power as measured by Simpson's Index of Diversity, or on the ability to identify specific variants. Here we describe the Not-N algorithm, which is designed to identify small sets of genetic markers diagnostic for user-specified subsets of known genetic variants. The algorithm does not treat the user-specified subset and the remaining genetic variants equally. Rather, Not-N analysis is designed to underpin assays that provide 0% false negatives, which is very important for e.g. diagnostic procedures for clinically significant subgroups within microbial species. Results: The Not-N algorithm has been incorporated into the "Minimum SNPs" computer program and used to derive genetic markers diagnostic for multilocus sequence typing-defined clonal complexes, hepatitis C virus (HCV) subtypes, and phylogenetic clades defined by comparative genome hybridization (CGH) data for Campylobacter jejuni, Yersinia enterocolitica and Clostridium difficile. Conclusion: Not-N analysis is effective for identifying small sets of genetic markers diagnostic for microbial sub-groups. The best results to date have been obtained with CGH data from several bacterial species, and HCV sequence data.
A Computable Plug-In Estimator of Minimum Volume Sets for Novelty Detection
Park, Chiwoo
2010-10-01
A minimum volume set of a probability density is a region of minimum size among the regions covering a given probability mass of the density. Effective methods for finding the minimum volume sets are very useful for detecting failures or anomalies in commercial and security applications, a problem known as novelty detection. One theoretical approach of estimating the minimum volume set is to use a density level set where a kernel density estimator is plugged into the optimization problem that yields the appropriate level. Such a plug-in estimator is not of practical use because solving the corresponding minimization problem is usually intractable. A modified plug-in estimator was proposed by Hyndman in 1996 to overcome the computation difficulty of the theoretical approach but is not well studied in the literature. In this paper, we provide theoretical support to this estimator by showing its asymptotic consistency. We also show that this estimator is very competitive to other existing novelty detection methods through an extensive empirical study. ©2010 INFORMS.
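A toy 1-D version of the plug-in idea: estimate the density with a kernel estimator on a grid, then accumulate the highest-density cells until the requested probability mass is covered; the density of the last cell taken approximates the level. This greedy grid pass is a crude stand-in for solving the level equation, and all parameter choices (bandwidth, grid, names) are illustrative, not from the paper.

```python
import math

def gauss_kde(data, x, h):
    """Gaussian kernel density estimate at x with bandwidth h."""
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) \
        / (len(data) * h * math.sqrt(2 * math.pi))

def min_volume_set_1d(data, mass=0.9, h=0.3, lo=-5.0, hi=5.0, n=2001):
    """Plug-in style estimate of a minimum volume set in 1-D: take grid
    cells in decreasing density order until the target mass is reached;
    return the implied density level and the covered interval's extent."""
    dx = (hi - lo) / (n - 1)
    xs = [lo + i * dx for i in range(n)]
    fs = [gauss_kde(data, x, h) for x in xs]
    order = sorted(range(n), key=lambda i: -fs[i])
    acc, chosen = 0.0, []
    for i in order:
        acc += fs[i] * dx            # Riemann mass of this cell
        chosen.append(xs[i])
        if acc >= mass:
            break
    level = fs[order[len(chosen) - 1]]
    return level, (min(chosen), max(chosen))
```

For a unimodal sample the chosen cells form one interval around the mode, the density level set {x : f(x) ≥ λ} of the appropriate λ; for multimodal data the same greedy pass naturally returns a union of intervals.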
Baum, Karl G; Helguera, María
2007-11-01
SimSET is a package for simulation of emission tomography data sets. Condor is a popular distributed computing environment. Simple C/C++ applications and shell scripts are presented which allow the execution of SimSET on the Condor environment. This is accomplished without any modification to SimSET by executing multiple instances and using its combinebin utility. This enables research facilities without dedicated parallel computing systems to utilize the idle cycles of desktop workstations to greatly reduce the run times of their SimSET simulations. The necessary steps to implement this approach in other environments are presented along with sample results.
A new algorithm for computing the convex hull of a planar point set
Anonymous
2007-01-01
When the edges of a convex polygon are traversed along one direction, the interior of the convex polygon is always on the same side of the edges. Based on this characteristic of convex polygons, a new algorithm for computing the convex hull of a simple polygon is proposed in this paper, which is then extended to a new algorithm for computing the convex hull of a planar point set. First, the extreme points of the planar point set are found, and the subsets of point candidate for vertex of the convex hull between extreme points are obtained. Then, the ordered convex hull point sequences between extreme points are constructed separately and concatenated by removing redundant extreme points to get the convex hull. The time complexity of the new planar convex hull algorithm is O(nlogh), which is equal to the time complexity of the best output-sensitive planar convex hull algorithms. Compared with the algorithm having the same complexity, the new algorithm is much faster.
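One concrete reading of the outline (build ordered chains between extreme points, then concatenate them) resembles the familiar monotone-chain construction; the paper's own candidate-subset machinery and O(nlogh) analysis differ in detail. A sketch using only the leftmost and rightmost extreme points:

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o); positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Build the hull as two ordered chains between the leftmost and
    rightmost points, then concatenate them (counterclockwise order)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # lower chain, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # upper chain, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Each chain's last point repeats the other chain's first; drop it.
    return lower[:-1] + upper[:-1]
```

The while-loops enforce exactly the property the abstract starts from: traversing hull edges in one direction keeps the interior on one side, so any point causing a wrong turn is removed from the chain.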
An efficient ERP-based brain-computer interface using random set presentation and face familiarity.
Seul-Ki Yeom
Event-related potential (ERP)-based P300 spellers are commonly used in the field of brain-computer interfaces as an alternative channel of communication for people with severe neuro-muscular diseases. This study introduces a novel P300-based brain-computer interface (BCI) stimulus paradigm using a random set presentation pattern and exploiting the effects of face familiarity. The effect of face familiarity is widely studied in the cognitive neurosciences and has recently been addressed for the purpose of BCI. In this study we compare P300-based BCI performances of a conventional row-column (RC) paradigm with our approach that combines a random set presentation paradigm with (non-)self-face stimuli. Our experimental results indicate stronger deflections of the ERPs in response to face stimuli, which are further enhanced when using the self-face images, thereby improving P300-based spelling performance. This led to a significant reduction in the number of stimulus sequences required for correct character classification. These findings demonstrate a promising new approach for improving the speed, and thus fluency, of BCI-enhanced communication with the widely used P300-based BCI setup.
Dynamics and control of trajectory tubes theory and computation
Kurzhanski, Alexander B
2014-01-01
This monograph presents theoretical methods involving the Hamilton–Jacobi–Bellman formalism in conjunction with set-valued techniques of nonlinear analysis to solve significant problems in dynamics and control. The emphasis is on issues of reachability, feedback control synthesis under complex state constraints, hard or double bounds on controls, and performance in finite time. Guaranteed state estimation, output feedback control, and hybrid dynamics are also discussed. Although the focus is on systems with linear structure, the authors indicate how to apply each approach to nonlinear and nonconvex systems. The main theoretical results lead to computational schemes based on extensions of ellipsoidal calculus that provide complete solutions to the problems. These computational schemes in turn yield software tools that can be applied effectively to high-dimensional systems. Dynamics and Control of Trajectory Tubes: Theory and Computation will interest graduate and senior undergraduate students, as well as...
Chinem, Lillian Atsumi Simabuguro; Vilella, Beatriz de Souza; Maurício, Cláudia Lúcia de Pinho; Canevaro, Lucia Viviana; Deluiz, Luiz Fernando; Vilella, Oswaldo de Vasconcellos
2016-01-01
ABSTRACT Objective: The aim of this study was to compare the equivalent and effective doses of different digital radiographic methods (panoramic, lateral cephalometric and periapical) with cone-beam computed tomography (CBCT). Methods: Precalibrated thermoluminescent dosimeters were placed at 24 locations in an anthropomorphic phantom (Alderson Rando Phantom, Alderson Research Laboratories, New York, NY, USA), representing a medium sized adult. The following devices were tested: Heliodent Plus (Sirona Dental Systems, Bernsheim, Germany), Orthophos XG 5 (Sirona Dental Systems, Bernsheim, Germany) and i-CAT (Imaging Sciences International, Hatfield, PA, USA). The equivalent doses and effective doses were calculated considering the recommendations of the International Commission of Radiological Protection (ICRP) issued in 1990 and 2007. Results: Although the effective dose of the radiographic set corresponded to 17.5% (ICRP 1990) and 47.2% (ICRP 2007) of the CBCT dose, the equivalent doses of skin, bone surface and muscle obtained by the radiographic set were higher when compared to CBCT. However, in some areas, the radiation produced by the orthodontic set was higher due to the complete periapical examination. Conclusion: Considering the optimization principle of radiation protection, i-CAT tomography should be used only in specific and justified circumstances. Additionally, following the ALARA principle, single periapical radiographies covering restricted areas are more suitable than the complete periapical examination. PMID:27653266
Fast computation of categorical richness on raster data sets and related problems
de Berg, Mark; Tsirogiannis, Constantinos; Wilkinson, Bryan T.
2015-01-01
In many scientific fields, it is common to encounter raster data sets consisting of categorical data, such as soil type or land usage of a terrain. A problem that arises in the presence of such data is the following: given a raster G of n cells storing categorical data, compute for every cell c … of millions of cells. The categorical richness problem is related to colored range counting, where the goal is to preprocess a colored point set such that we can efficiently count the number of colors appearing inside a query range. We present a data structure for colored range counting in R^2 for the case … that runs in O(n) time and one for circular windows that runs in O((1+K/r)n) time, where K is the number of different categories appearing in G. The algorithms are not only very efficient in theory, but also in practice: our experiments show that our algorithms can handle raster data sets of hundreds …
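The quantity being computed can be pinned down with a naive counter: for every cell, collect the distinct categories appearing in a square window around it. This quadratic-per-window sketch only defines the problem; the data structures above are what make it feasible on rasters of millions of cells, and the square-window form here is an assumption for illustration.

```python
def categorical_richness(grid, r):
    """For every cell of a categorical raster, count the distinct
    categories in the (2r+1) x (2r+1) window centred on it, clipped
    at the raster boundary. Naive O(n * r^2) reference implementation."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            cats = {grid[y][x]
                    for y in range(max(0, i - r), min(rows, i + r + 1))
                    for x in range(max(0, j - r), min(cols, j + r + 1))}
            out[i][j] = len(cats)
    return out
```

A 3x3 raster with categories 1, 2, 3 already shows the output: the centre cell sees all three categories, while corner cells see only the categories in their clipped windows.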
Tesch, Carmen M; de Vivie-Riedle, Regina
2004-12-22
The phase of quantum gates is one key issue for the implementation of quantum algorithms. In this paper we first investigate the phase evolution of global molecular quantum gates, which are realized by optimally shaped femtosecond laser pulses. The specific laser fields are calculated using the multitarget optimal control algorithm, our modification of the optimal control theory relevant for application in quantum computing. As qubit system we use vibrational modes of polyatomic molecules, here the two IR-active modes of acetylene. Exemplarily, we present our results for a Pi gate, which shows a strong dependence on the phase, leading to a significant decrease in quantum yield. To correct for this unwanted behavior we include pressure on the quantum phase in our multitarget approach. In addition the accuracy of these phase corrected global quantum gates is enhanced. Furthermore we could show that in our molecular approach phase corrected quantum gates and basis set independence are directly linked. Basis set independence is also another property highly required for the performance of quantum algorithms. By realizing the Deutsch-Jozsa algorithm in our two qubit molecular model system, we demonstrate the good performance of our phase corrected and basis set independent quantum gates.
Automatic Generation of Minimal Cut Sets
Sentot Kromodimoeljo
2015-06-01
A cut set is a collection of component failure modes that could lead to a system failure. Cut Set Analysis (CSA) is applied to critical systems to identify and rank system vulnerabilities at design time. Model checking tools have been used to automate the generation of minimal cut sets but are generally based on checking reachability of system failure states. This paper describes a new approach to CSA using a Linear Temporal Logic (LTL) model checker called BT Analyser that supports the generation of multiple counterexamples. The approach enables a broader class of system failures to be analysed, by generalising from failure state formulae to failure behaviours expressed in LTL. The traditional approach to CSA using model checking requires the model or system failure to be modified, usually by hand, to eliminate already-discovered cut sets, and the model checker to be rerun, at each step. By contrast, the new approach works incrementally and fully automatically, thereby removing the tedious and error-prone manual process and resulting in significantly reduced computation time. This in turn enables larger models to be checked. Two different strategies for using BT Analyser for CSA are presented. There is generally no single best strategy for model checking: their relative efficiency depends on the model and property being analysed. Comparative results are given for the A320 hydraulics case study in the Behavior Tree modelling language.
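The incremental flavour of minimal-cut-set generation can be illustrated with a toy enumerator over a monotone failure predicate; this sketch is ours and only illustrates the idea of skipping already-covered supersets, not how BT Analyser works internally (which operates on LTL counterexamples):

```python
from itertools import combinations

def minimal_cut_sets(components, system_fails):
    """Enumerate minimal cut sets of a monotone failure predicate by
    trying subsets in increasing size and skipping any superset of an
    already-found cut set (so every hit is minimal)."""
    mcs = []
    for k in range(1, len(components) + 1):
        for combo in combinations(components, k):
            s = set(combo)
            if any(m <= s for m in mcs):
                continue  # superset of a known minimal cut set
            if system_fails(s):
                mcs.append(s)
    return mcs
```

For a system that fails when C fails, or when A and B both fail, this returns [{"C"}, {"A", "B"}].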
Fedkiw, R P
2003-01-01
In this paper we review the algorithm development and applications in high resolution shock capturing methods, level set methods, and PDE based methods in computer vision and image processing. The emphasis is on Stanley Osher's contribution in these areas and the impact of his work. We will start with shock capturing methods and will review the Engquist-Osher scheme, TVD schemes, entropy conditions, ENO and WENO schemes, and numerical schemes for Hamilton-Jacobi type equations. Among level set methods we will review level set calculus, numerical techniques, fluids and materials, variational approach, high codimension motion, geometric optics, and the computation of discontinuous solutions to Hamilton-Jacobi equations. Among computer vision and image processing we will review the total variation model for image denoising, images on implicit surfaces, and the level set method in image processing and computer vision.
EVOLVE : a Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation II
Coello, Carlos; Tantar, Alexandru-Adrian; Tantar, Emilia; Bouvry, Pascal; Moral, Pierre; Legrand, Pierrick; EVOLVE 2012
2013-01-01
This book comprises a selection of papers from EVOLVE 2012, held in Mexico City, Mexico. The aim of EVOLVE is to build a bridge between probability, set oriented numerics and evolutionary computing, so as to identify new common and challenging research aspects. The conference is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. EVOLVE is intended to unify theory-inspired methods and cutting-edge techniques ensuring performance guarantee factors. By gathering researchers with different backgrounds, a unified view and vocabulary can emerge in which theoretical advancements may echo in different domains. In summary, EVOLVE focuses on challenging aspects arising at the passage from theory to new paradigms and aims to provide a unified view while raising questions related to reliability, performance guarantees and modeling. The papers of EVOLVE 2012 make a contribution to this goal.
A nonclassical symbolic theory of working memory, mental computations, and mental set
Eliashberg, Victor
2009-01-01
The paper tackles four basic questions associated with the human brain as a learning system. How can the brain learn to (1) mentally simulate different external memory aids, (2) perform, in principle, any mental computations using imaginary memory aids, (3) recall real sensory and motor events and synthesize a combinatorial number of imaginary events, and (4) dynamically change its mental set to match a combinatorial number of contexts? We propose a uniform answer to (1)-(4) based on the general postulate that the human neocortex processes symbolic information in a "nonclassical" way. Instead of manipulating symbols in a read/write memory, as the classical symbolic systems do, it manipulates the states of dynamical memory representing different temporary attributes of immovable symbolic structures stored in a long-term memory. The approach is formalized as the concept of E-machine. Intuitively, an E-machine is a system that deals mainly with characteristic functions representing subsets of memory pointers rather ...
Review: High-performance computing to detect epistasis in genome scale data sets.
Upton, Alex; Trelles, Oswaldo; Cornejo-García, José Antonio; Perkins, James Richard
2016-05-01
It is becoming clear that most human diseases have a complex etiology that cannot be explained by single nucleotide polymorphisms (SNPs) or simple additive combinations; the general consensus is that they are caused by combinations of multiple genetic variations. The limited success of some genome-wide association studies is partly a result of this focus on single genetic markers. A more promising approach is to take into account epistasis, by considering the association of multiple SNP interactions with disease. However, as genomic data continues to grow in resolution, and genome and exome sequencing become more established, the number of combinations of variants to consider increases rapidly. Two potential solutions should be considered: the use of high-performance computing, which allows us to consider a larger number of variables, and heuristics to make the solution more tractable, essential in the case of genome sequencing. In this review, we look at different computational methods to analyse epistatic interactions within disease-related genetic data sets created by microarray technology. We also review efforts to use epistatic analysis results to produce biomarkers for diagnostic tests and give our views on future directions in this field in light of advances in sequencing technology and variants in non-coding regions.
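As a toy illustration of why exhaustive epistasis scans explode combinatorially, a pairwise scan over all SNP pairs might look like the sketch below; the chi-square-style score and every name here are our own illustrative choices, not those of any tool in the review:

```python
from itertools import combinations
import numpy as np

def pairwise_epistasis_scan(genotypes, phenotype):
    """Score every SNP pair by a simple chi-square statistic on the
    9-cell joint-genotype table split by case/control status.
    genotypes: (samples, snps) array of 0/1/2 genotype codes;
    phenotype: (samples,) array of 0/1 case-control labels."""
    n_snps = genotypes.shape[1]
    scores = {}
    for i, j in combinations(range(n_snps), 2):
        # observed counts: 3 x 3 x 2 table (genotype_i, genotype_j, status)
        obs = np.zeros((3, 3, 2))
        for g1, g2, y in zip(genotypes[:, i], genotypes[:, j], phenotype):
            obs[g1, g2, y] += 1
        cell = obs.sum(axis=2, keepdims=True)       # totals per genotype pair
        rate = phenotype.mean()                     # overall case rate
        exp = cell * np.array([1 - rate, rate])     # expected if no association
        mask = exp > 0
        scores[(i, j)] = ((obs - exp) ** 2 / np.where(mask, exp, 1))[mask].sum()
    return scores
```

The number of pairs grows as n_snps*(n_snps-1)/2, which is exactly the combinatorial pressure that motivates the high-performance and heuristic approaches discussed in the review.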
Heye, Tobias; Kauczor, Hans-Ulrich; Szabo, Gabor; Hosch, Waldemar
2014-03-01
There is a high probability for presence of irregular heart rates and artifacts in patients with previous coronary artery bypass graft (CABG) surgery. Previously reported diagnostic performance of ECG-gated 64-slice dual-source computed tomography angiography (CTA) in this patient group is based on pre-selection for normal heart rate and a routine clinical setting. To investigate image quality and diagnostic performance of CTA in patients with previous CABG surgery in various clinical settings. Fifty-six non-selected, consecutive patients (110 grafts, 44 arterial, 66 venous) with previous CABG surgery were prospectively examined using a dual-source 64-slice CT (Siemens Definition, Forchheim, Germany) without utilization of CT-related pharmaceutical heart rate control. Patients were stratified according to the clinical setting: planned redo-cardiac surgery; emergency CTA within 30 days after CABG surgery; routine follow-up after CABG surgery. A reference standard was available for 30 patients (53.6%; 67/110 grafts). Image quality, artifacts, and graft patency were independently assessed by two observers. All CTAs were diagnostic despite the presence of irregular heart rhythm (25% of cases) and artifacts (72.7% of grafts). CTA was accurate in all patient groups in assessing graft patency (97.9% sensitivity; 100% specificity; 98.5% accuracy) but artifacts decreased diagnostic performance for stenosis detection (60% sensitivity; 88.6% specificity; 84.1% accuracy). Arterial grafts exhibited more surgical clip artifacts compared to venous grafts, which predominantly showed motion artifacts. Overall diagnostic quality was rated excellent in 70.9%/56.4%, good in 23.4%/39.1%, and sufficient in 5.5%/4.5% by each observer, respectively. CTA detected acute findings in 10 cases (graft bleeding, graft occlusion, pericardial hematoma, sternal instability with retrosternal abscess formation, pericardial effusion, left ventricle thrombus) in the emergency group; seven cases required
Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih
2013-03-20
Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power of nodes either interferes with or fails to link neighboring nodes. The optimization of transmission power depends on the expected node degree and node distribution. In this study, an optimization approach to an energy-efficient and full reachability wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is proposed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under the conditions of various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real-world deployments.
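The dependence of expected node degree on transmission range under a uniform node distribution can be illustrated with the textbook disc relation (expected degree = density × π r²); this is background intuition for the degree/range tradeoff, not the paper's adjustment model:

```python
import math

def transmission_range(node_density, target_degree):
    """Radius r such that a disc of area pi*r^2 contains, on average,
    `target_degree` neighbours when nodes are uniformly (Poisson)
    distributed with `node_density` nodes per unit area."""
    return math.sqrt(target_degree / (math.pi * node_density))
```

Doubling the target degree therefore increases the required range only by a factor of sqrt(2), while transmit power typically grows with a higher power of the range, which is why small degree targets save so much energy.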
A Proposal to Speed up the Computation of the Centroid of an Interval Type-2 Fuzzy Set
Carlos E. Celemin
2013-01-01
This paper presents two new algorithms that speed up the centroid computation of an interval type-2 fuzzy set. The algorithms include precomputation of the main operations and initialization based on the concept of uncertainty bounds. Simulations over different kinds of footprints of uncertainty reveal that the new algorithms achieve computation time reductions with respect to the Enhanced Karnik-Mendel algorithm, ranging from 40 to 70%. The results suggest that the initialization used in the new algorithms effectively reduces the number of iterations to compute the extreme points of the interval centroid, while precomputation reduces the computational cost of each iteration.
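The "extreme points of the interval centroid" that these algorithms search for can, on small examples, be found by brute force over all switch points; Karnik-Mendel-type iterations (and the speedups above) converge to the same values. A minimal sketch, assuming the domain points are sorted ascending:

```python
def interval_centroid(x, w_lo, w_hi):
    """Centroid [c_l, c_r] of an interval type-2 fuzzy set with domain
    points x (sorted ascending) and per-point weight intervals
    [w_lo[i], w_hi[i]], found by trying every switch point k."""
    n = len(x)
    def centroid(weights):
        return sum(xi * wi for xi, wi in zip(x, weights)) / sum(weights)
    # c_l: upper weights left of the switch point, lower weights right of it
    c_l = min(centroid(list(w_hi[:k]) + list(w_lo[k:])) for k in range(n + 1))
    # c_r: lower weights left of the switch point, upper weights right of it
    c_r = max(centroid(list(w_lo[:k]) + list(w_hi[k:])) for k in range(n + 1))
    return c_l, c_r
```

The brute force costs O(n^2); the point of KM-style iteration, and of the initialization/precomputation tricks in the paper, is to locate the same switch points in far fewer evaluations.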
Marshall, J.; Sauke, T.
1999-01-01
Electrostatic forces strongly influence the behavior of granular materials in both dispersed (cloud) systems and semi-packed systems. These forces can cause aggregation or dispersion of particles and are important in a variety of astrophysical and planetary settings. There are also many industrial and commercial settings where granular matter and electrostatics become partners for both good and bad. This partnership is important for human exploration on Mars where dust adheres to suits, machines, and habitats. Long-range Coulombic (electrostatic) forces, as opposed to contact-induced dipoles and van der Waals attractions, are generally regarded as resulting from net charge. We have proposed that in addition to net charge interactions, randomly distributed charge carriers on grains will result in a dipole moment regardless of any net charge. If grains are unconfined, or fluidized, they will rotate so that the dipole always induces attraction between grains. Aggregates are readily formed, and Coulombic polarity resulting from the dipole produces end-to-end stacking of grains to form filamentary aggregates. This has been demonstrated in USML experiments on Space Shuttle where microgravity facilitated the unmasking of static forces. It has also been demonstrated in a computer model using grains with charge carriers of both sign. Model results very closely resembled micro-g results with actual sand grains. Further computer modeling of the aggregation process has been conducted to improve our understanding of the aggregation process, and to provide a predictive tool for microgravity experiments slated for Space Station. These experiments will attempt to prove the dipole concept as outlined above. We have considerably enhanced the original computer model: refinements to the algorithm have improved the fidelity of grain behavior during grain contact, special attention has been paid to simulation time steps to enable establishment of a meaningful, quantitative time axis
Toward accurate tooth segmentation from computed tomography images using a hybrid level set model
Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)
2015-01-15
Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
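The volume overlap metrics named in the study are straightforward to compute from binary masks; a minimal sketch (VD here is in voxels, so multiply by the voxel volume to obtain mm³):

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Volume difference (in voxels) and Dice similarity coefficient (%)
    between a segmentation mask and a reference mask."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    vd = abs(int(seg.sum()) - int(ref.sum()))
    dsc = 200.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
    return vd, dsc
```

The surface distance metrics (ASSD, RMSSSD, MSSD) additionally require distances between the two mask boundaries and are omitted from this sketch.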
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Chughtai, Aamer; Patel, Smita; Wei, Jun; Cascade, Philip N.; Kazerooni, Ella A.
2009-01-01
The authors are developing a computer-aided detection system for pulmonary emboli (PE) in computed tomographic pulmonary angiography (CTPA) scans. The pulmonary vessel tree is extracted using a 3D expectation-maximization segmentation method based on the analysis of eigenvalues of Hessian matrices at multiple scales. A parallel multiprescreening method is applied to the segmented vessels to identify volume of interests (VOIs) that contained suspicious PE. A linear discriminant analysis (LDA) ...
Nurses' Utilization of Computers to Document Nursing Care in the Hospital Setting
Summers, Sharon; Ratliff, Cheryl; Becker, Ann; Resler, Marion
1989-01-01
A descriptive study was conducted in which 228 nationwide randomly selected hospital Directors of Nursing were surveyed regarding computer utilization by hospital nurses. Analysis of the data indicated a positive attitude toward computers among nurses; however, nonnursing data entry was the first priority in most computer systems. One problem identified was a varied patient-to-computer-terminal ratio that made it difficult to gain access to the system. Another problem was the slow implementation of nur...
Computer Anxiety and Performance: An Application of a Change Model in a Pedagogical Setting.
Desai, Mayur S.
2001-01-01
Discusses the adverse effects of computer anxiety on student performance and reports an application of a change management process to a class on computers in business that attempted to reduce computer anxiety and improve learning and performance through a pedagogical intervention. Considers implications of results that showed lower anxiety but not…
Fu, Wanyi; Marin, Daniele; Ramirez-Giraldo, Juan Carlos; Choudhury, Kingshuk Roy; Solomon, Justin; Schabel, Christoph; Patel, Bhavik N; Samei, Ehsan
2017-08-04
Dual-energy computed tomography virtual monoenergetic imaging (VMI) at 40 keV exhibits superior contrast-to-noise ratio (CNR), although practicing radiologists do not consistently prefer it over VMI at 70 keV due to high perceivable noise. We hypothesize that the presentation of 40 keV VMI may be compromised using window settings (i.e., window-and-level values [W-L values]) designed for conventional single-energy CT. This study aimed to devise optimum window settings that reduce the apparent noise and utilize the high CNR of 40 keV VMI, in order to improve the conspicuity of hypervascular liver lesions. Three W-L value adjustment methods were investigated to alter the presentation of 40 keV VMI. To harness the high CNR of 40 keV VMI, the methods were designed to achieve (a) liver histogram distribution, (b) lesion-to-liver contrast, or (c) liver background noise comparable to those perceived in 70 keV VMI. This IRB-approved study included 18 patient abdominal datasets reconstructed at 40 and 70 keV. For each patient, the W-L values were determined using the three methods. For each of the images with default or adjusted W-L values, the noise, contrast, and CNR were calculated in terms of both display space and native CT number (referred to as HU) space. An observer study was performed to compare the 40 keV images with the three adjusted W-L values, and 40 and 70 keV images with default W-L values in terms of noise, contrast, and diagnostic preference. A comparison was also made in terms of the applicability of using patient-specific or patient-averaged W-L values. Using the default W-L values, 40 keV VMI exhibited higher HU CNR than 70 keV VMI by 24.6 ± 14.9% (P VMI dataset image quality by improving the actual display CNR. © 2017 American Association of Physicists in Medicine.
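The role of W-L values can be made concrete with the standard linear window mapping from CT numbers to display values; this generic sketch of display-space CNR is ours and does not reproduce the study's three adjustment methods:

```python
import numpy as np

def apply_window(hu, level, width):
    """Map CT numbers (HU) to 8-bit display values for given
    window-and-level (W-L) settings; values outside the window clip."""
    lo = level - width / 2.0
    return np.clip((hu - lo) / width, 0.0, 1.0) * 255.0

def display_cnr(hu_lesion, hu_liver, level, width):
    """Display-space CNR: windowed lesion/liver contrast divided by the
    windowed background noise. Illustrates how W-L values change the
    perceived (display) CNR without altering the native-HU CNR."""
    d_lesion = apply_window(hu_lesion, level, width)
    d_liver = apply_window(hu_liver, level, width)
    return abs(d_lesion.mean() - d_liver.mean()) / d_liver.std()
```

Because the mapping is linear inside the window, contrast and noise scale together; only clipping at the window edges (which suppresses extreme noise excursions) lets a W-L choice raise the effective display CNR, which is the lever the study exploits for 40 keV VMI.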
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-03-27
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
Eduardo Mireles-Cabodevila
2012-01-01
Background. There are modes of mechanical ventilation that can select ventilator settings with computer controlled algorithms (targeting schemes). Two examples are adaptive support ventilation (ASV) and mid-frequency ventilation (MFV). We studied how clinician-chosen ventilator settings differ from these computer algorithms under different scenarios. Methods. A survey of critical care clinicians provided reference ventilator settings for a 70 kg paralyzed patient in five clinical/physiological scenarios. The survey-derived values for minute ventilation and minute alveolar ventilation were used as goals for ASV and MFV, respectively. A lung simulator programmed with each scenario’s respiratory system characteristics was ventilated using the clinician, ASV, and MFV settings. Results. Tidal volumes ranged from 6.1 to 8.3 mL/kg for the clinician, 6.7 to 11.9 mL/kg for ASV, and 3.5 to 9.9 mL/kg for MFV. Inspiratory pressures were lower for ASV and MFV. Clinician-selected tidal volumes were similar to the ASV settings for all scenarios except for asthma, in which the tidal volumes were larger for ASV and MFV. MFV delivered the same alveolar minute ventilation with higher end expiratory and lower end inspiratory volumes. Conclusions. There are differences and similarities among initial ventilator settings selected by humans and computers for various clinical scenarios. The ventilation outcomes are the result of the lung physiological characteristics and their interaction with the targeting scheme.
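The distinction between the two targets above (total minute ventilation for ASV, alveolar minute ventilation for MFV) can be sketched with textbook formulas; the 150 mL anatomic dead space is an illustrative default, not a value from the study:

```python
def minute_ventilation(vt_ml, rate):
    """Total minute ventilation (L/min) from tidal volume (mL) and
    respiratory rate (breaths/min)."""
    return vt_ml * rate / 1000.0

def alveolar_ventilation(vt_ml, rate, dead_space_ml=150.0):
    """Alveolar minute ventilation (L/min): only the volume beyond the
    dead space reaches the alveoli, so high-rate/low-volume patterns
    (as in MFV) spend proportionally more flow on dead space."""
    return (vt_ml - dead_space_ml) * rate / 1000.0
```

For example, VT 500 mL at 12/min and VT 300 mL at 28/min both deliver 4.2 L/min of alveolar ventilation, yet their total minute ventilation differs (6.0 vs 8.4 L/min), mirroring how MFV can match the alveolar target with smaller inspiratory volumes.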
SPARQL for a Web of Linked Data: Semantics and Computability (Extended Version)
Hartig, Olaf
2012-01-01
The World Wide Web currently evolves into a Web of Linked Data where content providers publish and link data as they have done with hypertext for the last 20 years. While the declarative query language SPARQL is the de facto standard for querying a-priori defined sets of data from the Web, no language exists for querying the Web of Linked Data itself. However, it seems natural to ask whether SPARQL is also suitable for such a purpose. In this paper we formally investigate the applicability of SPARQL as a query language for Linked Data on the Web. In particular, we study two query models: 1) a full-Web semantics where the scope of a query is the complete set of Linked Data on the Web and 2) a family of reachability-based semantics which restrict the scope to data that is reachable by traversing certain data links. For both models we discuss properties such as monotonicity and computability as well as the implications of querying a Web that is infinitely large due to data generating servers.
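A reachability-based semantics can be illustrated with a toy link-traversal evaluator over a mocked Web (a dict standing in for HTTP dereferencing); the "follow every URI" policy sketched here corresponds to one member of the family of semantics, and all names are ours:

```python
def traverse_and_match(web, seeds, pattern):
    """Evaluate one triple pattern under a reachability-based semantics:
    start from seed URIs, dereference each reachable URI in the mock
    `web` (URI -> list of (s, p, o) triples), and follow every URI
    mentioned in retrieved triples. `None` in the pattern is a variable."""
    seen, frontier, results = set(), list(seeds), []
    while frontier:
        uri = frontier.pop()
        if uri in seen or uri not in web:
            seen.add(uri)
            continue
        seen.add(uri)
        for triple in web[uri]:
            if all(want is None or want == got
                   for want, got in zip(pattern, triple)):
                results.append(triple)
            frontier.extend(triple)  # follow all URIs found in the data
    return results
```

Restricting which links are followed (e.g. only URIs in triples that match the query) yields the stricter members of the family; the full-Web semantics would instead require dereferencing every URI in existence, which is the source of the computability concerns the paper studies.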
Mok, Heng Ngee; Lee, Yeow Leong; Tan, Wee Kiat
2012-01-01
This paper describes how a generic computer laboratory equipped with 52 workstations is set up for teaching IT-related courses and other general purpose usage. The authors have successfully constructed a lab management system based on decentralised, client-side software virtualisation technology using Linux and free software tools from VMware that…
Ram Lal Awasthi
2016-02-01
Grand unification theories based on the SO(10) gauge group have been a centre of attention in beyond-Standard-Model phenomenology. The SO(10) gauge symmetry may pass through several intermediate symmetries before breaking to the Standard Model, so some higher symmetries may occur at experimentally reachable scales. This feature flourishes more easily in non-supersymmetric models than in supersymmetric ones. We find that certain breaking chains give striking predictions for the physics being explored at various particle physics experiments. The explanation of neutrino masses through a TeV-scale inverse see-saw is the driving theme of the models studied.
JAG: A Computational Tool to Evaluate the Role of Gene-Sets in Complex Traits.
Lips, Esther S; Kooyman, Maarten; de Leeuw, Christiaan; Posthuma, Danielle
2015-05-14
Gene-set analysis has been proposed as a powerful tool to deal with the highly polygenic architecture of complex traits, as well as with the small effect sizes typically found in GWAS studies for complex traits. We developed a tool, Joint Association of Genetic variants (JAG), which can be applied to Genome Wide Association (GWA) data and tests for the joint effect of all single nucleotide polymorphisms (SNPs) located in a user-specified set of genes or biological pathway. JAG assigns SNPs to genes and incorporates self-contained and/or competitive tests for gene-set analysis. JAG uses permutation to evaluate gene-set significance, which implicitly controls for linkage disequilibrium, sample size, gene size, the number of SNPs per gene and the number of genes in the gene-set. We conducted a power analysis using the Wellcome Trust Case Control Consortium (WTCCC) Crohn's disease data set and show that JAG correctly identifies validated gene-sets for Crohn's disease and has more power than currently available tools for gene-set analysis. JAG is a powerful, novel tool for gene-set analysis, and can be freely downloaded from the CTG Lab website.
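The self-contained, permutation-based testing strategy can be sketched as follows; the joint statistic (sum of squared SNP-phenotype correlations) is our illustrative choice, not necessarily JAG's, but permuting the phenotype rather than the genotypes is exactly what preserves the linkage-disequilibrium structure among the SNPs:

```python
import numpy as np

def self_contained_gene_set_test(snps, phenotype, n_perm=1000, seed=0):
    """Self-contained gene-set test: joint statistic over all SNPs in
    the set, significance from phenotype permutations.
    snps: (samples, snps) genotype matrix; phenotype: (samples,) array."""
    rng = np.random.default_rng(seed)
    def stat(y):
        yc = (y - y.mean()) / y.std()
        xc = (snps - snps.mean(0)) / snps.std(0)
        return ((xc.T @ yc / len(y)) ** 2).sum()  # sum of squared r
    observed = stat(phenotype)
    perms = [stat(rng.permutation(phenotype)) for _ in range(n_perm)]
    return (1 + sum(p >= observed for p in perms)) / (1 + n_perm)
```

Because each permutation reuses the intact genotype matrix, the null distribution implicitly accounts for LD, gene size, and the number of SNPs in the set, which is the property the abstract highlights.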
Raffaelli, Marcela; Armstrong, Jessica; Tran, Steve P; Griffith, Aisha N; Walker, Kathrin; Gutierrez, Vanessa
2016-06-01
Computer-assisted data collection offers advantages over traditional paper and pencil measures; however, little guidance is available regarding the logistics of conducting computer-assisted data collection with adolescents in group settings. To address this gap, we draw on our experiences conducting a multi-site longitudinal study of adolescent development. Structured questionnaires programmed on laptop computers using Audio Computer Assisted Self-Interviewing (ACASI) were administered to groups of adolescents in community-based and afterschool programs. Although implementing ACASI required additional work before entering the field, we benefited from reduced data processing time, high data quality, and high levels of youth motivation. Preliminary findings from an ethnically diverse sample of 265 youth indicate favorable perceptions of using ACASI. Using our experiences as a case study, we provide recommendations on selecting an appropriate data collection device (including hardware and software), preparing and testing the ACASI, conducting data collection in the field, and managing data.
The spinal posture of computing adolescents in a real-life setting
Brink, Yolandi; Louw, Quinette; Grimmer, Karen; Jordaan, Esmè
2014-01-01
Background: It is assumed that good postural alignment is associated with a lower likelihood of musculoskeletal pain symptoms. However, encouraging good sitting posture has not led to reported reductions in musculoskeletal pain in school-based populations, possibly due to a lack of clear understanding of what constitutes good posture. This paper therefore describes the variability of postural angles in a cohort of asymptomatic high-school students whilst working on desk-top computers in a school computer classroom a...
Safi, Seyed Mohammad Amin
2016-01-01
Multiphase flow simulations benefit a variety of applications in science and engineering as for example in the dynamics of bubble swarms in heat exchangers and chemical reactors or in the prediction of the effects of droplet or bubble impacts in the design of turbomachinery systems. Despite all the progress in the modern computational fluid dynamics (CFD), such simulations still present formidable challenges both from numerical and computational cost point of view. Emerging as ...
Computer simulation of the behaviour of Julia sets using switching processes
Negi, Ashish [Department of Computer Science and Engineering, G.B. Pant Engineering College, Pauri Garhwal 246001 (India)], E-mail: ashish_ne@yahoo.com; Rani, Mamta [Department of Computer Science, Galgotia College of Engineering and Technology, UP Technical University, Knowledge Park-II, Greater Noida, Gautam Buddha Nagar, UP (India)], E-mail: vedicmri@sancharnet.in; Mahanti, P.K. [Department of CSAS, University of New Brunswick, Saint John, New Brunswick, E2L4L5 (Canada)], E-mail: pmahanti@unbsj.ca
2008-08-15
Inspired by the study of Julia sets using switched processes by Lakhtakia and generation of new fractals by composite functions by Shirriff, we study the effect of switched processes on superior Julia sets given by Rani and Kumar. Further, symmetry for such processes is also discussed in the paper.
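A switched process in this sense alternates two quadratic maps between successive iterations; a minimal escape-time sketch (function names and parameter defaults are ours):

```python
def switched_escape_time(z, c1, c2, max_iter=100, bailout=2.0):
    """Escape time under a switched process: alternate z -> z^2 + c1
    and z -> z^2 + c2 on successive iterations. Returns max_iter if z
    never leaves the bailout disc (i.e. is taken as bounded)."""
    for k in range(max_iter):
        c = c1 if k % 2 == 0 else c2
        z = z * z + c
        if abs(z) > bailout:
            return k
    return max_iter

def switched_filled_julia(c1, c2, n=41, extent=1.5, max_iter=60):
    """Boolean n x n grid over [-extent, extent]^2 marking points that
    stay bounded under the switched iteration (an approximation of the
    filled Julia set of the switched process)."""
    step = 2 * extent / (n - 1)
    return [[switched_escape_time(complex(-extent + j * step,
                                          -extent + i * step),
                                  c1, c2, max_iter) == max_iter
             for j in range(n)] for i in range(n)]
```

With c1 = c2 the process degenerates to the ordinary Julia iteration; choosing distinct parameters reproduces the switching setting studied in the paper, and composing rather than alternating the maps gives the Shirriff-style composite variant.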
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-08-01
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions whose deletion disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction-deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved time reductions in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. Contact: hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online.
Kosterev, V.V.; Boliatko, V.V.; Gusev, S.M.; Panin, M.P. [MEPhI, Moscow (Russian Federation); Averkin, A.N. [CC RAS, Moscow (Russian Federation)
1998-07-01
Computer software for risk assessment of the transportation of important freight has been developed. It incorporates models of transport accidents, including terrorist attacks. These models use, among others, input data of a cartographic character. Geographic information system technology and electronic maps of a geographic area are used as an instrument for handling this kind of data. Fuzzy set theory methods, as well as standard methods of probability theory, have been used for quantitative risk assessment. Fuzzy algebraic operations and their computer realization are discussed. Risk assessment for one particular route of railway transportation is given as an example. (author)
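The fuzzy algebraic operations are not spelled out in the abstract; as a rough, hypothetical illustration, the standard min/max operations on discrete fuzzy sets (membership grades in [0, 1]) can be sketched as follows. All names and grades are invented for illustration, not taken from the software described.

```python
# Standard fuzzy set operations on a discrete universe, where a fuzzy set
# is a dict mapping each element to its membership grade in [0, 1].

def f_union(a, b):
    """Membership of the union: pointwise maximum."""
    return {x: max(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def f_intersection(a, b):
    """Membership of the intersection: pointwise minimum."""
    return {x: min(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def f_complement(a):
    """Membership of the complement: one minus the grade."""
    return {x: 1.0 - m for x, m in a.items()}

# Hypothetical route-risk grades: how strongly each route segment belongs
# to "high accident likelihood" and to "densely populated".
accident = {"seg1": 0.8, "seg2": 0.3}
populated = {"seg1": 0.4, "seg2": 0.9, "seg3": 0.5}
risky = f_intersection(accident, populated)  # both conditions at once
```

The min/max pair is only the most common choice; other t-norms (e.g. product) would serve equally well as the "fuzzy algebra" here.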
Miller, M C; Reus, J F; Matzke, R P; Arrighi, W J; Schoof, L A; Hitt, R T; Espen, P K; Butler, D M
2001-02-07
This paper describes the Sets and Fields (SAF) scientific data modeling system. It is a revolutionary approach to the interoperation of high-performance scientific computing applications, based upon rigorous, math-oriented data modeling principles. Previous technologies have either required all applications to use the same data structures and/or meshes to represent scientific data, or have led to an ever-expanding set of incrementally different data structures and/or meshes. SAF addresses this problem by providing a small set of mathematical building blocks--sets, relations and fields--out of which a wide variety of scientific data can be characterized. Applications literally model their data by assembling these building blocks. A short historical perspective, a conceptual model and an overview of SAF, along with preliminary results from its use in a few ASCI codes, are discussed.
Problem Decomposition Method to Compute an Optimal Cover for a Set of Functional Dependencies
Vitalie COTELEA
2011-12-01
The paper proposes a problem decomposition method for building an optimal cover for a set of functional dependencies, to decrease the solving time. The paper begins with an overview of covers of functional dependencies: definitions and properties of non-redundant covers for sets of functional dependencies, reduced and canonical covers, equivalence classes of functional dependencies, and minimum and optimal covers. Then a theoretical tool for inference of functional dependencies is proposed, which possesses the uniqueness property. Finally, the set of attributes of the relational schema is divided into equivalence classes of attributes that serve as the basis for building an optimal cover for a set of functional dependencies.
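To make the notions concrete, one building block, removing redundant dependencies via attribute closure, can be sketched as below. This is the textbook construction of a non-redundant cover, not the paper's decomposition method.

```python
def closure(attrs, fds):
    """Closure of an attribute set under a list of FDs given as (lhs, rhs) pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def nonredundant_cover(fds):
    """Drop each FD that is already implied by the remaining ones."""
    cover = list(fds)
    for fd in list(cover):
        rest = [g for g in cover if g is not fd]
        if fd[1] <= closure(fd[0], rest):   # fd is derivable without itself
            cover = rest
    return cover

# A -> B, B -> C make A -> C redundant:
fds = [(frozenset("A"), frozenset("B")),
       (frozenset("B"), frozenset("C")),
       (frozenset("A"), frozenset("C"))]
cover = nonredundant_cover(fds)
```

Non-redundancy is only the first of the properties surveyed; reduction and canonical form impose further conditions on each side of the remaining dependencies.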
Finitely approximable random sets and their evolution via differential equations
Ananyev, B. I.
2016-12-01
In this paper, random closed sets (RCS) in Euclidean space are considered along with their distributions and approximation. Distributions of RCS may be used for the calculation of expectations and other characteristics. Reachable sets over initial data, and some ways of describing their approximate evolution, are investigated for stochastic differential equations (SDEs) with initial state in some RCS. The Markov property of random reachable sets is proved in the space of closed sets. For approximate calculation, the initial RCS is replaced by a finite set on an integer multidimensional grid, and a multistage Markov chain is substituted for the SDE. The Markov chain is constructed by methods of SDE numerical integration. Some examples are also given.
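A hedged one-dimensional sketch of this scheme, replacing the initial set by sample points and the SDE by paths generated with Euler-Maruyama numerical integration, then snapping endpoints to a grid, might look as follows. The function name, the 1-D restriction and all parameters are illustrative, not the paper's construction.

```python
import random, math

def reachable_grid(x0_set, drift, sigma, t_end, dt, h, n_paths=2000):
    """Monte Carlo approximation of the reachable set at time t_end for
    dX = drift(X) dt + sigma dW, started from a finite initial set.
    Endpoints are snapped to a grid of step h; returns the set of cells hit."""
    cells = set()
    steps = int(t_end / dt)
    for _ in range(n_paths):
        x = random.choice(x0_set)                     # sample the initial RCS
        for _ in range(steps):                        # Euler-Maruyama step
            x = x + drift(x) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        cells.add(round(x / h))
    return cells
```

With sigma = 0 this degenerates to an ordinary Euler reachable-point computation, which gives a cheap sanity check.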
A Project-Based Learning Setting to Human-Computer Interaction for Teenagers
Geyer, Cornelia; Geisler, Stefan
2012-01-01
Knowledge of the fundamentals of human-computer interaction and usability engineering is becoming more and more important in technical domains. However, this interdisciplinary field of work and the corresponding degree programs are not broadly known. Therefore, at the Hochschule Ruhr West, University of Applied Sciences, a program was developed to give…
Creating computable algorithms for symptom management in an outpatient thoracic oncology setting.
Cooley, Mary E; Lobach, David F; Johns, Ellis; Halpenny, Barbara; Saunders, Toni-Ann; Del Fiol, Guilherme; Rabin, Michael S; Calarese, Pamela; Berenbaum, Isidore L; Zaner, Ken; Finn, Kathleen; Berry, Donna L; Abrahm, Janet L
2013-12-01
Adequate symptom management is essential to ensure quality cancer care, but symptom management is not always evidence based. Adapting and automating national guidelines for use at the point of care may enhance use by clinicians. This article reports on a process of adapting research evidence for use in a clinical decision support system that provided individualized symptom management recommendations to clinicians at the point of care. Using a modified ADAPTE process, panels of local experts adapted national guidelines and integrated research evidence to create computable algorithms with explicit recommendations for management of the most common symptoms (pain, fatigue, dyspnea, depression, and anxiety) associated with lung cancer. Small multidisciplinary groups and a consensus panel, using a nominal group technique, modified and subsequently approved computable algorithms for fatigue, dyspnea, moderate pain, severe pain, depression, and anxiety. The approved algorithms represented the consensus of multidisciplinary clinicians on pharmacological and behavioral interventions tailored to the patient's age, comorbidities, laboratory values, current medications, and patient-reported symptom severity. Algorithms also were reconciled with one another to enable simultaneous management of several symptoms. A modified ADAPTE process and nominal group technique enabled the development and approval of locally adapted computable algorithms for individualized symptom management in patients with lung cancer. The process was more complex and required more time and resources than initially anticipated, but it resulted in computable algorithms that represented the consensus of many experts. Copyright © 2013 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
Bekooij, Marco; Wiggers, Maarten; Meerbergen, van Jef; Falk, H.; Marwedel, P.
2007-01-01
Soft real-time applications that process data streams can often be intuitively described as dataflow process networks. In this paper we present a novel analysis technique to compute conservative estimates of the required buffer capacities in such process networks. With the same analysis technique sc
2013-01-08
... institution named as respondent Apple Inc., a/k/a Apple Computer, Inc. of Cupertino, California (``Apple... February 27, 2012, based upon a complaint filed on behalf of VIA Technologies, Inc. of New Taipei City, Taiwan; IP-First, LLC of Fremont, California; and Centaur Technology, Inc. of Austin, Texas...
Rachael E Moorin
To evaluate the effect of the introduction of iterative reconstruction, as a mandated software upgrade, on radiation dosimetry in routine clinical practice over a range of computed tomography examinations. Random samples of scanning data were extracted from a centralised Picture Archiving Communication System pertaining to 10 commonly performed computed tomography examination types undertaken at two hospitals in Western Australia, before and after the introduction of iterative reconstruction. Changes in the mean dose length product and effective dose were evaluated, along with estimations of associated changes to annual cancer incidence. We observed statistically significant reductions in the effective radiation dose for head computed tomography (22-27%), consistent with those reported in the literature. Reductions were likewise observed for non-contrast chest (37-47%), chest pulmonary embolism study (28%), chest/abdominal/pelvic study (16%) and thoracic spine (39%) computed tomography. Statistically significant reductions in radiation dose were not identified in angiographic computed tomography. Dose reductions translated to substantial lowering of the lifetime attributable risk, especially for younger females, and of the estimated numbers of incident cancers. Reduction of CT dose is a priority. Iterative reconstruction algorithms have the potential to significantly assist with dose reduction across a range of protocols. However, this reduction in dose is achieved via reductions in image noise. Fully realising the potential dose reduction of iterative reconstruction requires the adjustment of image factors and forgoing the noise reduction potential of the iterative algorithm. Our study has demonstrated a reduction in radiation dose for some scanning protocols, but not to the extent experimental studies had previously shown, or in all protocols expected, raising questions about the extent to which iterative reconstruction achieves dose reduction in real world clinical
Shalbaf, Farzaneh; Dokos, Socrates; Lovell, Nigel H.; Turuwhenua, Jason; Vaghefi, Ehsan
2015-12-01
Retinal prostheses have been proposed to restore vision for those suffering from retinal pathologies that mainly affect the photoreceptor layer but keep the inner retina intact. Prior to costly and risky experimental studies, computational modelling of the retina can help to optimize device parameters and enhance outcomes. Here, we developed an anatomically detailed computational model of the retina based on OCT data sets. Consecutive OCT images of an individual were segmented to provide a 3D representation of the retina in the form of finite elements. Thereafter, the electrical properties of the retina were modelled by implementing a partial differential equation on the 3D mesh. Different electrode configurations, that is, bipolar and hexapolar configurations, were implemented, and the results were compared with previous computational and experimental studies. Furthermore, the possible effects of the curvature of retinal layers on current steering through the retina were proposed and linked to clinical observations.
A set of strongly coupled, upwind algorithms for computing flows in chemical nonequilibrium
Molvik, Gregory A.; Merkle, Charles L.
1989-01-01
Two new algorithms have been developed to predict the flow of viscous, hypersonic, chemically reacting gases over three-dimensional bodies. Both take advantage of the benefits of upwind differencing, Total Variation Diminishing (TVD) techniques and a finite-volume framework, but obtain their solutions in two separate manners. The first algorithm is a time-marching scheme, and is generally used to obtain solutions in the subsonic portions of the flow field. The second algorithm is a much less expensive, space-marching scheme and can be used for the computation of the larger, supersonic portion of the flow field. Both codes compute their interface fluxes with a new temporal Riemann solver, and the resulting schemes are made fully implicit, including the chemical source terms.
Fuzzy Sets and Other Methods for Telling a Computer How to Decide
1982-02-01
[Abstract text garbled in scanning. Recoverable fragments: the report illustrates fuzzy statements over attributes such as visibility and fuel reserves in one example, and fuel efficiency, price and sportiness in another; a table lists car body styles (2-door and 4-door sedan, 2-door and 4-door hatchback, station wagon, pickup truck) and sizes (subcompact, compact).]
The total emulation of the Intel 8080 instruction set on a mainframe computer.
Leggett, D J
1982-03-01
A software system, ASSIM-8080, has been developed to permit the writing and debugging of Intel 8080 assembly-language programs with the aid of mainframe computers. ASSIM-8080 will assemble, with error checking and error diagnostics, an assembly-language program. If no errors are found in the source code, ASSIM-8080 will then simulate the execution of the assembly program. ASSIM-8080 will recognize a number of special instruction codes designed to simplify programming and debugging.
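The abstract gives no internals of ASSIM-8080; a toy sketch of the kind of instruction-set simulation involved, covering just four real 8080 opcodes (MVI A, MVI B, ADD B, HLT), might look like this. The class and its structure are invented for illustration.

```python
# Miniature 8080-style simulator: a register file plus an opcode dispatch
# loop over a flat list of bytes. Illustrative only; not ASSIM-8080's design.
class Tiny8080:
    def __init__(self):
        self.reg = {"A": 0, "B": 0}
        self.pc = 0

    def run(self, program):
        while self.pc < len(program):
            op = program[self.pc]
            self.pc += 1
            if op == 0x3E:                       # MVI A, d8
                self.reg["A"] = program[self.pc]; self.pc += 1
            elif op == 0x06:                     # MVI B, d8
                self.reg["B"] = program[self.pc]; self.pc += 1
            elif op == 0x80:                     # ADD B: A <- A + B, 8-bit wrap
                self.reg["A"] = (self.reg["A"] + self.reg["B"]) & 0xFF
            elif op == 0x76:                     # HLT
                break

cpu = Tiny8080()
cpu.run([0x3E, 0x05, 0x06, 0x03, 0x80, 0x76])   # MVI A,5 / MVI B,3 / ADD B / HLT
```

A full emulator would additionally track the flags register, memory and the remaining ~250 opcodes, plus the error checking and diagnostics the paper describes.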
Geometric invariants for initial data sets: analysis, exact solutions, computer algebra, numerics
Valiente Kroon, Juan A, E-mail: j.a.valiente-kroon@qmul.ac.uk [School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London, E1 4NS (United Kingdom)
2011-09-22
A personal perspective on the interaction of analytical, numerical and computer algebra methods in classical Relativity is given. This discussion is inspired by the problem of the construction of invariants that characterise key solutions to the Einstein field equations. It is claimed that these kinds of ideas will be of importance in the analysis of dynamical black hole spacetimes by either analytical or numerical methods.
Ochterski, Joseph W.
2014-01-01
This article describes the results of using state-of-the-art, research-quality software as a learning tool in a general chemistry secondary school classroom setting. I present three activities designed to introduce fundamental chemical concepts regarding molecular shape and atomic orbitals to students with little background in chemistry, such as…
Expectations, Realizations, and Approval of Tablet Computers in an Educational Setting
Hassan, Mamdouh; Geys, Benny
2016-01-01
The introduction of new technologies in classrooms is often thought to offer great potential for advancing learning. In this article, we investigate the relationship between such expectations and the post-implementation evaluation of a new technology in an educational setting. Building on psychological research, we argue that (1) high expectations…
Duflot, Nicolas [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: nicolas.duflot@areva.com; Berenguer, Christophe [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: christophe.berenguer@utt.fr; Dieulle, Laurence [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: laurence.dieulle@utt.fr; Vasseur, Dominique [EPSNA Group (Nuclear PSA and Application), EDF Research and Development, 1, avenue du Gal de Gaulle, 92141 Clamart cedex (France)], E-mail: dominique.vasseur@edf.fr
2009-11-15
A truncation process aims to determine which of the minimal cut-sets (MCS) produced by a probabilistic safety assessment (PSA) model are significant. Several truncation processes have been proposed for the evaluation of the probability of core damage while ensuring a fixed accuracy level. However, the evaluation of new risk indicators such as importance measures requires re-examining the truncation process in order to ensure that the produced estimates are accurate enough. In this paper a new truncation process is developed that permits estimating, from a single set of MCS, the importance measure of any basic event with the desired accuracy level. The main contribution of this new method is an MCS-wise truncation criterion involving two thresholds: an absolute threshold in addition to a new relative threshold concerning the potential probability of the MCS of interest. The method has been tested on a complete level 1 PSA model of a 900 MWe NPP developed by 'Electricite de France' (EDF), and the results presented in this paper indicate that, to reach the same accuracy level, the proposed method produces a set of MCS whose size is significantly reduced.
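One plausible reading of an MCS-wise two-threshold criterion can be sketched as below. This is only a sketch under assumed semantics; the abstract does not define how the "potential probability" enters the rule, so the disjunction and both threshold roles are guesses for illustration.

```python
def truncate_mcs(mcs_probs, abs_threshold, rel_threshold):
    """Keep a minimal cut set if its probability clears the absolute
    threshold, or if its share of the total clears the relative one.
    Hypothetical two-threshold criterion, not EDF's exact rule."""
    total = sum(mcs_probs)
    return [p for p in mcs_probs
            if p >= abs_threshold or p / total >= rel_threshold]

# Three cut sets; the 1e-9 one fails both tests and is truncated.
kept = truncate_mcs([1e-3, 1e-5, 1e-9],
                    abs_threshold=1e-8, rel_threshold=1e-4)
```

The point of the second, relative threshold is that a cut set negligible for the core-damage probability may still matter for the importance measure of one of its basic events.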
Nedjar, B.
The present work deals with the extension to the geometrically nonlinear case of recently proposed elastic- and elastoplastic-damage modelling frameworks within the infinitesimal theory. The particularity of these models is that the damage part of the modelling involves the gradient of the damage quantity; this, together with the equations of motion, ensues from a new formulation of the principle of virtual power. It is shown how the thermodynamics of irreversible processes is crucial in the characterization of the dissipative phenomena and in setting convenient forms for the constitutive relations. On the numerical side, we discuss the problem of numerically integrating these equations, and the implementation within the context of the finite element method is described in detail. Finally, we present a set of representative numerical simulations to illustrate the effectiveness of the proposed framework.
Rediscovering the Economics of Keynes in an Agent-Based Computational Setting
Bruun, Charlotte
The aim of this paper is to use agent-based computational economics to explore the economic thinking of Keynes. Taking his starting point at the macroeconomic level, Keynes argued that economic systems are characterized by fundamental uncertainty - an uncertainty that makes rule-based behaviour … and reliance on monetary magnitudes more optimal to the economic agent than profit and utility optimization in the traditional sense. Unfortunately, more systematic studies of the properties of such a system were not possible at the time of Keynes. The system envisioned by Keynes holds a lot of properties…
Azad Ali
2016-05-01
The most common course delivery model is based on the teacher (knowledge provider) - student (knowledge receiver) relationship. The most visible symptom of this situation is over-reliance on textbook tutorials. This traditional model of delivery reduces teacher flexibility, causes lack of interest among students, and often makes classes boring, which is especially visible when teaching Computer Literacy courses. Instead, the authors of this paper suggest a new active model based on MS Office simulation. The proposed model is discussed within the framework of three activities: guided software simulation, instructor-led activities, and self-directed learning activities. The proposed model of active teaching based on software simulation proved more effective than the traditional one.
Silengo Lorenzo
2004-05-01
Background Transcriptional regulation is a key mechanism in the functioning of the cell, and is mostly effected through transcription factors binding to specific recognition motifs located upstream of the coding region of the regulated gene. The computational identification of such motifs is made easier by the fact that they often appear several times in the upstream region of the regulated genes, so that the number of occurrences of relevant motifs is often significantly larger than expected by pure chance. Results To exploit this fact, we construct sets of genes characterized by the statistical overrepresentation of a certain motif in their upstream regions. Then we study the functional characterization of these sets by analyzing their annotation to Gene Ontology terms. For the sets showing a statistically significant specific functional characterization, we conjecture that the upstream motif characterizing the set is a binding site for a transcription factor involved in the regulation of the genes in the set. Conclusions The method we propose is able to identify many known binding sites in S. cerevisiae and new candidate targets of regulation by known transcription factors. Its application to less well studied organisms is likely to be valuable in the exploration of their regulatory interaction networks.
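The overrepresentation statistic is not specified in the abstract; a common minimal approach, assuming independent positions, scores the binomial tail probability of seeing at least the observed number of motif hits. The motif length, region size and background probability below are invented for illustration.

```python
from math import comb

def overrepresentation_pvalue(hits, positions, p_motif):
    """P(X >= hits) for X ~ Binomial(positions, p_motif): the chance of at
    least this many motif occurrences if every position matched at random."""
    return sum(comb(positions, k) * p_motif ** k * (1 - p_motif) ** (positions - k)
               for k in range(hits, positions + 1))

# Hypothetical example: a fixed 6-mer in a 500-bp upstream region
# (495 possible start positions, background probability 0.25 per base).
p = overrepresentation_pvalue(hits=3, positions=495, p_motif=0.25 ** 6)
```

Real motif scanning would additionally handle both strands, overlapping matches and a position weight matrix rather than an exact k-mer, which changes the null model.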
Computing a Finite Size Representation of the Set of Approximate Solutions of an MOP
Schuetze, Oliver; Tantar, Emilia; Talbi, El-Ghazali
2008-01-01
Recently, a framework for the approximation of the entire set of $\epsilon$-efficient solutions (denoted by $E_\epsilon$) of a multi-objective optimization problem with stochastic search algorithms has been proposed. It was proven that such an algorithm produces -- under mild assumptions on the process to generate new candidate solutions -- a sequence of archives which converges to $E_\epsilon$ in the limit and in the probabilistic sense. The result, though satisfactory for most discrete MOPs, is, at least from the practical viewpoint, not sufficient for continuous models: in this case, the set of approximate solutions typically forms an $n$-dimensional object, where $n$ denotes the dimension of the parameter space, and thus performance problems may arise since in practice one has to cope with a finite archive. Here we focus on obtaining finite and tight approximations of $E_\epsilon$, the latter measured by the Hausdorff distance. We propose and investigate a novel archiving strategy theoretically and emp...
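The Hausdorff distance used to measure tightness has a direct implementation for finite point sets; the generic sketch below is independent of the archiving strategy itself.

```python
from math import dist

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in R^n: the largest
    distance from a point of one set to the nearest point of the other,
    taken in both directions."""
    one_sided = lambda P, Q: max(min(dist(p, q) for q in Q) for p in P)
    return max(one_sided(A, B), one_sided(B, A))

# Two small archives in the plane:
d = hausdorff([(0.0, 0.0), (1.0, 0.0)], [(0.0, 0.0), (4.0, 3.0)])
```

The brute-force double loop is O(|A||B|); for large archives one would use a spatial index, but for the bounded archives targeted here this direct form suffices.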
Reducing the computational cost of the authentication process in SET protocol
Cristina Satizábal
2010-01-01
SET is a secure credit-card payment protocol that provides a robust security model for delivering personal and financial information over the Internet, based on data integrity, confidentiality and mutual authentication. However, the parties involved in a transaction must carry out several cryptographic operations, which can be a problem when mobile devices with low storage and processing capacity are used. This article shows how the computational cost of SET can be reduced by using another protocol, called TRUTHC, together with a Public Key Infrastructure (PKI). The results show that, using TRUTHC, the total execution time can be reduced by 3% from the client's point of view. This reduction holds even if the length of the certification path increases.
He, Lian; Chen, Amelia B; Yu, Yi; Kucera, Leah; Tang, Yinjie
2013-10-01
Flue gas from power plants can promote algal cultivation and reduce greenhouse gas emissions(1). Microalgae not only capture solar energy more efficiently than plants(3), but also synthesize advanced biofuels(2-4). Generally, atmospheric CO2 is not a sufficient source for supporting maximal algal growth(5). On the other hand, the high concentrations of CO2 in industrial exhaust gases have adverse effects on algal physiology. Consequently, both cultivation conditions (such as nutrients and light) and the control of the flue gas flow into the photo-bioreactors are important to develop an efficient "flue gas to algae" system. Researchers have proposed different photobioreactor configurations(4,6) and cultivation strategies(7,8) with flue gas. Here, we present a protocol that demonstrates how to use models to predict the microalgal growth in response to flue gas settings. We perform both experimental illustration and model simulations to determine the favorable conditions for algal growth with flue gas. We develop a Monod-based model coupled with mass transfer and light intensity equations to simulate the microalgal growth in a homogenous photo-bioreactor. The model simulation compares algal growth and flue gas consumptions under different flue-gas settings. The model illustrates: 1) how algal growth is influenced by different volumetric mass transfer coefficients of CO2; 2) how we can find optimal CO2 concentration for algal growth via the dynamic optimization approach (DOA); 3) how we can design a rectangular on-off flue gas pulse to promote algal biomass growth and to reduce the usage of flue gas. On the experimental side, we present a protocol for growing Chlorella under the flue gas (generated by natural gas combustion). The experimental results qualitatively validate the model predictions that the high frequency flue gas pulses can significantly improve algal cultivation.
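The Monod-based model coupled with mass transfer can be sketched as a forward-Euler integration of biomass and dissolved CO2; all parameter values below are illustrative placeholders, not the fitted values from the protocol, and light limitation is omitted.

```python
# Euler integration of Monod growth with gas-liquid CO2 transfer.
# x: biomass (g/L), c: dissolved CO2 (g/L). Parameters are hypothetical.
def simulate(mu_max=0.05, Ks=0.5, kLa=0.1, c_sat=1.0,
             yield_xc=0.5, x0=0.1, c0=0.2, dt=0.1, t_end=100.0):
    x, c, t = x0, c0, 0.0
    while t < t_end:
        mu = mu_max * c / (Ks + c)               # Monod specific growth rate
        dx = mu * x                              # biomass growth
        dc = kLa * (c_sat - c) - dx / yield_xc   # transfer in, uptake out
        x += dx * dt
        c += dc * dt
        t += dt
    return x, c

x_end, c_end = simulate()
```

An on-off flue gas pulse, as in the paper, would correspond to switching c_sat (or kLa) between two values on a schedule and optimizing that schedule against the final biomass.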
Tsirkunov, Yu. M.; Romanyuk, D. A.
2016-07-01
A dusty gas flow through two cascades of airfoils (blades), one moving and one immovable, is studied numerically. In the mathematical model of two-phase gas-particle flow, the carrier gas is treated as a continuum and is described by the Navier-Stokes equations (pseudo-DNS (direct numerical simulation) approach) or the Reynolds averaged Navier-Stokes (RANS) equations (unsteady RANS approach) with the Menter k-ω shear stress transport (SST) turbulence model. The governing equations in both cases are solved by computational fluid dynamics (CFD) methods. The dispersed phase is treated as a discrete set of solid particles, the behavior of which is described by the generalized kinetic Boltzmann equation. The effects of gas-particle interaction, interparticle collisions, and particle scattering in particle-blade collisions are taken into account. The direct simulation Monte Carlo (DSMC) method is used for computational simulation of the dispersed phase flow. The effects of interparticle collisions and particle scattering are discussed.
Data set for renal sinus fat volume and visceral adipose tissue volume on computed tomography.
Murakami, Yoko; Nagatani, Yukihiro; Takahashi, Masashi; Ikeda, Mitsuru; Miyazawa, Itsuko; Morino, Katsutaro; Ohkubo, Takayoshi; Maegawa, Hiroshi; Nitta, Norihisa; Sakai, Hiroshi; Nota, Hiromitsu; Ushio, Noritoshi; Murata, Kiyoshi
2016-06-01
Renal sinus fat (RSF) is partially characteristic of perivascular adipose tissue; however, RSF volume (RSFV) is associated with visceral adipose tissue volume (VATV). Therefore, the ratio of RSFV to VATV (RSFV/VATV ratio) can distinguish the importance of RSF as an extension of VAT versus its perivascular effects. We assessed the association of the RSFV/VATV ratio with the coronary artery calcification score (CACS) in 189 patients with suspected coronary artery disease. RSFV of the right kidney and VATV were quantified using image data of unenhanced abdominal CT. CACS was measured on unenhanced ECG-gated CT images. This article contains data on: an explanatory scheme of how to measure RSFV on unenhanced abdominal CT; CT indication and exclusion criteria of the study population; the sex-adjusted association of RSFV with risk factors of coronary vascular diseases and metabolic indices; and a multivariate linear regression analysis with CACS as the dependent variable in the total study population. The data are supplemental to our original research article describing the detailed association between the RSFV/VATV ratio and CACS, including sub-group analyses classified by the age of 70, "Renal sinus fat volume on computed tomography in middle-aged patients at risk for cardiovascular disease and its association with coronary artery calcification" Murakami et al. [1].
In vivo validation of a computationally predicted conserved Ath5 target gene set.
Filippo Del Bene
2007-09-01
So far, the computational identification of transcription factor binding sites has been hampered by the complexity of vertebrate genomes. Here we present an in silico procedure to predict target sites of a transcription factor in complex genomes using its binding site. In a first step, sequence comparison of closely related genomes identifies the binding sites in conserved cis-regulatory regions (phylogenetic footprinting). Subsequently, more remote genomes are introduced into the comparison to identify highly conserved and therefore putatively functional binding sites (phylogenetic filtering). When applied to the binding site of atonal homolog 5 (Ath5 or ATOH7), this procedure efficiently filters evolutionarily conserved binding sites out of more than 300,000 instances in a vertebrate genome. We validate a selection of the linked target genes by showing coexpression with, and transcriptional regulation by, Ath5. Finally, chromatin immunoprecipitation demonstrates the occupancy of the target gene promoters by Ath5. Thus, our procedure, applied to whole genomes, is a fast and predictive tool to filter in silico the target genes of a given transcription factor with a defined binding site.
Optimum Setting of Controller Using Soft Computing Techniques for a Chemical System
G. Glandevadhas
2011-01-01
Problem statement: The aim of this study is to present intelligent tuning techniques for PID controllers that are simple and still result in good closed-loop behavior. The idea is to start with a tuned conventional PID controller and replace it with equivalent intelligent controllers based on fuzzy, ANN, genetic and PSO techniques, yielding a fine-tuned nonlinear PID controller most suitable for a nonlinear process such as a continuous stirred tank reactor (CSTR). The performance of the various optimization and intelligent techniques is compared. Approach: In this study we present soft computing techniques to design and tune the PID controller. The objective is to minimise the steady-state error and to obtain the optimum response. Results: Among the conventional PID, fuzzy sliding PID, simulated annealing PID and PSO-tuned PID controllers, the PSO-tuned PID gives the best result for the nonlinear chemical process. Conclusion: With the nonlinear model of the CSTR process, the PSO-tuned PID controller yields the optimum response for both setpoint and load variations.
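As a rough illustration of PSO-based PID tuning, the sketch below optimizes (Kp, Ki, Kd) against the integral-squared error of a step response. The plant is a toy first-order lag, not the paper's CSTR model, and all PSO coefficients are generic textbook choices.

```python
import random

def step_cost(kp, ki, kd, dt=0.01, t_end=5.0):
    """ISE of a unit step response for the first-order plant
    dy/dt = (-y + u)/tau under PID control (illustrative plant)."""
    tau, y, integ, prev_err, ise = 1.0, 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # PID control law
        y += (-y + u) / tau * dt                 # Euler step of the plant
        ise += err * err * dt
        prev_err = err
    return ise

def pso_tune(n_particles=20, iters=40, bounds=(0.0, 10.0)):
    """Particle swarm search over the three PID gains."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(3)] for _ in range(n_particles)]
    vel = [[0.0] * 3 for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [step_cost(*p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pcost[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(3):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (g[d] - pos[i][d]))        # social
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = step_cost(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < step_cost(*g):
                    g = pos[i][:]
    return g
```

For the actual CSTR, step_cost would integrate the nonlinear reactor equations instead of the first-order lag, and the cost could mix ISE with overshoot or settling-time penalties.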
Gui, Luying; He, Jian; Qiu, Yudong; Yang, Xiaoping
2017-01-01
This paper presents a variational level set approach to segment lesions with compact shapes in medical images. We address the problem of segmenting hepatocellular carcinomas, which are usually of various shapes, variable intensities, and weak boundaries. An efficient constraint, called the isoperimetric constraint, which describes the compactness of shapes, is applied in this method. In addition, in order to ensure precise segmentation and stable movement of the level set, a distance regularization is also implemented in the proposed variational framework. Our method is applied to segment various hepatocellular carcinoma regions in Computed Tomography images with promising results. Comparison results also show that the proposed method is more accurate than two other approaches.
3D printing of preclinical X-ray computed tomographic data sets.
Doney, Evan; Krumdick, Lauren A; Diener, Justin M; Wathen, Connor A; Chapman, Sarah E; Stamile, Brian; Scott, Jeremiah E; Ravosa, Matthew J; Van Avermaete, Tony; Leevy, W Matthew
2013-03-22
Three-dimensional printing allows for the production of highly detailed objects through a process known as additive manufacturing. Traditional mold-injection methods to create models or parts have several limitations, the most important of which is the difficulty of making highly complex products in a timely, cost-effective manner (1). However, gradual improvements in three-dimensional printing technology have resulted in both high-end and economy instruments that are now available for the facile production of customized models (2). These printers have the ability to extrude high-resolution objects with enough detail to accurately represent in vivo images generated from a preclinical X-ray CT scanner. With proper data collection, surface rendering, and stereolithographic editing, it is now possible and inexpensive to rapidly produce detailed skeletal and soft tissue structures from X-ray CT data. Even in the early stages of development, the anatomical models produced by three-dimensional printing appeal to both educators and researchers, who can utilize the technology to improve visualization proficiency (3, 4). The real benefits of this method result from the tangible experience a researcher can have with data that cannot be adequately conveyed through a computer screen. The translation of preclinical 3D data to a physical object that is an exact copy of the test subject is a powerful tool for visualization and communication, especially for relating imaging research to students, or those in other fields. Here, we provide a detailed method for printing plastic models of bone and organ structures derived from X-ray CT scans utilizing an Albira X-ray CT system in conjunction with PMOD, ImageJ, Meshlab, Netfabb, and ReplicatorG software packages.
Long, Jinyi; Yu, Zhuliang
2010-01-01
Parameter setting plays an important role in improving the performance of a brain-computer interface (BCI). Currently, parameters (e.g. channels and frequency band) are often selected manually. This is time-consuming, and it is not easy to obtain an optimal combination of parameters for a BCI. In this paper, motor imagery-based BCIs are considered, in which channels and frequency band are key parameters. First, a semi-supervised support vector machine algorithm is proposed for automatically selecting a set of channels with a given frequency band. Next, this algorithm is extended for joint channel-frequency selection. In this approach, both training data with labels and test data without labels are used for training a classifier. Hence it can be used when the training data set is small. Finally, our algorithms are applied to a BCI competition data set. Our data analysis results show that these algorithms are effective for selection of frequency band and channels when the training data set is small. PMID:21886673
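The core idea, using unlabelled trials alongside labelled ones to score channels, can be sketched with a deliberately simplified stand-in: nearest-centroid pseudo-labelling in place of the paper's semi-supervised SVM, and a Fisher-style per-channel score. The feature layout and all names here are assumptions for illustration.

```python
import numpy as np

def pseudo_label(Xl, yl, Xu):
    """One self-training pass: give each unlabelled trial the label of the
    nearest class centroid (a simplified stand-in for a semi-supervised SVM)."""
    c0, c1 = Xl[yl == 0].mean(0), Xl[yl == 1].mean(0)
    d0 = np.linalg.norm(Xu - c0, axis=1)
    d1 = np.linalg.norm(Xu - c1, axis=1)
    return (d1 < d0).astype(int)

def channel_scores(Xl, yl, Xu):
    """Fisher-style separability score per channel, pooling the labelled
    trials with pseudo-labelled unlabelled trials."""
    X = np.vstack([Xl, Xu])
    y = np.concatenate([yl, pseudo_label(Xl, yl, Xu)])
    a, b = X[y == 0], X[y == 1]
    return (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-12)
```

Ranking channels by this score and keeping the top few mimics the automatic channel-selection step; the benefit of the unlabelled pool is largest exactly when, as the abstract notes, the labelled training set is small.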
CAPA (Computer-Assisted Personalized Assignments) in a large university setting
Pascarella, Andrea M.
A systematic study of the online homework system CAPA (Computer-Assisted Personalized Assignments) was carried out in the calculus-based introductory physics course at the University of Colorado, Boulder during the fall 2001 semester (N ≈ 500). This study looked at the effects CAPA had on student learning and attitudes. The students in this class were split into two groups. One group was initially assigned to CAPA; the other group was assigned to traditional homework. At mid-semester the groups switched identities (the students who began the course using CAPA had to complete traditional homework). Exam scores and Force and Motion Concept Evaluation gains showed no statistically significant differences between the groups. Written quizzes and exams were collected from a smaller sample of students and analyzed using a problem-solving rubric. No statistically significant differences in the problem-solving abilities of the groups were seen. Student opinions about the effect each homework type had on their learning were elicited. Students with non-expert-like epistemologies felt that CAPA was a better learning tool, while students with expert-like epistemologies believed that traditional homework was a better learning tool. Problem-solving interviews were conducted weekly with 9 students. From the analysis of these data a problem-solving characterization of students using CAPA and traditional homework was inferred. Four types of problem solvers emerged: the CAPA Thinker, Traditional Thinker, CAPA Guesser, and Traditional Guesser. Thinkers tend to have expert-like epistemological beliefs. Guessers generally have non-expert-like epistemologies. On quantitative problems traditional homework promoted metacognitive processes in the Traditional Thinker and CAPA hindered self-evaluation among CAPA Thinkers. On qualitative problems, the opposite was observed to occur. When the students switched homework types at mid-semester it was expected that CAPA Thinkers would become
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Chughtai, Aamer; Patel, Smita; Wei, Jun; Cascade, Philip N.; Kazerooni, Ella A.
2009-02-01
Computed tomographic pulmonary angiography (CTPA) has been reported to be an effective means for clinical diagnosis of pulmonary embolism (PE). We are developing a computer-aided diagnosis (CAD) system for assisting radiologists in the detection of pulmonary embolism in CTPA images. The pulmonary vessel tree is extracted based on the analysis of eigenvalues of Hessian matrices at multiple scales, followed by 3D hierarchical EM segmentation. A multi-prescreening method is designed to identify suspicious PEs along the extracted vessels. A linear discriminant analysis (LDA) classifier with feature selection is then used to reduce false positives (FPs). Two data sets of 59 and 69 CTPA PE cases were randomly selected from patient files at the University of Michigan (UM) and the PIOPED II study, respectively, and used as independent training and test sets. The PEs that were identified by three experienced thoracic radiologists were used as the gold standard. The detection performance of the CAD system was assessed by free-response receiver operating characteristic analysis. The results indicated that our PE detection system can achieve a sensitivity of 80% at 18.9 FPs/case on the PIOPED cases when the LDA classifier was trained with the UM cases. The test sensitivity with the UM cases is 80% at 22.6 FPs/case when the LDA classifier was trained with the PIOPED cases.
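The FP-reduction step rests on a standard two-class Fisher LDA, which can be sketched generically. This is a textbook formulation, not the CAD system's code; the feature dimensions, regularization, and threshold choice here are assumptions.

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class Fisher LDA: w = Sw^-1 (m1 - m0), with a midpoint threshold
    on the projected class means (a generic stand-in for the FP classifier)."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    # small ridge term keeps Sw invertible when features are correlated
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    thr = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
    return w, thr

def predict(w, thr, X):
    return (X @ w > thr).astype(int)
```

In the CAD pipeline the two classes would be true PEs versus prescreening false positives, and sweeping the threshold instead of fixing it at the midpoint is what traces out the FROC curve.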
Schumann, Anja; John, Ulrich; Baumeister, Sebastian E; Ulbricht, Sabina; Rumpf, Hans-Jürgen; Meyer, Christian
2008-02-01
This study reports the outcome of a randomized controlled trial testing a computer-tailored smoking cessation intervention based on the transtheoretical model in a general population setting in Germany. Participants of the smoking intervention study were recruited from an existing general population health examination survey in a university hospital. The sample consisted of 611 current and former smokers at baseline, and of 485 participants in the core group of baseline daily cigarette smokers. Follow-ups were conducted 6, 12, 18, and 24 months after baseline. The intervention was designed for both current and former smokers, involved up to three individualized feedback letters, and was created using expert-system technology. Based on 7-day point-prevalence abstinence and 6-month prolonged abstinence as the outcome measures, the study identified no significant differences between the intervention and control groups. Modeling the full longitudinal data in generalized estimation equation analyses, using different nonresponse procedures, and adjusting for covariates did not alter the results. We conclude that the computer-tailored transtheoretical model-based smoking cessation intervention, as delivered in this study and in this special setting, was ineffective.
Rational configuration of a computerized cotton-blending system
Kang Qiang
2014-01-01
Raw cotton accounts for about 70% of the production cost of cotton yarn, so rational cotton blending is a prerequisite for textile enterprises to guarantee product quality. This paper reviews the history and current state of computer-aided automatic cotton blending in China, analyses the impact of the reform of the cotton quality inspection system on cotton spinning enterprises, and describes the functions of a computerized cotton-blending system and the workflow of automatic blending. Configuring the computerized cotton-blending system rationally according to the raw material situation is the key for cotton spinning enterprises to improve production efficiency and reduce production costs.
P. MacBride
The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization and conversion to RAW format; the samples were then run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...
K. Ide
2002-01-01
In this paper we develop analytical and numerical methods for finding special hyperbolic trajectories that govern the geometry of Lagrangian structures in time-dependent vector fields. The vector fields (or velocity fields) may have arbitrary time dependence and be realized only as data sets over finite time intervals, where space and time are discretized. While the notion of a hyperbolic trajectory is central to dynamical systems theory, much of the theoretical development for Lagrangian transport proceeds under the assumption that such a special hyperbolic trajectory exists. This brings in new mathematical issues that must be addressed in order for Lagrangian transport theory to be applicable in practice, i.e. how to determine whether or not such a trajectory exists and, if it does exist, how to identify it in a sequence of instantaneous velocity fields. We address these issues by developing the notion of a distinguished hyperbolic trajectory (DHT). We develop an existence criterion for certain classes of DHTs in general time-dependent velocity fields, based on the time evolution of Eulerian structures that are observed in individual instantaneous fields over the entire time interval of the data set. We demonstrate the concept of DHTs in inhomogeneous (or "forced") time-dependent linear systems and develop a theory and analytical formula for computing DHTs. Throughout this work the notion of linearization is very important. This is not surprising since hyperbolicity is a "linearized" notion. To extend the analytical formula to more general nonlinear time-dependent velocity fields, we develop a series of coordinate transforms including a type of linearization that is not typically used in dynamical systems theory. We refer to it as Eulerian linearization, which is related to the frame independence of DHTs, as opposed to the Lagrangian linearization typical in dynamical systems theory, which is used in the computation of Lyapunov exponents. We
Hassan Khassehkhan
2016-09-01
We study a previously introduced mathematical model of amensalistic control of the foodborne pathogen Listeria monocytogenes by the lactic acid bacterium Lactococcus lactis, which is generally regarded as safe, in a chemostat setting under nutrient-rich growth conditions. The control agent produces lactic acids and thus affects the pH in the environment such that it becomes detrimental to the pathogen, while it is much more tolerant to these self-inflicted environmental changes itself. The mathematical model consists of five nonlinear ordinary differential equations for both bacterial species, the concentration of lactic acids, the pH and malate. The model is algebraically too involved to allow a comprehensive, rigorous qualitative analysis. Therefore, we conduct a computational study. Our results imply that, depending on the growth characteristics of the medium in which the bacteria are cultured, the pathogen can survive in an intermediate flow regime but will be eradicated for slower flow rates and washed out for higher flow rates.
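The washout effect of the flow rate can be demonstrated on a drastically reduced caricature of such a model: a single species with Monod growth in a chemostat. This is not the authors' five-ODE system; the equations, parameter values, and Euler integration below are assumptions chosen only to exhibit the dilution-rate behaviour the abstract describes.

```python
def simulate(D, mu_max=0.5, K=0.2, s_in=1.0, yield_=0.5, dt=0.01, t_end=400.0):
    """Euler integration of a one-species chemostat with Monod growth:
    dx/dt = (mu(s) - D) x,  ds/dt = D (s_in - s) - mu(s) x / yield.
    Returns the final biomass x; washout occurs when the dilution rate D
    exceeds the achievable growth rate mu_max."""
    x, s = 0.1, s_in
    for _ in range(int(t_end / dt)):
        mu = mu_max * s / (K + s)
        dx = (mu - D) * x
        ds = D * (s_in - s) - mu * x / yield_
        x = max(x + dt * dx, 0.0)
        s = max(s + dt * ds, 0.0)
    return x
```

Running it at a slow and a fast dilution rate reproduces the qualitative regimes: survival at low flow, washout at high flow.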
Luís Ronan Marquez Ferreira de Souza
2007-03-01
CONTEXT AND OBJECTIVE: Recent studies have shown noncontrast computed tomography (NCT) to be more effective than ultrasound (US) for imaging acute ureterolithiasis. However, to our knowledge, there are few studies directly comparing these techniques in an emergency teaching hospital setting. The objectives of this study were to compare the diagnostic accuracy of US and NCT performed by senior radiology residents for diagnosing acute ureterolithiasis, and to assess interobserver agreement on tomography interpretations by residents and experienced abdominal radiologists. DESIGN AND SETTING: Prospective study of 52 consecutive patients, who underwent both US and NCT within an interval of eight hours, at Hospital São Paulo. METHODS: US scans were performed by senior residents and read by experienced radiologists. NCT scan images were read by senior residents, and subsequently by three abdominal radiologists. The interobserver variability was assessed using the kappa statistic. RESULTS: Ureteral calculi were found in 40 out of 52 patients (77%). US presented sensitivity of 22% and specificity of 100%. When collecting system dilatation was present, US demonstrated 73% sensitivity and 82% specificity. The interobserver agreement in NCT analysis was very high with regard to identification of calculi, collecting system dilatation and stranding of perinephric fat. CONCLUSIONS: US has limited value for identifying ureteral calculi in comparison with NCT, even when collecting system dilatation is present. Residents and abdominal radiologists demonstrated excellent agreement rates for ureteral calculi, identification of collecting system dilatation and stranding of perinephric fat on NCT.
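The kappa statistic used to assess interobserver variability is Cohen's kappa, which is standard and can be sketched directly; the example ratings below are hypothetical.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical readings of the same cases:
    (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e the
    agreement expected by chance from each rater's marginal frequencies."""
    assert len(a) == len(b)
    n = len(a)
    cats = sorted(set(a) | set(b))
    po = sum(x == y for x, y in zip(a, b)) / n
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    if pe == 1:          # degenerate: both raters always use one category
        return 1.0
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and 0 for chance-level agreement, which is why it is preferred over raw percent agreement when reporting reader studies like this one.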
Bieberle, M; Hampel, U
2015-06-13
Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Matthias Kasemann
Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier-1 and Tier-2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on "Use Cases for Start-up of pp Data-Taking" with recommendations and a set of tests to be performed for trigger rates much higher than the ...
K. ten Haaf (Kevin); M.C. Tammemagi (Martin); Bondy, S.J. (Susan J.); C.M. van der Aalst (Carlijn); Gu, S. (Sumei); McGregor, S.E. (S. Elizabeth); Nicholas, G. (Garth); H.J. de Koning (Harry); L.F. Paszat (Lawrence F.)
2017-01-01
Background: The National Lung Screening Trial (NLST) results indicate that computed tomography (CT) lung cancer screening for current and former smokers with three annual screens can be cost-effective in a trial setting. However, the cost-effectiveness in a population-based setting with
Hong, Changjin; Manimaran, Solaiappan; Johnson, William Evan
2014-01-01
Quality control and read preprocessing are critical steps in the analysis of data sets generated from high-throughput genomic screens. In the most extreme cases, improper preprocessing can negatively affect downstream analyses and may lead to incorrect biological conclusions. Here, we present PathoQC, a streamlined toolkit that seamlessly combines the benefits of several popular quality control software approaches for preprocessing next-generation sequencing data. PathoQC provides a variety of quality control options appropriate for most high-throughput sequencing applications. PathoQC is primarily developed as a module in the PathoScope software suite for metagenomic analysis. However, PathoQC is also available as an open-source Python module that can run as a stand-alone application or can be easily integrated into any bioinformatics workflow. PathoQC achieves high performance by supporting parallel computation and is an effective tool that removes technical sequencing artifacts and facilitates robust downstream analysis. The PathoQC software package is available at http://sourceforge.net/projects/PathoScope/.
Finley, Gail T.
1988-01-01
This report covers the study of the relational database implementation in the NASCAD computer program system. The existing system is used primarily for computer aided design. Attention is also directed to a hidden-surface algorithm for final drawing output.
Stratospheric age of air computed with trajectories based on various 3D-Var and 4D-Var data sets
M. P. Scheele
2005-01-01
The age of stratospheric air is computed with a trajectory model, using ECMWF ERA-40 3D-Var and operational 4D-Var winds. Analysis as well as forecast data are used. In the latter case successive forecast segments are put together to get a time series of the wind fields. This is done for different forecast segment lengths. The sensitivity of the computed age to the forecast segment length and assimilation method is studied, and the results are compared with observations and with results from a chemistry transport model that uses the same data sets. A large number of backward trajectories are started in the stratosphere, and from the fraction of these trajectories that has passed the tropopause the age of air is computed. First, for ten different data sets 50-day backward trajectories starting in the tropical lower stratosphere are computed. The results show that in this region the computed cross-tropopause transport decreases with increasing forecast segment length. Next, for three selected data sets (3D-Var 24-h and 4D-Var 72-h forecast segments, and 4D-Var analyses) 5-year backward trajectories are computed that start all over the globe at an altitude of 20 km. For all data sets the computed ages of air in the extratropics are smaller than the observation-based age. For 4D-Var forecast series they are closest to the observation-based values, but still 0.5-1.5 years too small. Compared to the difference in age between the results for the different data sets, the difference in age between the trajectory and the chemistry transport model results is small.
Hsu, Christina M. L. [Department of Biomedical Engineering, Duke University, Durham, North Carolina 27708 and Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705 (United States); Palmeri, Mark L. [Department of Biomedical Engineering, Duke University, Durham, North Carolina 27708 (United States); Department of Anesthesiology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Segars, W. Paul [Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705 (United States); Department of Radiology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Biomedical Engineering, Duke University, Durham, North Carolina 27708 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Veress, Alexander I. [Department of Mechanical Engineering, University of Washington, Seattle, Washington 98195 (United States); Dobbins, James T. III [Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705 (United States); Department of Radiology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Biomedical Engineering, Duke University, Durham, North Carolina 27708 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Department of Physics, Duke University, Durham, North Carolina 27708 (United States)
2013-04-15
Purpose: The authors previously reported on a three-dimensional computer-generated breast phantom, based on empirical human image data, including a realistic finite-element based compression model that was capable of simulating multimodality imaging data. The computerized breast phantoms are a hybrid of two phantom generation techniques, combining empirical breast CT (bCT) data with flexible computer graphics techniques. However, to date, these phantoms have been based on single human subjects. In this paper, the authors report on a new method to generate multiple phantoms, simulating additional subjects from the limited set of original dedicated breast CT data. The authors developed an image morphing technique to construct new phantoms by gradually transitioning between two human subject datasets, with the potential to generate hundreds of additional pseudoindependent phantoms from the limited bCT cases. The authors conducted a preliminary subjective assessment with a limited number of observers (n= 4) to illustrate how realistic the simulated images generated with the pseudoindependent phantoms appeared. Methods: Several mesh-based geometric transformations were developed to generate distorted breast datasets from the original human subject data. Segmented bCT data from two different human subjects were used as the 'base' and 'target' for morphing. Several combinations of transformations were applied to morph between the 'base' and 'target' datasets such as changing the breast shape, rotating the glandular data, and changing the distribution of the glandular tissue. Following the morphing, regions of skin and fat were assigned to the morphed dataset in order to appropriately assign mechanical properties during the compression simulation. The resulting morphed breast was compressed using a finite element algorithm and simulated mammograms were generated using techniques described previously. Sixty-two simulated mammograms
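The gradual transition between the 'base' and 'target' datasets can be illustrated by the simplest such transform: a linear morph between corresponding vertices. This is a deliberate simplification of the mesh-based geometric transformations described above, with hypothetical point lists standing in for the segmented bCT meshes.

```python
def morph(base_pts, target_pts, alpha):
    """Linear vertex morph between corresponding 'base' and 'target' point
    sets: alpha = 0 returns the base geometry, alpha = 1 the target, and
    intermediate alphas yield new in-between (pseudo-independent) shapes."""
    return [tuple(b + alpha * (t - b) for b, t in zip(p, q))
            for p, q in zip(base_pts, target_pts)]
```

Sampling many alpha values between two subjects, possibly combined with the rotations and redistribution transforms mentioned in the text, is what lets a small set of bCT cases seed a much larger phantom population.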
Andersen, Erling B.
A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…
Mechling, Linda C.; Ortega-Hurndon, Fanny
2007-01-01
This study evaluated the effectiveness of computer-based video instruction (CBVI) to teach three young adults with moderate intellectual disabilities to perform complex, multiple step, job tasks in a generalized setting. A multiple probe design across three job tasks and replicated across three students was used to evaluate the effectiveness of…
Siegelmann-Danieli, Nava; Farkash, Ariel; Katzir, Itzhak; Vesterman Landes, Janet; Rotem Rabinovich, Hadas; Lomnicky, Yossef; Carmeli, Boaz; Parush-Shear-Yashuv, Naama
2016-01-01
Background: Randomized clinical trials constitute the gold standard for evaluating new anti-cancer therapies; however, real-life data are key in complementing clinically useful information. We developed a computational tool for real-life data analysis and applied it to the metastatic colorectal cancer (mCRC) setting. This tool addressed the impact of oncology/non-oncology parameters on treatment patterns and clinical outcomes. Methods: The developed tool enables extraction of any computerized information including comorbidities and use of drugs (oncological/non-oncological) per individual HMO member. The study in which we evaluated this tool was a retrospective cohort study that included Maccabi Healthcare Services members with mCRC receiving bevacizumab with fluoropyrimidines (FP), FP plus oxaliplatin (FP-O), or FP plus irinotecan (FP-I) in the first line between 9/2006 and 12/2013. Results: The analysis included 753 patients, of whom 15.4% underwent subsequent metastasectomy (the Surgery group). For the entire cohort, median overall survival (OS) was 20.5 months; in the Surgery group, median duration of bevacizumab-containing therapy (DOT) pre-surgery was 6.1 months; median OS was not reached. In the Non-surgery group, median OS and DOT were 18.7 and 11.4 months, respectively; no significant OS differences were noted between FP-O and FP-I, whereas FP use was associated with shorter OS (12.3 months; p < 0.002; notably, these patients were older). Patients who received both FP-O- and FP-I-based regimens achieved numerically longer OS vs. those who received only one of these regimens (22.1 [19.9–24.0] vs. 18.9 [15.5–21.9] months). Among patients assessed for wild-type KRAS and treated with a subsequent anti-EGFR agent, OS was 25.4 months and 18.7 months for 124 treated vs. 37 non-treated patients (non-significant). Cox analysis (controlling for age and gender) identified several non-oncology parameters associated with poorer clinical outcomes including concurrent use of
Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean W.; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Wei, Jun; Patel, Smita
2015-03-01
We have developed a computer-aided detection (CAD) system for assisting radiologists in detection of pulmonary embolism (PE) in computed tomographic pulmonary angiographic (CTPA) images. The CAD system includes stages of pulmonary vessel segmentation, prescreening of PE candidates and false positive (FP) reduction to identify suspicious PEs. The system was trained with 59 CTPA PE cases collected retrospectively from our patient files (UM set) with IRB approval. Five feature groups containing 139 features that characterized the intensity texture, gradient, intensity homogeneity, shape, and topology of PE candidates were initially extracted. Stepwise feature selection guided by simplex optimization was used to select effective features for FP reduction. A linear discriminant analysis (LDA) classifier was formulated to differentiate true PEs from FPs. The purpose of this study is to evaluate the performance of our CAD system using an independent test set of CTPA cases. The test set consists of 50 PE cases from the PIOPED II data set collected by multiple institutions with access permission. A total of 537 PEs were manually marked by experienced thoracic radiologists as the reference standard for the test set. The detection performance was evaluated by free-response receiver operating characteristic (FROC) analysis. The FP classifier obtained a test Az value of 0.847 and the FROC analysis indicated that the CAD system achieved an overall sensitivity of 80% at 8.6 FPs/case for the PIOPED test set.
M. Kasemann
Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier-2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers at the end of November; it will take about two weeks. The Computing Shifts procedure was tested at full scale during this period and proved to be very efficient: 30 Computing Shift Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...
M. Kasemann
Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...
I. Fisk
2011-01-01
Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...
P. McBride
The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...
Collis, Betty; Margaryan, Anoush
2004-01-01
Business needs in many corporations call for learning outcomes that involve problem solutions, and creating and sharing new knowledge within workplace situations that may involve collaboration among members of a team. We argue that work-based activities (WBA) and computer-supported collaborative learning (CSCL) are appropriate components for…
I. Fisk
2013-01-01
Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...
Atkinson, Paul
2011-01-01
The pixelated rectangle we spend most of our day staring at in silence is not the television, as many long feared, but the computer: the ubiquitous portal of our work and personal lives. At this point, the computer is so common we hardly notice it in our view. It is difficult to envision that not that long ago it was a gigantic, room-sized structure accessible only to a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati
Real-time Insert Algorithm for Computing Convex Hull of Finite Planar Sets
刘萍
2013-01-01
This paper discusses a real-time insertion algorithm for computing the convex hull of a planar point set. Based on the Graham scan algorithm, the algorithm checks the sequential turn of 3 points. The paper proves that when the N points of S enter the system as a stream, the number of turn tests needed to compute the convex hull of S is less than 3N.
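The 3-point turn test at the heart of the algorithm above is the standard cross-product orientation check from the Graham scan. As an illustrative sketch only (a monotone-chain variant, not the paper's implementation), this shows how repeated turn tests maintain a convex hull:

```python
def cross(o, a, b):
    """Orientation ("turn") test: >0 left turn, <0 right turn, 0 collinear."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone-chain variant of the Graham scan.

    Returns hull vertices in counter-clockwise order.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:  # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()  # last point makes a non-left turn: discard it
        lower.append(p)
    for p in reversed(pts):  # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints appear in both chains
```

Each point is pushed once and popped at most once per chain, which is the kind of bounded test count the abstract's 3N result formalizes for the streaming setting.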
陈波
2011-01-01
Analysis and verification of the interface interaction behaviour of composite services is an important issue in service computing. This paper takes interface automata as the model of composite-service interfaces and, by introducing the composition environment into the analysis, proposes the concept of strong and weak compatibility of service interface interaction under a given environment, together with a criterion expression for compatibility checking. Compatibility of the service interfaces is decided by traversing the composite-service interface model with reachability analysis and checking whether the criterion expression is satisfied; compatibility between a service and its environment is checked at the same time.
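The reachability analysis mentioned above reduces, at its core, to exploring which states of the composed interface model can be reached from the initial state. A minimal, hypothetical sketch of that traversal step (the explicit state graph and names are illustrative, not the paper's automaton model):

```python
from collections import deque

def reachable(transitions, start):
    """Breadth-first reachability over an explicit state graph.

    transitions: dict mapping a state to the states reachable in one step.
    Returns the set of all states reachable from `start`.
    """
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen
```

In a compatibility check along these lines, one would traverse the product of the service automata and evaluate the criterion expression on every reachable state.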
Jacobus A Venter
2002-04-01
A computer-based teaching programme (CBTP) was developed after a comprehensive review of the literature with regard to learning, learning theories, traditional and student-centred styles and approaches, teaching through hypermedia and computer-based teaching. The aim was to implement a teaching approach in which student nurses are given the opportunity to accept responsibility for their learning, to identify learning needs and to follow a deep approach to learning. *Please note: This is a reduced version of the abstract. Please refer to PDF for full text.
I. Fisk
2010-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...
M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley
Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...
Farah Ahmad
2010-06-01
Background: Intimate partner violence is a major public health issue, particularly among women. Abused women experience many acute and chronic health consequences resulting in frequent healthcare visits. There exists a system-level opportunity to intervene, yet abused women refrain from spontaneous disclosure of their experiences of victimization due to embarrassment. Meanwhile providers often fail to ask due to lack of time, priority of acute medical problems and discomfort. Missed opportunities to detect intimate partner violence and control (IPVC can be availed by computer-assisted interactive screening.
Aim: The purpose of this paper is to critically review current scientific knowledge on the use of enhanced Web 2.0 interactive computer-assisted screening for IPVC in clinical settings.
Methods: A systematic review of peer-reviewed published literature was conducted using Medline and PsychInfo data bases from 1996 to 2010. Eligibility criteria were applied to the identified records. Additional studies were identified by searching reference list and contacting authors. Eight eligible studies were appraised for the study characteristics and IPVC related outcomes for the process-of-care, patient, and provider.
Results: The selected studies (descriptive, randomized trial, and qualitative) were conducted in the emergency and family medicine settings within two programs of research which used a similar interactive computer screen, Promote Health. The reviewed evidence supports the effectiveness of computer screening for improving provider-patient communication on IPVC in both settings and on compromised mental health in family medicine. However, the management of detected cases of IPVC by time-pressed frontline clinicians needs a more supportive environment. The need for such system-level support is greater for the emergency setting.
Lucena, Joaquin; Mora, Esther; Rodriguez, Lucia; Muñoz, Mariela; Cantin, Mario G; Fonseca, Gabriel M
2016-09-01
Confirming the nature and forensic significance of questioned skeletal material submitted to a medico-legal setting is a relatively common procedure, although not without difficulties when the remains are fragmented or burned. Different methodologies have been described for this purpose, many of them invasive, time- and money-consuming, or dependent on the availability of the analytical instrument. We present a case in which skeletal material with unusual conditions of preservation and a curious discovery history was sent to a medico-legal setting to determine its human/nonhuman origin. A combined strategy of imaging procedures (macroscopic, radiographic and cone-beam computed tomography, CBCT, technology) was performed as a non-invasive and rapid method to assess the nonhuman nature of the material, specifically of pig (Sus scrofa) origin. This hypothesis was later confirmed by DNA analysis. CBCT data sets provide accurate three-dimensional reconstructions, which demonstrates their reliable use as a forensic tool.
Shahmohammadi Beni, Mehrdad; Yu, K N
2015-12-14
A promising application of plasma medicine is to treat living cells and tissues with cold plasma. In cold plasmas, the fraction of neutrals dominates, so the carrier gas can be considered the main component. In many realistic situations, the treated cells are covered by a fluid. The present paper developed models to determine the temperature of the fluid at the positions of the treated cells. Specifically, the authors developed a three-phase-interaction model coupled with heat transfer to examine the injection of the helium carrier gas into water and to investigate both the fluid dynamics and heat transfer output variables, such as temperature, in three phases, i.e., air, helium gas, and water. Our objective was to develop a model to perform complete fluid dynamics and heat transfer computations to determine the temperature at the surface of living cells. Different velocities and plasma temperatures were also investigated using the finite element method, and the model was built using the COMSOL Multiphysics software. Using the current model to simulate plasma injection into such systems, the authors were able to investigate the temperature distributions in the domain, as well as at the surface and bottom boundary of the medium in which cells were cultured. The temperature variations were computed at small time intervals to analyze the temperature increase in cell targets that could be highly temperature sensitive. Furthermore, the authors were able to investigate the volume of the plasma plume and its effects on the average temperature of the medium layer/domain. Variables such as temperature and velocity at the cell layer could be computed, and the variations due to different plume sizes could be determined. The current models would be very useful for the future design of plasma medicine devices and procedures involving cold plasmas.
I. Fisk
2010-01-01
Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...
P. McBride
It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...
M. Kasemann
Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...
M. Kasemann
CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...
I. Fisk
2011-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...
I. Fisk
2012-01-01
Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...
The Study of the Reachable Range of the Urban Pedestrian System at the Morning Peak
郑柯; 吴玮
2013-01-01
This paper studies actual survey data on urban residents' travel by walking, bus and bicycle during the morning peak. It analyses travel time and route on foot compared with bus and bicycle, derives the relationship of urban travel time among walking, bus and bicycle, and then identifies the reachable range of urban travel by walking at the morning peak. These findings provide a theoretical reference for the transportation planning of urban pedestrian systems.
Iriki, Atsushi
2016-03-01
"Language-READY brain" in the title of this article [1] seems to be the expression that the author prefers to use to illustrate his theoretical framework. The usage of the term "READY" appears to be of extremely deep connotation, for three reasons. Firstly, of course, it needs a "principle" - the depth and the width of the computational theory depicted here is as expected from the author's reputation. However, "readiness" implies that it is much more than just "a theory". That is, such a principle is not static, but rather has dynamic properties, which are ready to gradually proceed to flourish once brains are put in adequate conditions to make time progressions - namely, evolution and development. So the second major connotation is that this article brought in the perspectives of comparative primatology as a tool to relativise the language-realizing human brains among other animal species, primates in particular, in the context of the evolutionary time scale. The third connotation lies in the context of the developmental time scale. The author claims that it is the interaction of the newborn with its caretakers, namely its mother and other family or social members in its ecological conditions, that brings the brain mechanism subserving the language faculty to really mature to its final completion. Taken together, this article proposes computational theories and mechanisms of Evo-Devo-Eco interactions for language acquisition in the human brain.
Dealing with typical values via Atanassov's intuitionistic fuzzy sets
Szmidt, Eulalia; Kacprzyk, Janusz
2010-07-01
This paper is an improved and extended version of our previous work on typicality in terms of Atanassov's intuitionistic fuzzy sets (to be called A-IFSs, for short). We follow the line of reasoning known from the psychological and cognitive sciences, in particular from linguistic experiments, and verify how those results work in the case of classification - a typical problem in computer science, decision sciences, etc. Our considerations concentrate on a typical example discussed in the cognitive sciences - we investigate to which extent a linguistic representation in a psychological space (we start from nominal data - names are assigned to objects as labels) succeeds in predicting categories via A-IFSs. First, we consider a model of categories with a geometrical centroid model in which the similarity is defined in terms of a distance to centroids. Next, we verify if the extreme ideals, which are important in cognitive processes when categories are learnt in the presence of an alternative (contrast) category, give comparative results. Finally, we discuss if the 'reachable extreme ideals' and 'dominating frequency centres' give comparative results. We show that A-IFSs make it possible to reflect positive and negative information via the concepts of membership and non-membership. Although the paper presents ongoing research, the results obtained are promising and point out the usefulness and strength of A-IFSs as a tool to account for more aspects of vague data and information. Based on 'On Some Typical Values for A-IFS', by E. Szmidt and J. Kacprzyk, which appeared in the Proceedings of the 4th International IEEE Conference on Intelligent Systems IS'08, pp. 13-2-13-7. There is currently a discussion on the appropriateness of the name IFS introduced by Dubois et al. (2005), and also Atanassov's (2005) response; this is, however, beyond the scope of this paper, which will not be dealing with this issue.
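An A-IFS element carries a membership degree, a non-membership degree, and the hesitation margin left over between them. The sketch below illustrates this three-term representation together with a normalized Hamming-style distance, one common choice for the distance-to-centroid comparison described above; it is an illustrative sketch, not the authors' exact measure:

```python
def hesitation(mu, nu):
    """A-IFS hesitation margin pi = 1 - mu - nu, with mu + nu <= 1."""
    assert 0.0 <= mu and 0.0 <= nu and mu + nu <= 1.0
    return 1.0 - mu - nu

def aifs_distance(x, y):
    """Normalized Hamming-style distance between two A-IFS elements.

    x, y are (membership, non-membership) pairs; the hesitation margins
    are derived so all three terms enter the comparison.
    """
    (m1, n1), (m2, n2) = x, y
    p1, p2 = 1.0 - m1 - n1, 1.0 - m2 - n2
    return 0.5 * (abs(m1 - m2) + abs(n1 - n2) + abs(p1 - p2))
```

Classification against centroids then amounts to assigning an object to the category whose centroid minimizes this distance.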
Guru Prasad M S
2017-01-01
A huge amount of Big Data is constantly arriving with the rapid development of business organizations, which are interested in extracting knowledgeable information from the collected data. Frequent item mining of Big Data helps with business decisions and with providing high-quality service. Traditional frequent item set mining algorithms are not effective on Big Data and lead to high computation time. Apache Hadoop MapReduce is the most popular data-intensive distributed computing framework for large-scale data applications such as data mining. In this paper, the author identifies the factors affecting the performance of frequent item mining algorithms based on Hadoop MapReduce technology and proposes an approach for optimizing the performance of large-scale frequent item set mining. The experimental results show the potential of the proposed approach: performance is significantly optimized for large-scale data mining with the MapReduce technique. The author believes that it makes a valuable contribution to the high-performance computing of Big Data.
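As a toy illustration of the counting that a MapReduce frequent-itemset job distributes (the "map" phase emits candidate itemsets per transaction, the "reduce" phase aggregates their counts), here is a single-machine sketch; it is not the paper's Hadoop implementation, and the function names are illustrative:

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, size, min_support):
    """Count all itemsets of a fixed size and keep those meeting support.

    The inner loop plays the role of the map phase (emit candidate
    itemsets); the Counter plays the role of the reduce phase (sum counts).
    """
    counts = Counter()
    for t in transactions:
        for items in combinations(sorted(set(t)), size):
            counts[items] += 1
    return {items: c for items, c in counts.items() if c >= min_support}
```

In a real Hadoop job the transactions are split across mappers and the per-itemset counts are merged by reducers keyed on the itemset, but the arithmetic is the same.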
Ali Dashti
This paper presents an implementation of brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large high-dimensional data clouds. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multiple levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs and bring a hitherto impossible [Formula: see text]-NNG generation for a dataset of twenty million images with 15 k dimensionality into the realm of practical possibility.
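The brute-force exact k-NNG is conceptually simple: each point's k nearest neighbors are found by exhaustive distance comparison against every other point, which is exactly the all-pairs computation the GPU version parallelizes. A single-threaded reference sketch (illustrative only, not the paper's CUDA code):

```python
def knn_graph(points, k):
    """Brute-force exact k-NNG: for each point, the indices of its
    k nearest neighbors by squared Euclidean distance (O(n^2 d) work,
    the part a GPU implementation distributes across threads)."""
    graph = []
    for i, p in enumerate(points):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(p, q)), j)
            for j, q in enumerate(points) if j != i
        )
        graph.append([j for _, j in dists[:k]])
    return graph
```

The n-squared distance matrix is embarrassingly parallel, which is why the method maps well onto multiple GPUs and cluster nodes.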
Yoshikawa, Shushi; Okada, Masahiro; Kondo, Hiroshi; Sou, Hironobu; Murakami, Takamichi; Kanematsu, Masayuki; Ichikawa, Tomoaki; Hayakawa, Akiko; Shiosakai, Kazuhito; Awai, Kazuo; Yoshimitsu, Kengo; Yamashita, Yasuyuki
2014-08-01
Alongside current improvements in the performance of computed tomography (CT) systems, there has been an increase in the use of bolus tracking (BT) to acquire arterial dominant phase images for dynamic CT at optimal timing for the characterization of focal liver lesions. However, optimal BT settings have not been established. In the present study, methods of contrast enhancement and BT setting values were evaluated using a multicenter post-marketing surveillance study on contrast media used in patients with chronic hepatitis and/or cirrhosis who had undergone liver dynamic CT for diagnosis of hepatocellular carcinoma, conducted by Daiichi Sankyo Co., Ltd. The results suggested the contrast injection method to be clinically useful if the amount of iodine per kilogram of body weight is set at 600 mg/kg and the injection duration at 30 s. To achieve a good arterial dominant scan under conditions where the injection duration is fixed at 30 s or the average injection duration is 34 s using the fixed injection rate method, the scan delay time should ideally be set to longer than 13 s. If using the BT method, we recommend that the BT settings be revalidated in reference to our results.
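Under the protocol suggested above (600 mg of iodine per kg of body weight delivered over a 30 s injection), the required contrast volume and injection rate follow by simple arithmetic. A small worked sketch; the 300 mgI/mL contrast concentration is an assumed example value, not a figure from the study:

```python
def injection_protocol(body_weight_kg, iodine_per_kg_mg=600.0,
                       duration_s=30.0, iodine_conc_mg_per_ml=300.0):
    """Volume (mL) and rate (mL/s) for a weight-based iodine dose.

    iodine_conc_mg_per_ml is an illustrative assumption (300 mgI/mL);
    the dose and duration defaults follow the abstract's protocol.
    """
    total_iodine_mg = body_weight_kg * iodine_per_kg_mg   # total iodine dose
    volume_ml = total_iodine_mg / iodine_conc_mg_per_ml   # contrast volume
    rate_ml_per_s = volume_ml / duration_s                # injection rate
    return volume_ml, rate_ml_per_s
```

For a 60 kg patient this gives 36,000 mg of iodine, i.e. 120 mL of the assumed contrast agent injected at 4 mL/s over the 30 s duration.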
M. Kasemann
Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...
Contributions from I. Fisk
2012-01-01
Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...
I. Fisk
2012-01-01
Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...
2010-01-01
Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...
I. Fisk
2013-01-01
Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites. Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month. Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB. Figure 3: The volume of data moved between CMS sites in the last six months The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...
A Method of Computing Minimal Hitting Sets Using CSP
王艺源; 欧阳丹彤; 张立明; 张永刚
2015-01-01
Model-based diagnosis (MBD) is a challenging problem in artificial intelligence (AI) that encompasses many key AI problems, and its study drives progress across the whole field. In MBD, candidate diagnoses are generally described by all minimal hitting sets of all minimal conflict sets, and computing the minimal hitting sets is one of the core problems in this process. In addition, many practical problems, such as the student course selection problem, can be converted into minimal hitting set problems. This paper proposes a new method that converts minimal hitting set problems into constraint satisfaction problems and then calls a state-of-the-art CSP solver to compute them, extending the application areas of constraint satisfaction problems. Moreover, the concepts of hard-conflict sets and soft-conflict sets are proposed for the first time. The method is then applied to compute minimal hitting sets with particular features: shorter than a fixed length, excluding specific elements, and including hard-conflict sets and soft-conflict sets. Experimental results show that the proposed method is easy to implement, highly extensible, and efficient for these specific types of minimal hitting set problems.
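The core reduction described in the abstract above can be illustrated with a tiny brute-force constraint formulation. This is a hypothetical sketch, not the paper's actual CSP encoding or solver: each element is a boolean variable, and the constraint is that every conflict set contains at least one chosen element.

```python
from itertools import product

def minimal_hitting_sets(conflict_sets):
    """Enumerate all minimal hitting sets of a family of conflict sets.

    A hitting set intersects every conflict set; it is minimal if no
    proper subset is also a hitting set. This exhaustive search stands
    in for the CSP-solver call described in the abstract.
    """
    universe = sorted(set().union(*conflict_sets))
    hitting = []
    # Each boolean assignment to the elements is a candidate CSP solution;
    # the "constraint" is that every conflict set contains a chosen element.
    for bits in product([False, True], repeat=len(universe)):
        chosen = {e for e, b in zip(universe, bits) if b}
        if all(chosen & set(c) for c in conflict_sets):
            hitting.append(chosen)
    # Keep only minimal solutions (no proper subset is also hitting).
    return [h for h in hitting if not any(other < h for other in hitting)]

# Example: conflict sets {1,2} and {2,3} have minimal hitting sets {2} and {1,3}.
print(minimal_hitting_sets([{1, 2}, {2, 3}]))
```

A real CSP solver replaces the exhaustive loop with constraint propagation and search; features such as "shorter than a fixed length" become additional cardinality constraints.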
Ilse Baumgartner
2014-10-01
For more than a decade, businesses and organisations of all kinds have been intensively exploring enterprise-level information systems to better integrate their business processes, information flows, and people. Consequently, industry demand for technically skilled but also "business-savvy" IT professionals is growing steadily. To meet this need, more and more computing education programs try to incorporate enterprise-level information systems into their curricula. While some computing education research has investigated the need for this new type of IT-business professional and analysed the general implications for higher education, only very few research works or practice papers report on concrete attempts to design and deliver higher-education computing courses that make intensive use of enterprise-level systems. In this paper, the author reports on a series of experiences made within the Bachelor of Science (Information Systems Management) degree program offered by the School of Information Systems (SIS) at the Singapore Management University (SMU). The primary focus of this paper is on establishing a working set of best practices for designing an effective structure for the face-to-face teaching sessions of courses that use enterprise-level systems and applications in their curricula. While this paper is principally based on education experiences made within an Information Systems program, the best practices presented are equally applicable to any other computing education field, or even to engineering education in general.
Andrade, Xavier
2013-01-01
We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code OCTOPUS, can reach a sustained performance of up to 90 GFlops for a single GPU, representing an important speed-up when compared to the CPU version of the code. Moreover, for some systems our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.
Andrade, Xavier; Aspuru-Guzik, Alán
2013-10-01
We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.
Schmitz, G. J.
2016-01-01
The importance of microstructure simulation in integrated computational materials engineering settings in relation to the added value provided for macroscopic process simulation, as well as the contribution this kind of simulation can make in predicting material properties, are discussed. The roles of microstructure simulation in integrating scales ranging from component/process scales down to atomistic scales, and also in integrating experimental and virtual worlds, are highlighted. The hierarchical data format (HDF5) as a basis for enhancing the interoperability of the heterogeneous range of simulation tools and experimental datasets in the area of computational materials engineering is discussed. Several ongoing developments indicate that HDF5 might evolve into a de facto standard for digital microstructure representation of all length scales.
I. Fisk
2011-01-01
Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and the success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support is provided for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations. The GlideInWMS components are now also deployed at CERN, adding to the GlideInWMS factory located in the US. A new operational collaboration between the CERN team and the UCSD GlideIn factory operators covers each other's time zones by monitoring/debugging pilot jobs sent from the facto...
Kamburoglu, Kivanc; Sonmez, Gul; Kurt, Hakan; Berktas, Zeynep Serap [Dept. of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara (Turkey)]; Ozen, Dogukan [Dept. of Biostatistics, Faculty of Veterinary Medicine, Ankara University, Ankara (Turkey)]
2017-06-15
The aim of this study was to assess the ex vivo diagnostic ability of 9 different cone-beam computed tomography (CBCT) settings in the detection of recurrent caries under amalgam restorations in primary teeth. Fifty-two primary teeth were used: 26 teeth had dentine caries and 26 did not. Black Class II cavities were prepared and restored with amalgam. In the 26 carious teeth, recurrent caries were left under the restorations. The other 26 intact teeth served as controls. Teeth were imaged using a 100×90-mm field of view and a 0.2-mm voxel size with 9 different CBCT settings. Four observers assessed the images using a 5-point scale. Kappa values were calculated to assess observer agreement. CBCT settings were compared with the gold standard using receiver operating characteristic analysis. The area under the curve (AUC) values for each setting were compared using the chi-square test, with a significance level of α=.05. Intraobserver kappa values ranged from 0.366 to 0.664 for observer 1, from 0.311 to 0.447 for observer 2, from 0.597 to 1.000 for observer 3, and from 0.869 to 1.000 for observer 4. Interobserver kappa values among the observers ranged from 0.133 to 0.814 for the first reading and from 0.197 to 0.805 for the second reading. The highest AUC values were found for setting 5 (0.5916) and setting 3 (0.5886), and the difference between them was not statistically significant (P>.05). Variations in tube voltage and tube current did not affect the detection of recurrent caries under amalgam restorations in primary teeth.
Bamberg, Fabian; Abbara, Suhny; Schlett, Christopher L.; Cury, Ricardo C.; Truong, Quynh A.; Rogers, Ian S. [Cardiac MR PET CT Program, Department of Radiology and Division of Cardiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Nagurney, John T. [Department of Emergency Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Brady, Thomas J. [Cardiac MR PET CT Program, Department of Radiology and Division of Cardiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Hoffmann, Udo [Cardiac MR PET CT Program, Department of Radiology and Division of Cardiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)], E-mail: uhoffmann@partners.org
2010-04-15
Objective: We aimed to determine predictors of image quality in consecutive patients who underwent coronary computed tomography (CT) for the evaluation of acute chest pain. Method and materials: We prospectively enrolled patients who presented with chest pain to the emergency department. All subjects underwent contrast-enhanced 64-slice coronary multi-detector CT. Two experienced readers determined overall image quality on a per-patient basis and the prevalence and characteristics of non-evaluable coronary segments on a per-segment basis. Results: Among 378 subjects (143 women, age: 52.9 ± 11.8 years), 345 (91%) had acceptable overall image quality, while 33 (9%) had poor image quality or were unreadable. In adjusted analysis, patients with diabetes, hypertension and a higher heart rate during the scan were more likely to have exams graded as poor or unreadable (odds ratio [OR]: 2.94, p = 0.02; OR: 2.62, p = 0.03; OR: 1.43, p = 0.02; respectively). Of 6253 coronary segments, 257 (4%) were non-evaluable, most due to severe calcification in combination with motion (35%). The presence of non-evaluable coronary segments was associated with age (OR: 1.08 annually, 95%-confidence interval [CI]: 1.05-1.12, p < 0.001), baseline heart rate (OR: 1.35 per 10 beats/min, 95%-CI: 1.11-1.67, p = 0.003), diabetes, hypertension, and history of coronary artery disease (OR: 4.43, 95%-CI: 1.93-10.17, p < 0.001; OR: 2.27, 95%-CI: 1.01-4.73, p = 0.03; OR: 5.12, 95%-CI: 2.0-13.06, p < 0.001; respectively). Conclusion: Coronary CT permits acceptable image quality in more than 90% of patients with chest pain. Patients with multiple risk factors are more likely to have impaired image quality or non-evaluable coronary segments. These patients may require careful patient preparation and optimization of CT scanning protocols.
M. Kasemann
CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned, it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...
Convexity of the Set of Fixed Points Generated by Some Control Systems
Vadim Azhmyakov
2009-01-01
We deal with an application of the fixed point theorem for nonexpansive mappings to a class of control systems. We study closed-loop and open-loop controllable dynamical systems governed by ordinary differential equations (ODEs) and establish convexity of the set of trajectories. Solutions to the above ODEs are considered as fixed points of the associated system operator. If convexity of the set of trajectories is established, it can be used to estimate and approximate the reachable set of the dynamical systems under consideration. Estimations/approximations of this type are important in various engineering applications such as, for example, the verification of safety properties.
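The reachable-set estimation mentioned in the abstract above can be illustrated with a toy Monte-Carlo sketch. This is not the fixed-point construction from the paper; the scalar system, the forward-Euler integration, and all parameter values below are hypothetical, chosen only to make the idea concrete.

```python
import random

def sample_reachable_set(a, b, x0, t_final, steps, n_samples, u_max):
    """Approximate the reachable set of dx/dt = a*x + b*u, |u| <= u_max,
    by forward-Euler integration under randomly sampled piecewise-constant
    controls. Returns the min/max reachable endpoint found.
    """
    dt = t_final / steps
    endpoints = []
    for _ in range(n_samples):
        x = x0
        for _ in range(steps):
            u = random.uniform(-u_max, u_max)  # sampled admissible control
            x += dt * (a * x + b * u)          # Euler step
        endpoints.append(x)
    return min(endpoints), max(endpoints)

random.seed(0)
lo, hi = sample_reachable_set(a=0.0, b=1.0, x0=0.0,
                              t_final=1.0, steps=50,
                              n_samples=2000, u_max=1.0)
# For this system the exact reachable set at t=1 is [-1, 1];
# the sampled interval is an inner approximation of it.
print(lo, hi)
```

Random sampling only gives an inner approximation; convexity results of the kind discussed in the abstract justify interval or polytope estimates that bound the reachable set from the outside as well.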
Shah, Koonal K; Lloyd, Andrew; Oppe, Mark; Devlin, Nancy J
2013-07-01
We compare two settings for administering time trade-off (TTO) tasks in computer-assisted interviews (one-to-one, interviewer-led versus group, self-complete) by examining the quality of the data generated in pilot studies undertaken in England and the Netherlands. The two studies used near-identical methods, except that in England, data were collected in one-to-one interviews with substantial amounts of interviewer assistance, whereas in the Netherlands, the computer aid was used as a self-completion tool in group interviews with lesser amounts of interviewer assistance. In total, 801 members of the general public (403 in England; 398 in the Netherlands) each completed five TTO valuations of EQ-5D-5L health states. Respondents in the Netherlands study showed a greater tendency to give 'round number' values such as 0 and 1 and to complete tasks using a minimal number of iterative steps. They also showed a greater tendency to skip the animated instructions that preceded the first task and to take into account assumptions that they were specifically asked not to take into account. When faced with a pair of health states in which one state dominated the other, respondents in the Netherlands study were more likely than those in the England study to give a higher value to the dominant health state. On the basis of these comparisons, we conclude that the one-to-one, interviewer-led setting is superior to the group, self-complete setting in terms of the quality of data generated and that the former is more suitable than the latter for TTO studies being used to value EQ-5D-5L.
BACH: A Toolset for Bounded Reachability Analysis of Linear Hybrid Systems
卜磊; 李游; 王林章; 李宣东
2011-01-01
The model-checking problem for hybrid systems is very difficult to solve. Even for a relatively simple class of hybrid systems, linear hybrid automata, the reachability problem is undecidable. Existing techniques for the reachability analysis of linear hybrid automata mostly use polyhedral computations over the state space; they have high complexity and low efficiency and do not scale to problem sizes of practical interest. Instead of developing a tool that checks reachability over the complete state space of linear hybrid automata, this paper presents a prototype toolset, BACH (bounded reachability checker), that performs path-oriented reachability checking and bounded reachability checking of single linear hybrid automata and of compositions of linear hybrid automata; combined with path-traversal techniques, it extends this to bounded reachability checking over all paths. The experimental data show that BACH not only performs well in path-oriented reachability checking, where the length of the path being checked can be made very large, but also handles problem sizes for all-path bounded reachability checking that far exceed those of comparable tools, approaching the requirements of industrial applications.
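Bounded reachability in the sense of the abstract above can be sketched as a depth-limited search. The sketch below works only on a finite, purely discrete transition system; it is a simplified stand-in for BACH's analysis, since linear hybrid automata additionally carry continuous dynamics and linear constraints at each step.

```python
from collections import deque

def bounded_reachable(transitions, start, target, bound):
    """Breadth-first bounded reachability: can `target` be reached from
    `start` in at most `bound` transition steps?

    `transitions` maps a state to its successor states. For linear hybrid
    automata, each discrete step would also have to check satisfiability
    of the accumulated linear constraints; this sketch omits that.
    """
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        state, depth = queue.popleft()
        if state == target:
            return True
        if depth < bound:
            for nxt in transitions.get(state, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return False

graph = {"s0": ["s1"], "s1": ["s2"], "s2": ["s3"]}
print(bounded_reachable(graph, "s0", "s3", bound=3))  # reachable in 3 steps
print(bounded_reachable(graph, "s0", "s3", bound=2))  # not within 2 steps
```

Path-oriented checking, as in BACH, fixes a specific discrete path up front and checks only the constraints along it, which is what lets the path length grow very large.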
Qin, Bo; Tian, Bo; Wang, Yu-Feng; Shen, Yu-Jia; Wang, Ming
2017-10-01
Under investigation in this paper are the Belov-Chaltikian (BC), Leznov and Blaszak-Marciniak (BM) lattice equations, which are associated with the conformal field theory, UToda(m_1,m_2) system and r-matrix, respectively. With symbolic computation, the Bell-polynomial approach is developed to directly bilinearize those three sets of differential-difference nonlinear evolution equations (NLEEs). This Bell-polynomial approach does not rely on any dependent variable transformation, which constitutes the key step and main difficulty of the Hirota bilinear method, and thus has the advantage in the bilinearization of the differential-difference NLEEs. Based on the bilinear forms obtained, the N-soliton solutions are constructed in terms of the N × N Wronskian determinant. Graphic illustrations demonstrate that those solutions, more general than the existing results, permit some new properties, such as the solitonic propagation and interactions for the BC lattice equations, and the nonnegative dark solitons for the BM lattice equations.
Zhiyong Pang
2015-01-01
This study established a fully automated computer-aided diagnosis (CAD) system for the classification of malignant and benign masses in breast magnetic resonance imaging (BMRI). A breast segmentation method, consisting of a preprocessing step to identify the air-breast interface boundary and curve fitting for chest wall line (CWL) segmentation, was included in the proposed CAD system. The Chan-Vese (CV) model level set (LS) segmentation method was adopted to segment breast masses and demonstrated sufficiently good segmentation performance. A support vector machine (SVM) classifier with ReliefF feature selection was used to merge the extracted morphological and texture features into a classification score. The accuracy, sensitivity, and specificity for the leave-half-case-out resampling method were 92.3%, 98.2%, and 76.2%, respectively. For the leave-one-case-out resampling method, the measurements were 90.0%, 98.7%, and 73.8%, respectively.
Simulation and Verification of Synchronous Set Relations in Rewriting Logic
Rocha, Camilo; Munoz, Cesar A.
2011-01-01
This paper presents a mathematical foundation and a rewriting logic infrastructure for the execution and property verification of synchronous set relations. The mathematical foundation is given in the language of abstract set relations. The infrastructure consists of an order-sorted rewrite theory in Maude, a rewriting logic system, that enables the synchronous execution of a set relation provided by the user. By using the infrastructure, existing algorithm verification techniques already available in Maude for traditional asynchronous rewriting, such as reachability analysis and model checking, are automatically available to synchronous set rewriting. The use of the infrastructure is illustrated with an executable operational semantics of a simple synchronous language and the verification of temporal properties of a synchronous system.
Zhu, Hongbin; Duan, Chaijie; Pickhardt, Perry; Wang, Su; Liang, Zhengrong
2009-01-01
As a promising second reader for computed tomographic colonography (CTC) screening, the computer-aided detection (CAD) of colonic polyps has earned fast-growing research interest. In this paper, we present a CAD scheme to automatically detect colonic polyps in CTC images. First, a thick colon wall representation, i.e., a volumetric mucosa (VM) generally several voxels wide, was segmented from CTC images by a partial-volume image segmentation algorithm. Based on the VM, we employed a level set-based adaptive convolution method for calculating the first- and second-order spatial derivatives more accurately to start the geometric analysis. Furthermore, to emphasize the correspondence among different layers in the VM, we introduced a middle-layer-enhanced integration along the image gradient direction inside the VM to improve the extraction of geometric information, such as the principal curvatures. Initial polyp candidates (IPCs) were then determined by thresholding the geometric measurements. Several features were extracted for each IPC and fed into a support vector machine to reduce false positives (FPs). The final detections were displayed in a commercial system to provide second opinions for radiologists. The CAD scheme was applied to 26 patient CTC studies with 32 polyps confirmed by both optical and virtual colonoscopy. Compared to our previous work, all the polyps were detected successfully with fewer FPs. At 100% by-polyp sensitivity, the new method yielded 3.5 FPs/dataset. PMID:20428331
Niamh M C Connolly
Loss of ionic homeostasis during excitotoxic stress depletes ATP levels and activates the AMP-activated protein kinase (AMPK), re-establishing energy production by increased expression of glucose transporters on the plasma membrane. Here, we develop a computational model to test whether this AMPK-mediated glucose import can rapidly restore ATP levels following a transient excitotoxic insult. We demonstrate that a highly compact model, comprising a minimal set of critical reactions, can closely resemble the rapid dynamics and cell-to-cell heterogeneity of ATP levels and AMPK activity, as confirmed by single-cell fluorescence microscopy in rat primary cerebellar neurons exposed to glutamate excitotoxicity. The model further correctly predicted an excitotoxicity-induced elevation of intracellular glucose, and closely resembled the delayed recovery and cell-to-cell heterogeneity of experimentally measured glucose dynamics. The model also predicted necrotic bioenergetic collapse and altered calcium dynamics following more severe excitotoxic insults. In conclusion, our data suggest that a minimal set of critical reactions may determine the acute bioenergetic response to transient excitotoxicity and that an AMPK-mediated increase in intracellular glucose may be sufficient to rapidly recover ATP levels following an excitotoxic insult.
Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)
2010-05-15
Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold-standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient: 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less
Raja, Emans Evangel Joel; Mahal, Rajinder; Masih, Veena Barkat
2004-01-01
An explorative study was conducted to assess and identify deficit areas in computer knowledge, attitudes, and skills among nurses working in the hospital, and to examine the relationships among these factors. 120 staff nurses were surveyed by systematic random sampling. Computer knowledge, attitudes, and skills were measured by a self-structured computer knowledge questionnaire, a computer attitude scale, and a skill scale, respectively. Data analysis showed that the majority (75%) of staff nurses had good computer knowledge. 100% of nurses had positive attitudes towards computer utilization. 50.8% and 30.8% had average and fair computer skills, respectively. No significant correlation was found between nurses' computer knowledge, attitudes, and skills. The relationships of computer knowledge, attitudes, and skills were analyzed against selected variables such as age, sex, designation, years of nursing service, professional qualification, area of nursing service, type of computer training received, frequency of computer usage, and monthly family income. Strategies to enhance nurses' computer knowledge, attitudes, and skills were proposed.
Efficient One-click Browsing of Large Trajectory Sets
Krogh, Benjamin Bjerre; Andersen, Ove; Lewis-Kelham, Edwin
2014-01-01
Traffic researchers, planners, and analysts want a simple way to query the large quantities of GPS trajectories collected from vehicles. In addition, users expect the results to be presented immediately even when querying very large transportation networks with huge trajectory data sets. This paper...... presents a novel query type called sheaf, where users can browse trajectory data sets using a single mouse click. Sheaves are very versatile and can be used for location-based advertising, travel-time analysis, intersection analysis, and reachability analysis (isochrones). A novel in-memory trajectory...... index compresses the data by a factor of 12.4 and enables execution of sheaf queries in 40 ms. This is up to 2 orders of magnitude faster than existing work. We demonstrate the simplicity, versatility, and efficiency of sheaf queries using a real-world trajectory set consisting of 2.7 million...
Armin Arbab-Zadeh
2012-06-01
The objective of this study was to investigate the impact of image acquisition settings and patients' characteristics on image quality and radiation dose for coronary angiography by 320-row computed tomography (CT). CORE320 is a prospective study to investigate the diagnostic performance of 320-detector CT for detecting coronary artery disease and associated myocardial ischemia. A run-in phase in 65 subjects was conducted to test the adequacy of the computed tomography angiography (CTA) acquisition protocol. Tube current, exposure window, and number of cardiac beats per acquisition were adjusted according to subjects' gender, heart rate, and body mass index (BMI). The main outcome measures were image quality, assessed by contrast/noise measurements and qualitatively on a 4-point scale, and radiation dose, estimated by the dose-length product. The average heart rate at image acquisition was 55.0±7.3 bpm. The median Agatston calcium score was 27.0 (interquartile range 1-330). All scans were prospectively triggered. Single-heart-beat image acquisition was obtained in 61 of 65 studies (94%). Sixty-one studies (94%) and 437 of 455 arterial segments (96%) were of diagnostic image quality. The estimated radiation dose was significantly greater in obese (5.3±0.4 mSv) than in normal-weight (4.6±0.3 mSv) or overweight (4.7±0.3 mSv) subjects (P<0.001). BMI was the strongest factor influencing image quality (odds ratio=1.457, P=0.005). The CORE320 CTA image acquisition protocol achieved a good balance between image quality and radiation dose for a 320-detector CT system. However, image quality in obese subjects was reduced compared to normal-weight subjects, possibly due to the tube voltage/current restrictions mandated by the study protocol.
Helvacioglu-Yigit, Dilek; Demirturk Kocasarac, Husniye; Bechara, Boulos; Noujeim, Marcel
2016-02-01
After endodontic surgery, radiographic assessment is the method of choice to monitor bone defect healing. Cone-beam computed tomography scans are useful to check and identify the reasons of failure of surgical intervention or confirm healing; however, the artifact generated by some root-end filling material might compromise this task. The objective of the study was to compare the amount of artifacts generated by 4 root-end filling materials and to test multiple exposure settings used with these materials, when the effective dose generated by each protocol was taken into consideration. Twenty central incisors were endodontically treated with retrograde obturation by using amalgam, Biodentine, MTA, and Super-EBA (5 of each). They were placed in a skull with soft tissue simulation and scanned by using the Planmeca Promax Max with different kilovolt peaks (kVp): 66, 76, 84, and 96 with and without the use of metal artifact reduction (MAR) algorithm and with low, normal, and high resolution and high definition. The Dose Area Product was registered, and the effective dose was calculated. Amalgam generated the highest amount of artifacts, whereas MAR and low resolution created fewer artifacts than other settings. The artifacts were also reduced with 96 kVp. The effective dose calculated with low resolution was remarkably lower than other resolutions. When used as root-end filling material, Biodentine, MTA, and Super-EBA generated fewer artifacts than amalgam. The use of 96 kVp with MAR and low resolution also reduced artifacts on the image and at the same time generated the lowest effective dose. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Heredia-López, Francisco J; Álvarez-Cervera, Fernando J; Collí-Alfaro, José G; Bata-García, José L; Arankowsky-Sandoval, Gloria; Góngora-Alfaro, José L
2016-12-01
Continuous spontaneous alternation behavior (SAB) in a Y-maze is used for evaluating working memory in rodents. Here, the design of an automated Y-maze equipped with three infrared optocouplers per arm, and commanded by a reduced instruction set computer (RISC) microcontroller is described. The software was devised for recording only true entries and exits to the arms. Experimental settings are programmed via a keyboard with three buttons and a display. The sequence of arm entries and the time spent in each arm and the neutral zone (NZ) are saved as a text file in a non-volatile memory for later transfer to a USB flash memory. Data files are analyzed with a program developed under LabVIEW® environment, and the results are exported to an Excel® spreadsheet file. Variables measured are: latency to exit the starting arm, sequence and number of arm entries, number of alternations, alternation percentage, and cumulative times spent in each arm and NZ. The automated Y-maze accurately detected the SAB decrease produced in rats by the muscarinic antagonist trihexyphenidyl, and its reversal by caffeine, having 100 % concordance with the alternation percentages calculated by two trained observers who independently watched videos of the same experiments. Although the values of time spent in the arms and NZ measured by the automated system had small discrepancies with those calculated by the observers, Bland-Altman analysis showed 95 % concordance in three pairs of comparisons, while in one it was 90 %, indicating that this system is a reliable and inexpensive alternative for the study of continuous SAB in rodents.
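The alternation percentage reported by the LabVIEW analysis program can be computed directly from the recorded sequence of arm entries; a minimal sketch of that standard calculation (the function name and example sequences are illustrative, not taken from the described software):

```python
def alternation_percentage(entries):
    # An alternation is an overlapping triplet of consecutive arm
    # entries that visits three distinct arms (e.g. "ABC" but not "ABA").
    if len(entries) < 3:
        return 0.0
    triplets = [entries[i:i + 3] for i in range(len(entries) - 2)]
    alternations = sum(1 for t in triplets if len(set(t)) == 3)
    return 100.0 * alternations / len(triplets)

print(alternation_percentage("ABCACBA"))  # 4 of 5 triplets alternate -> 80.0
```

Any per-entry convention (e.g. excluding re-entries into the same arm) changes the denominator, which is one reason automated counts can show small discrepancies with human scoring.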
Reachability Analysis of Zero Discharge of Wastewater in the Xiaoqing Mine
王浩
2013-01-01
While coal mining contributes substantially to the local economy, it also damages local environmental quality. The rivers near the Xiaoqing coal mine carry little flow, freeze in winter, and have poor self-purification capacity; wastewater discharged from the mine's industrial site into these rivers frequently pushes water quality beyond regulatory limits. To solve the surface-water pollution problem at its source, the mine proposed achieving zero discharge of all wastewater, building on its existing closed-loop coal-washing circuit, thereby cutting off the pathways by which surface water is polluted. By analyzing the mine's wastewater discharge points and treatment measures, this paper establishes the reachability of zero wastewater discharge, a result of some value for wider adoption among coal enterprises.
Monleón, Daniel; Colson, Kimberly; Moseley, Hunter N B; Anklin, Clemens; Oswald, Robert; Szyperski, Thomas; Montelione, Gaetano T
2002-01-01
Rapid data collection, spectral referencing, processing by time domain deconvolution, peak picking and editing, and assignment of NMR spectra are necessary components of any efficient integrated system for protein NMR structure analysis. We have developed a set of software tools designated AutoProc, AutoPeak, and AutoAssign, which function together with the data processing and peak-picking programs NMRPipe and Sparky, to provide an integrated software system for rapid analysis of protein backbone resonance assignments. In this paper we demonstrate that these tools, together with high-sensitivity triple resonance NMR cryoprobes for data collection and a Linux-based computer cluster architecture, can be combined to provide nearly complete backbone resonance assignments and secondary structures (based on chemical shift data) for a 59-residue protein in less than 30 hours of data collection and processing time. In this optimum case of a small protein providing excellent spectra, extensive backbone resonance assignments could also be obtained using less than 6 hours of data collection and processing time. These results demonstrate the feasibility of high throughput triple resonance NMR for determining resonance assignments and secondary structures of small proteins, and the potential for applying NMR in large scale structural proteomics projects.
Kevin Ten Haaf
2017-02-01
The National Lung Screening Trial (NLST) results indicate that computed tomography (CT) lung cancer screening for current and former smokers with three annual screens can be cost-effective in a trial setting. However, the cost-effectiveness in a population-based setting with >3 screening rounds is uncertain. Therefore, the objective of this study was to estimate the cost-effectiveness of lung cancer screening in a population-based setting in Ontario, Canada, and evaluate the effects of screening eligibility criteria. This study used microsimulation modeling informed by various data sources, including the Ontario Health Insurance Plan (OHIP), Ontario Cancer Registry, smoking behavior surveys, and the NLST. Persons, born between 1940 and 1969, were examined from a third-party health care payer perspective across a lifetime horizon. Starting in 2015, 576 CT screening scenarios were examined, varying by age to start and end screening, smoking eligibility criteria, and screening interval. Among the examined outcome measures were lung cancer deaths averted, life-years gained, percentage ever screened, costs (in 2015 Canadian dollars), and overdiagnosis. The results of the base-case analysis indicated that annual screening was more cost-effective than biennial screening. Scenarios with eligibility criteria that required as few as 20 pack-years were dominated by scenarios that required higher numbers of accumulated pack-years. In general, scenarios that applied stringent smoking eligibility criteria (i.e., requiring higher levels of accumulated smoking exposure) were more cost-effective than scenarios with less stringent smoking eligibility criteria, with modest differences in life-years gained. Annual screening between ages 55-75 for persons who smoked ≥40 pack-years and who currently smoke or quit ≤10 y ago yielded an incremental cost-effectiveness ratio of $41,136 Canadian dollars ($33,825 in May 1, 2015, United States dollars) per life-year gained.
Uehara, Masashi; Takahashi, Jun; Ikegami, Shota; Kuraishi, Shugo; Shimizu, Masayuki; Futatsugi, Toshimasa; Oba, Hiroki; Kato, Hiroyuki
2017-04-01
Pedicle screw fixation is commonly employed for the surgical correction of scoliosis but carries a risk of serious neurovascular or visceral structure events during screw insertion. To avoid these complications, we have been using a computed tomography (CT)-based navigation system during pedicle screw placement. As this could also prolong operation time, multilevel registration for pedicle screw insertion in posterior scoliosis surgery was developed to register three consecutive vertebrae in a single registration with CT-based navigation. The reference frame was set either at the caudal end of three consecutive vertebrae or at one or two vertebrae inferior to the most caudal registered vertebra, and then pedicle screws were inserted into the three consecutive registered vertebrae and into the one or two adjacent vertebrae. This study investigated the perforation rates of vertebrae at zero, one, two, three, or four or more levels above or below the vertebra at which the reference frame was set. This is a retrospective, single-center, single-surgeon study. One hundred sixty-one scoliosis patients who had undergone pedicle screw fixation were reviewed. Screw perforation rates were evaluated by postoperative CT. We evaluated 161 scoliosis patients (34 boys and 127 girls; mean±standard deviation age: 14.6±2.8 years) who underwent pedicle screw fixation guided by a CT-based navigation system between March 2006 and December 2015. A total of 2,203 pedicle screws were inserted into T2-L5 using multilevel registration with CT-based navigation. The overall perforation rates for Grade 1, 2, or 3, Grade 2 or 3 (major perforations), and Grade 3 perforations (violations) were as follows: vertebrae at which the reference frame was set: 15.9%, 6.1%, and 2.5%; one vertebra above or below the reference frame vertebra: 16.5%, 4.0%, and 1.2%; two vertebrae above or below the reference frame vertebra: 20.7%, 8.7%, and 2.3%; three vertebrae above or below the reference frame vertebra: 23
Sivan, Manoj; Gallagher, Justin; Makower, Sophie; Keeling, David; Bhakta, Bipin; O'Connor, Rory J; Levesley, Martin
2014-12-12
Home-based robotic technologies may offer the possibility of self-directed upper limb exercise after stroke as a means of increasing the intensity of rehabilitation treatment. The current literature has a paucity of robotic devices that have been tested in a home environment. The aim of this research project was to evaluate a robotic device, Home-based Computer Assisted Arm Rehabilitation (hCAAR), that can be used independently at home by stroke survivors with upper limb weakness. The hCAAR device comprises a joystick handle moved by the weak upper limb to perform tasks on the computer screen. The device provides assistance to the movements depending on the user's ability. Nineteen participants (stroke survivors with upper limb weakness) were recruited. Outcome measures performed at baseline (A0), at the end of 8 weeks of hCAAR use (A1), and 1 month after the end of hCAAR use (A2) were: Optotrak kinematic variables, Fugl Meyer Upper Extremity motor subscale (FM-UE), Action Research Arm Test (ARAT), Medical Research Council (MRC) and Modified Ashworth Scale (MAS), Chedoke Arm and Hand Activity Inventory (CAHAI), and ABILHAND. Two participants were unable to use hCAAR: one due to severe paresis and the other due to personal problems. The remaining 17 participants were able to use the device independently in their home setting. No serious adverse events were reported. The median usage time was 433 minutes (IQR 250-791 min). A statistically significant improvement was observed in the kinematic and clinical outcomes at A1. The median gains at A1 were: movement time 19%, path length 15%, jerk 19%, FM-UE 1 point, total MAS 1.5 points, total MRC 2 points, ARAT 3 points, CAHAI 5.5 points, and ABILHAND 3 points. Three participants showed clinically significant improvement in all the clinical outcomes. The hCAAR feasibility study is the first clinical study of its kind reported in the current literature; in this study, 17 participants used the robotic device independently
Sarno Giovanna
2007-12-01
Abstract Background Multi-detector computed tomography angiography (MDCTA) has been increasingly used in the evaluation of the coronary arteries. The purpose of this study was to review the literature on the diagnostic performance of MDCTA in the acute setting, for the detection of non-ST-elevation myocardial infarction (NSTEMI) and unstable angina pectoris (UAP). Methods A Pubmed and manual search of the literature published between January 2000 and June 2007 was performed. Studies were included that compared MDCTA with clinical outcome and/or CA in patients with acute chest pain presenting at the emergency department. More specifically, only studies that included patients with initially negative cardiac enzymes suspected of having NSTEMI or UAP were included. Summary estimates of diagnostic odds ratio (DOR), sensitivity and specificity, and negative (NLR) and positive likelihood ratio (PLR) were calculated on a patient basis. Random-effects models and summary receiver operating curve (SROC) analysis were used to assess the diagnostic performance of MDCTA with 4 detectors or more. The proportion of non-assessable scans (NAP) on MDCTA was also evaluated. In addition, the influence of the characteristics of each study on diagnostic performance and NAP was investigated with multivariable logistic regression. Results Nine studies totalling 566 patients were included in the meta-analysis: one randomised trial and eight prospective cohort studies. Five studies on 64-detector MDCTA and 4 studies on MDCTA with less than 64 detectors were included (32 detectors n = 1, 16 detectors n = 2, 16 and 4 detectors n = 1). Pooled DOR was 131.81 (95%CI, 50.90-341.31). The pooled sensitivity and specificity were 0.95 (95%CI, 0.90-0.98) and 0.90 (95%CI, 0.87-0.93). The pooled NLR and PLR were 0.12 (95%CI, 0.06-0.21) and 8.60 (95%CI, 5.03-14.69). The results of the logistic regressions showed that none of the investigated variables had influence on the diagnostic
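The likelihood ratios and diagnostic odds ratio summarized above are simple functions of sensitivity and specificity; a sketch of those definitions (note that in a meta-analysis each quantity is pooled separately, so the pooled DOR reported in the abstract need not equal the DOR computed from the pooled sensitivity and specificity):

```python
def likelihood_ratios(sens, spec):
    plr = sens / (1.0 - spec)   # positive likelihood ratio
    nlr = (1.0 - sens) / spec   # negative likelihood ratio
    return plr, nlr

def diagnostic_odds_ratio(sens, spec):
    # DOR is equivalently PLR / NLR.
    plr, nlr = likelihood_ratios(sens, spec)
    return plr / nlr

# Using the pooled point estimates from the text (sens 0.95, spec 0.90):
plr, nlr = likelihood_ratios(0.95, 0.90)
dor = diagnostic_odds_ratio(0.95, 0.90)
```

With these point estimates, PLR is 9.5 and DOR is 171, close to but not identical with the separately pooled values (8.60 and 131.81) in the abstract.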
Chapes, Stephen K.; Ben-Arieh, David; Wu, Chih-Hang
2016-01-01
We present an agent-based model (ABM) to simulate a hepatic inflammatory response (HIR) in a mouse infected by Salmonella that sometimes progressed to problematic proportions, known as "sepsis". Based on over 200 published studies, this ABM describes interactions among 21 cells or cytokines and incorporates 226 experimental data sets and/or data estimates from those reports to simulate a mouse HIR in silico. Our simulated results reproduced dynamic patterns of HIR reported in the literature. As shown in vivo, our model also demonstrated that sepsis was highly related to the initial Salmonella dose and the presence of components of the adaptive immune system. We determined that high mobility group box-1, C-reactive protein, the interleukin-10:tumor necrosis factor-α ratio, and the CD4+ T cell:CD8+ T cell ratio, all recognized as biomarkers during HIR, significantly correlated with outcomes of HIR. During therapy-directed in silico simulations, our results demonstrated that anti-agent intervention impacted the survival rates of septic individuals in a time-dependent manner. By specifying the infected species, source of infection, and site of infection, this ABM enabled us to reproduce the kinetics of several essential indicators during a HIR, observe distinct dynamic patterns that are manifested during HIR, and test proposed therapy-directed treatments. Although limitations still exist, this ABM is a step forward because it links underlying biological processes to computational simulation and was validated through a series of comparisons between the simulated results and experimental studies. PMID:27556404
Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning
Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron
2015-01-01
Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets ...
Reachability Analysis of Probabilistic Systems
D'Argenio, P. R.; Jeanett, B.; Jensen, Henrik Ejersbo
2001-01-01
than the original model, and may safely refute or accept the required property. Otherwise, the abstraction is refined and the process repeated. As the numerical analysis involved in settling the validity of the property is more costly than the refinement process, the method profits from applying...... such numerical analysis on smaller state spaces. The method is significantly enhanced by a number of novel strategies: a strategy for reducing the size of the numerical problems to be analyzed by identification of so-called {essential states}, and heuristic strategies for guiding the refinement process....
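The abstract above concerns numerical analysis of reachability probabilities on (possibly abstracted) state spaces. As a toy illustration of the underlying computation, a fixed-point iteration for the probability of eventually reaching a target set in a finite Markov chain; the chain, state names, and sweep count are invented for the example and are not from the paper:

```python
def reach_probability(chain, targets, sweeps=200):
    # Pr[eventually reach `targets`] in a finite Markov chain given as
    # {state: {successor: probability}}, by iterating the fixed point
    # p(s) = 1 for target states, p(s) = sum_t P(s, t) * p(t) otherwise.
    p = {s: (1.0 if s in targets else 0.0) for s in chain}
    for _ in range(sweeps):
        for s in chain:
            if s not in targets:
                p[s] = sum(pr * p[t] for t, pr in chain[s].items())
    return p

# Invented 3-state chain: from "init", "goal" is reached with probability 0.5.
chain = {
    "init": {"goal": 0.5, "sink": 0.5},
    "goal": {"goal": 1.0},
    "sink": {"sink": 1.0},
}
probs = reach_probability(chain, {"goal"})
print(probs["init"])  # 0.5
```

The cost of such an iteration grows with the number of states, which is why the paper's abstraction-refinement strategy of analyzing smaller state spaces pays off.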
Teutsch, J
2007-01-01
It is possible to enumerate all computer programs. In particular, for every partial computable function, there is a shortest program which computes that function. f-MIN is the set of indices for shortest programs. In 1972, Meyer showed that f-MIN is Turing equivalent to 0'', the halting set with halting set oracle. This paper generalizes the notion of shortest programs, and we use various measures from computability theory to describe the complexity of the resulting "spectral sets." We show that under certain Godel numberings, the spectral sets are exactly the canonical sets 0', 0'', 0''', ... up to Turing equivalence. This is probably not true in general; however, we show that spectral sets always contain some useful information. We show that immunity, or "thinness", is a useful characteristic for distinguishing between spectral sets. In the final chapter, we construct a set which neither contains nor is disjoint from any infinite arithmetic set, yet it is 0-majorized and contains a natural spectral set. Thus ...
Leh, Jayne
2011-01-01
Substantial evidence indicates that teacher-delivered schema-based instruction (SBI) facilitates significant increases in mathematics word problem solving (WPS) skills for diverse students; however research is unclear whether technology affordances facilitate superior gains in computer-mediated (CM) instruction in mathematics WPS when compared to…
Walking Reachability of Urban Public Space: Nangang District, Harbin
卫大可; 杨秋楠
2016-01-01
Public space is an important venue for citizens' regular outdoor fitness activities, and walking reachability is a key factor in how fully urban public space is used for such activities. Taking Nangang District, Harbin as a case study, this article surveys how various types of urban public space are used for fitness activities, analyzes the factors that influence their walking reachability for citizens' fitness purposes, and proposes development strategies for improving the walking reachability of urban public space oriented toward the public's fitness needs.
Decomposition and Simplification of Multivariate Data using Pareto Sets.
Huettenberger, Lars; Heine, Christian; Garth, Christoph
2014-12-01
Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.
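The simplification described above proceeds as a sequence of edge collapses on the reachability graph, ordered by a comparison measure. A toy sketch of that greedy scheme, using the absolute difference of scalar node values as a stand-in for the paper's measure (the function name, data layout, and threshold rule are illustrative assumptions, not the paper's algorithm):

```python
def simplify(values, edges, threshold):
    # Greedy edge-collapse sketch: repeatedly collapse the edge whose
    # endpoint values differ least, merging the lower-valued endpoint
    # into the higher-valued one and rerouting its edges, until every
    # remaining edge differs by more than `threshold`.
    values = dict(values)
    edges = {tuple(sorted(e)) for e in edges}

    def cost(e):
        return abs(values[e[0]] - values[e[1]])

    while edges:
        e = min(edges, key=cost)
        if cost(e) > threshold:
            break
        u, v = e
        keep, drop = (u, v) if values[u] >= values[v] else (v, u)
        del values[drop]
        edges = {tuple(sorted(keep if x == drop else x for x in f))
                 for f in edges if f != e}
        edges = {f for f in edges if f[0] != f[1]}  # discard self-loops
    return values, edges
```

On a tiny graph with node "importance" values {a: 1.0, b: 1.2, c: 5.0} and edges (a, b) and (b, c), a threshold of 0.5 collapses a into b and leaves the single edge (b, c).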
Jensen, Margit Bak
2009-01-01
. Thus, the development from small and frequent milk meals to fewer and larger meals reported by studies of natural suckling was also found among high-fed calves on a computer-controlled milk feeder. Irrespectively of minimum number of milk portions, the low-fed calves had more unrewarded visits...... to the computer-controlled milk feeder, indicating that they were attempting to get more milk. The results of the present study suggest that offering a high milk allowance and avoiding restriction on meal pattern may result in a feeder use that more closely resembles natural suckling....
Eftekhari, Maryam; Sotoudehnama, Elaheh; Marandi, S. Susan
2016-01-01
Developing higher-order critical thinking skills as one of the central objectives of education has been recently facilitated via software packages. Whereas one such technology as computer-aided argument mapping is reported to enhance levels of critical thinking (van Gelder 2001), its application as a pedagogical tool in English as a Foreign…
Boudreau, Francois; Godin, Gaston; Poirier, Paul
2011-01-01
The promotion of regular physical activity for people with type 2 diabetes poses a challenge for public health authorities. The purpose of this study was to evaluate the efficiency of a computer-tailoring print-based intervention to promote the adoption of regular physical activity among people with type 2 diabetes. An experimental design was…
普靖
2014-01-01
Based on a detailed survey and data analysis of the computer specialty and its curriculum at Tianshui Agricultural School, and drawing on feedback from all sides, the author proposes some personal views on the computer specialty and its curriculum design in secondary vocational schools.
Weiss, I.
2007-01-01
The thesis introduces the new concept of dendroidal set. Dendroidal sets are a generalization of simplicial sets that are particularly suited to the study of operads in the context of homotopy theory. The relation between operads and dendroidal sets is established via the dendroidal nerve functor wh
Lillington, K
2001-01-01
Trinity College Dublin will participate in the major project to create a European research network of computers. Despite this, Ireland remains the only country along with Luxembourg not to be a full member of CERN. Many researchers would like this to change, so they can have full access to the facilities at CERN (2 pages).
Quigley, Mark Declan
The purpose of this research was to examine specific environmental, educational, and demographic factors and their influence on mathematics and science achievement. In particular, the researcher ascertained the interconnections of home computer access and social capital with Asian American students and their effect on mathematics and science achievement. Coleman's theory on social capital and parental influence was used as a basis for the analysis of data. Subjects for this study were the base-year students from the National Education Longitudinal Study of 1988 (NELS:88) and the subsequent follow-up survey data in 1990, 1992, and 1994. The approximate sample size for this study is 640 ethnic Asians from the NELS:88 database. The analysis was a longitudinal study based on the Student and Parent Base Year responses and the Second Follow-up survey of 1992, when the subjects were in 12th grade. Achievement test results from the NELS:88 data were used to measure achievement in mathematics and science. The NELS:88 test battery was developed to measure both individual status and a student's growth in a number of achievement areas. The subjects' responses were analyzed by principal components factor analysis, weights, effect sizes, hierarchical regression analysis, and PLSPath Analysis. The results of this study were that prior ability in mathematics and science is a major influence on the student's educational achievement. Findings from the study support the view that home computer access has a negative direct effect on mathematics and science achievement for both Asian American males and females. None of the social capital factors in the study had either a negative or positive direct effect on mathematics and science achievement, although some indirect effects were found. Suggestions were made toward increasing parental involvement in their children's academic endeavors. Computer access in the home should be considered related to television viewing and should be closely
Gooding, Thomas Michael
2011-04-19
An analytical mechanism for a massively parallel computer system automatically analyzes data retrieved from the system, and identifies nodes which exhibit anomalous behavior in comparison to their immediate neighbors. Preferably, anomalous behavior is determined by comparing call-return stack tracebacks for each node, grouping like nodes together, and identifying neighboring nodes which do not themselves belong to the group. A node, not itself in the group, having a large number of neighbors in the group, is a likely locality of error. The analyzer preferably presents this information to the user by sorting the neighbors according to number of adjoining members of the group.
Phillips, Andrew N; Pillay, Deenan; Miners, Alec H
2008-01-01
of such monitoring strategies, especially in terms of survival and resistance development. METHODS: A validated computer simulation model of HIV infection and the effect of antiretroviral therapy was used to compare survival, use of second-line regimens, and development of resistance that result from different......, the predicted proportion of potential life-years survived was 83% with viral load monitoring (switch when viral load >500 copies per mL), 82% with CD4 cell count monitoring (switch at 50% drop from peak), and 82% with clinical monitoring (switch when two new WHO stage 3 events or a WHO stage 4 event occur...
Berezin, I S
1965-01-01
Computing Methods, Volume 2 is a five-chapter text that presents the numerical methods of solving sets of several mathematical equations. This volume includes computation sets of linear algebraic equations, high degree equations and transcendental equations, numerical methods of finding eigenvalues, and approximate methods of solving ordinary differential equations, partial differential equations and integral equations.The book is intended as a text-book for students in mechanical mathematical and physics-mathematical faculties specializing in computer mathematics and persons interested in the
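As an example of the volume's subject matter, solving a set of linear algebraic equations by Gaussian elimination with partial pivoting can be sketched in a few lines; this is a generic textbook method written for illustration, not code from the book:

```python
def solve(a, b):
    # Gaussian elimination with partial pivoting for the system a x = b,
    # where `a` is an n x n list of lists and `b` a length-n list.
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]            # partial pivoting
        for r in range(col + 1, n):                # eliminate below pivot
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
        # Back substitution recovers x from the upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0]))  # [1.0, 1.0]
```

Pivoting on the largest available entry in each column controls the growth of rounding error, which is the kind of numerical concern the book treats in detail.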
Capuani, Caroline; Guilbeau-Frugier, Céline; Mokrane, Fatima-Zohra; Delisle, Marie-Bernadette; Marcheix, Bertrand; Rousseau, Hervé; Telmon, Norbert; Rougé, Daniel; Dedouit, Fabrice
2014-09-01
A 27-year-old man suddenly died in hospital of acute respiratory distress syndrome secondary to severe systemic vasculitis. Multi-phase post-mortem computed tomography angiography (MPMCTA) followed by scientific autopsy of the thoracic and abdominal cavity and histology was performed, illustrating the advantages and drawbacks of such techniques. Imaging enabled us to examine the cranium, as the family refused cerebral dissection. MPMCTA revealed absence of opacification of the left middle cerebral artery. However, parenchymal findings of thoracic and abdominal organs were still difficult to interpret after both imaging and macroscopic examination during the autopsy. Microscopic examination provided the definitive diagnosis of cause of death. Analysis revealed systemic vasculitis of the lung complicated by diffuse alveolar, mediastinal, splenic and retroperitoneal lesions. We were unable to determine the type of vasculitis, whether polyarteritis nodosa or microscopic polyangiitis, because of artifactual glomerular collapse. We observed some structural changes in tissue secondary to contrast agent injection, affecting the vascular system and renal parenchyma in particular. Such artifacts must be known in order to avoid misinterpreting them as pathological findings. MPMCTA and conventional autopsy are two complementary techniques, each with specific advantages and limits which have to be known in order to choose the appropriate technique. One limit of both techniques is the detection of microscopic findings, which can only be obtained by additional histological examination. This case report underlines this fact and demonstrates that caution is required in some cases if microscopic analyses are carried out after contrast agent injection.
Suyama, Kenya; Komuro, Yuichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Takada, Tomoyuki; Kawasaki, Hiromitsu; Ouchi, Keisuke
1998-02-01
This report is a user's manual for the computer program MAIL3.1, which generates various types of cross section sets for neutron transport programs such as SIMCRI, ANISN-JR, KENO IV, KENO V, MULTI-KENO, MULTI-KENO-2 and MULTI-KENO-3.0. MAIL3.1 is a revised version of MAIL3.0, which was released in 1990. It has all of the abilities of MAIL3.0 and two additional functions, as follows. 1. An AMPX-type cross section set generating function for KENO V. 2. An enhanced function for users of the 16-group Hansen-Roach library. (author)
Islam, M Mofizul; Topp, Libby; Conigrave, Katherine M; van Beek, Ingrid; Maher, Lisa; White, Ann; Rodgers, Craig; Day, Carolyn A
2012-01-01
Research with injecting drug users (IDUs) suggests greater willingness to report sensitive and stigmatised behaviour via audio computer-assisted self-interviewing (ACASI) methods than during face-to-face interviews (FFIs); however, previous studies were limited in verifying this within the same individuals at the same time point. This study examines the relative willingness of IDUs to report sensitive information via ACASI and during a face-to-face clinical assessment administered in health services for IDUs. During recruitment for a randomised controlled trial undertaken at two IDU-targeted health services, assessments were undertaken as per clinical protocols, followed by referral of eligible clients to the trial, in which baseline self-report data were collected via ACASI. Five questions about sensitive injecting and sexual risk behaviours were administered to participants during both clinical interviews and baseline research data collection. "Percentage agreement" determined the magnitude of concordance/discordance in responses across interview methods, while tests appropriate to data format assessed the statistical significance of this variation. Results for all five variables suggest that, relative to ACASI, FFI elicited responses that may be perceived as more socially desirable. Discordance was statistically significant for four of the five variables examined. Participants who reported a history of sex work were more likely to provide discordant responses to at least one socially sensitive item. In health services for IDUs, information collection via ACASI may elicit more reliable and valid responses than FFI. Adoption of a universal precautionary approach to complement individually tailored assessment of and advice regarding health risk behaviours for IDUs may address this issue.
Martin, Simon S; Pfeifer, Sophia; Wichmann, Julian L; Albrecht, Moritz H; Leithner, Doris; Lenga, Lukas; Scholtz, Jan-Erik; Vogl, Thomas J; Bodelle, Boris
2017-03-01
The aim of this study was to evaluate the impact of a noise-optimized virtual monoenergetic imaging (VMI+) reconstruction technique on quantitative and qualitative image analysis in patients with gastrointestinal stromal tumors (GISTs) at dual-energy computed tomography (DECT) of the abdomen. Forty-five DECT datasets of 21 patients (14 men; 63.7 ± 9.2 years) with GISTs were reconstructed with the standard linearly blended (M_0.6) and VMI+ and traditional virtual monoenergetic (VMI) algorithm in 10-keV increments from 40 to 100 keV. Attenuation measurements were performed in GIST lesions and abdominal metastases to calculate objective signal-to-noise (SNR) and contrast-to-noise ratios (CNR). Five-point scales were used to evaluate overall image quality, lesion delineation, image sharpness, and image noise. Quantitative image parameters peaked at 40-keV VMI+ series (SNR 27.8 ± 13.0; CNR 26.3 ± 12.7), significantly superior to linearly blended (SNR 16.8 ± 7.3; CNR 13.6 ± 6.9) and all VMI series (all P VMI+ reconstructions regarding overall image quality and image sharpness (median 5, respectively; P ≤ 0.023). Qualitative assessment of lesion delineation peaked in 40 and 50-keV VMI+ series (median 5, respectively). Image noise was superior in 90 and 100-keV VMI and VMI+ reconstructions (all medians 5). Low-keV VMI+ reconstructions significantly increase SNR and CNR of GISTs and improve quantitative and qualitative image quality of abdominal DECT datasets compared to traditional VMI and standard linearly blended image series.
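The SNR and CNR figures above follow the usual attenuation-based definitions; conventions vary between studies, and the ones below are a common choice, with lesion and background mean attenuation and image noise all in Hounsfield units (the example values are invented):

```python
def snr(lesion_mean_hu, noise_sd_hu):
    # Signal-to-noise ratio: lesion attenuation over image noise.
    return lesion_mean_hu / noise_sd_hu

def cnr(lesion_mean_hu, background_mean_hu, noise_sd_hu):
    # Contrast-to-noise ratio: lesion-to-background contrast over noise.
    return (lesion_mean_hu - background_mean_hu) / noise_sd_hu

# Invented region-of-interest measurements in Hounsfield units:
print(snr(120.0, 10.0), cnr(120.0, 60.0, 10.0))  # 12.0 6.0
```

This is why low-keV virtual monoenergetic reconstructions can raise CNR even as they raise noise: iodine contrast grows faster than the noise term in the denominator.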
Englmeier, K.H.; Jovanovic, A.; Muehling, M.; Seemann, M.D. [Nuklearmedizinische Klinik und Poliklinik, rechts der Isar, Technische Univ. Muenchen (DE). Inst. fuer Medizinische Informatik (Germany)
2006-07-01
Cancer is the second leading cause of death in the western world. Early diagnosis and targeted therapy provide the basis for planning effective treatment. Diseases like cancer generally begin with alterations at the molecular level. When the number of affected cells reaches the threshold for anatomical change, the disease can already be so advanced that it is too late for successful treatment. Equally, morphological change is not necessarily associated with malignancy. Diagnosis and staging of cancer, and evaluation of therapeutic success, depend today to a large extent on imaging techniques like computed tomography (CT) and magnetic resonance tomography (MRT). However, these can only be used to identify anatomical changes in lesions. Increasingly, imaging techniques like positron emission tomography (PET) are used that are based at the molecular level. Functional imaging techniques detect functional changes in tissues that play a role in the diagnosis and staging of the disease. Virtual reality techniques, especially virtual endoscopy, have also become more important in PET-CT diagnostics. On the one hand, these methods enable simultaneous visualisation of morphological and metabolic relationships; on the other hand, the enormous amount of image data from PET-CT can be made understandable in an intuitive form with the aid of endoluminal three-dimensional (3-D) images and scenes. The prerequisite is that the image data from PET and CT are segmented after fusion using image analysis techniques and subsequently projected using a 3-D imaging system and virtual reality techniques. Together with the Department of Nuclear Medicine of the Technical University, Munich, a system has been developed for virtual bronchoscopy that is intended to improve diagnostic precision in cases of bronchial carcinoma. (orig.)
Polya Theory for Orbiquotient Sets
Blandin, Hector; Diaz, Rafael
2005-01-01
Replacing the usual notion of quotient sets by the notion of orbiquotient sets, we obtain a generalization of Pólya theory. The key ingredient of our extended theory is the definition of the orbicycle index polynomial, which we compute in several examples. We apply our theory to the study of orbicycles on orbiquotient sets. Keywords: Orbifolds, Pólya Theory, Partition Lattice.
Vatrapu, Ravi; Mukkamala, Raghava Rao; Hussain, Abid
2016-01-01
Current analytical approaches in computational social science can be characterized by four dominant paradigms: text analysis (information extraction and classification), social network analysis (graph theory), social complexity analysis (complex systems science), and social simulations (cellular automata and agent-based modeling). However, when it comes to organizational and societal units of analysis, there exists no approach to conceptualize, model, analyze, explain, and predict social media interactions as individuals' associations with ideas, values, identities, and so on. To address this limitation, based on the sociology of associations and the mathematics of set theory, this paper presents a new approach to big data analytics called social set analysis. Social set analysis consists of a generative framework for the philosophies of computational social science, theory of social data…
李敏
2012-01-01
For any positive integer n, let a1, a2, …, ak be the divisors of n. Any two co-prime divisors put together form a set called a pairwise co-prime two-tuple set, denoted D2(n); any three pairwise co-prime divisors form a three-tuple set, denoted D3(n); and, in general, any r pairwise co-prime divisors form an r-tuple set, denoted Dr(n). Amarnath Murthy and Charles Ashbacher studied the arithmetical properties of Dr(n) and proposed a series of computational problems and conjectures concerning its order. In this paper, elementary methods are used to further study the properties of Dr(n) and to give exact computational formulas for |D2(n)| and |D3(n)| for some special integers n.
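The definitions above can be made concrete with a brute-force sketch (my own illustration, not the paper's method): enumerate the divisors of n and count the pairwise co-prime r-tuples directly.

```python
from itertools import combinations
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def coprime_r_tuples(n, r):
    """All sets of r divisors of n that are pairwise co-prime -- the
    D_r(n) of the abstract, taken here to include the divisor 1."""
    return [c for c in combinations(divisors(n), r)
            if all(gcd(a, b) == 1 for a, b in combinations(c, 2))]

print(len(coprime_r_tuples(12, 2)))  # |D2(12)|
print(len(coprime_r_tuples(12, 3)))  # |D3(12)|
```

For n = 12 (divisors 1, 2, 3, 4, 6, 12) this yields |D2(12)| = 7 and |D3(12)| = 2; such brute-force counts are a useful check on any closed-form formula for special n.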
Elizabeth B Hirsch
BACKGROUND: The use of tablet computers and other touch-screen technology within the healthcare system has rapidly expanded. It has been reported that these devices can harbor pathogens in hospitals; however, much less is known about the pathogens they harbor when used outside the hospital environment. METHODS: Thirty iPads belonging to faculty with a variety of practice settings were sampled to determine the presence and quantity of clinically relevant organisms. Flocked nylon swabs and neutralizer solution were used to sample the surface of each iPad. Samples were then plated on a variety of selective agars to assess the presence and quantity of selected pathogens. In addition, faculty members were surveyed to classify the physical location of their practice settings and usage patterns. Continuous variables were compared via an unpaired Student's t test with two-tailed distribution; categorical variables were compared with Fisher's exact test. RESULTS: Of the iPads sampled, 16 belonged to faculty practicing within a hospital and 14 belonged to faculty practicing outside a hospital. More faculty within the hospital group used their iPads at their practice sites (78.6% vs. 31.3%; p = 0.014) and within patient care areas (71.4% vs. 18.8%; p = 0.009) than the non-hospital group. There were no differences in the presence, absence, or quantity of any of the pathogens selectively isolated between groups. Problematic nosocomial pathogens such as methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant enterococci (VRE), and P. aeruginosa were isolated from both hospital and non-hospital faculty iPads. CONCLUSIONS: Gram-positive and Gram-negative organisms were recovered from the surfaces of iPads regardless of practice setting; these included problematic multidrug-resistant pathogens like MRSA, VRE, and Pseudomonas aeruginosa. Healthcare personnel in all settings should be aware of the potential for
Barbara Bryant
In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this "chromatin computer" to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal--and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines.
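A minimal toy sketch of the "chromatin computer" idea described above (the rule, mark name, and window mechanics here are my illustrative assumptions, not the paper's actual instruction set): nucleosomes are tape cells holding sets of marks, and a chromatin-modifying complex is a read-write rule over a window of adjacent cells.

```python
# Toy sketch: nucleosomes as tape cells holding sets of marks; a
# chromatin-modifying complex as a read-write rule on adjacent cells.

def apply_rule(tape, rule, width):
    """One left-to-right pass of `rule` over every window of `width` cells."""
    tape = [set(cell) for cell in tape]
    for i in range(len(tape) - width + 1):
        rule(tape[i:i + width])  # slices share the underlying set objects
    return tape

def spread_mark(window):
    """Copy the mark 'me3' one cell to the right, loosely modeling the
    processive spreading of a histone modification."""
    if "me3" in window[0]:
        window[1].add("me3")

# Because windows are visited left to right, a single pass cascades the
# mark from the first nucleosome across the whole tape.
tape = apply_rule([{"me3"}, set(), set(), set()], spread_mark, width=2)
```

The paper's universality claim rests on far richer rule sets than this single spreading rule; the sketch only shows the tape-and-rule shape of the model.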
Incremental computation of set difference views
Bækgaard, Lars; Mark, Leo
1997-01-01
A database view identifies a subset of a database that is relevant in a given context. Such a view can be computed in two different ways. Under recomputation, everything is computed from scratch each time the view is used. Under incremental computation, old computation results are updated relative to the intervening…
Implementation of Steiner point of fuzzy set.
Liang, Jiuzhen; Wang, Dejiang
2014-01-01
This paper deals with the implementation of Steiner point of fuzzy set. Some definitions and properties of Steiner point are investigated and extended to fuzzy set. This paper focuses on establishing efficient methods to compute Steiner point of fuzzy set. Two strategies of computing Steiner point of fuzzy set are proposed. One is called linear combination of Steiner points computed by a series of crisp α-cut sets of the fuzzy set. The other is an approximate method, which is trying to find the optimal α-cut set approaching the fuzzy set. Stability analysis of Steiner point of fuzzy set is also studied. Some experiments on image processing are given, in which the two methods are applied for implementing Steiner point of fuzzy image, and both strategies show their own advantages in computing Steiner point of fuzzy set.
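As a rough illustration of the α-cut strategy described above (this is my sketch under simplifying assumptions -- 2-D crisp sets given as point clouds, combination weights supplied by the caller -- not the authors' implementation), the Steiner point of a crisp convex set can be approximated from its support function, and a fuzzy Steiner point obtained as a weighted combination over α-cuts:

```python
import numpy as np

def steiner_point(points, m=10000):
    """Approximate the Steiner point of the convex hull of 2-D `points`,
    via s(K) = (1/pi) * integral over the unit circle of h_K(u) * u du,
    where h_K is the support function, discretized over m directions."""
    pts = np.asarray(points, dtype=float)
    phi = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    U = np.stack([np.cos(phi), np.sin(phi)], axis=1)   # unit directions
    h = (U @ pts.T).max(axis=1)                        # support values h_K(u)
    return (h[:, None] * U).sum(axis=0) * (2.0 * np.pi / m) / np.pi

def fuzzy_steiner_point(alpha_cuts, weights):
    """'Linear combination' strategy: weighted sum of the Steiner points of
    the alpha-cut sets (weights assumed nonnegative, summing to 1)."""
    return sum(w * steiner_point(cut) for w, cut in zip(weights, alpha_cuts))
```

For a symmetric body the formula recovers the center, e.g. the unit square shifted to have center (1, 1) yields approximately (1, 1), which is a quick sanity check on the discretization.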
Rough sets and near sets in medical imaging: a review.
Hassanien, Aboul Ella; Abraham, Ajith; Peters, James F; Schaefer, Gerald; Henry, Christopher
2009-11-01
This paper presents a review of the current literature on rough-set- and near-set-based approaches to solving various problems in medical imaging such as medical image segmentation, object extraction, and image classification. Rough set frameworks hybridized with other computational intelligence technologies that include neural networks, particle swarm optimization, support vector machines, and fuzzy sets are also presented. In addition, a brief introduction to near sets and near images with an application to MRI images is given. Near sets offer a generalization of traditional rough set theory and a promising approach to solving the medical image correspondence problem as well as an approach to classifying perceptual objects by means of features in solving medical imaging problems. Other generalizations of rough sets such as neighborhood systems, shadowed sets, and tolerance spaces are also briefly considered in solving a variety of medical imaging problems. Challenges to be addressed and future directions of research are identified and an extensive bibliography is also included.
东野长磊
2011-01-01
Based on a Field Programmable Gate Array (FPGA) platform, this paper designs an embedded Reduced Instruction Set Computer (RISC) Central Processing Unit (CPU). The instruction set is designed with reference to the Microprocessor without Interlocked Pipeline Stages (MIPS) instruction set principles. By analyzing the processing of each instruction, a 5-stage pipeline for the embedded CPU is built; data-forwarding technology and software compilation methods are adopted to resolve pipeline hazards. The key modules of the CPU -- the Arithmetic Logic Unit (ALU), the control unit, and the instruction cache -- are designed. Verification results show that the speed and stability of the embedded RISC CPU meet the design requirements.
Herschberg, I. S.; Mebius, J. E.
1989-08-01
The Sappho epigram mentioned in the title is shown to contain implicit astronomical information, which must have contributed to the expressiveness of Sappho's short poem to contemporary audiences. Astronomical computations are given to discover the earliest and the latest time of year for which the Pleiads set at midnight while being visible earlier in the evening, taking into account the atmospheric refraction. The time of year for which Sappho's poem is valid is concluded to run from 17 Jan. to 29 Mar.
Campbell, Gardner
2007-01-01
In this article, the author relates the big role of computers in his life as a writer. The author narrates that he has been using a computer for nearly twenty years now. He relates that computers have set his writing free. When he started writing, he was using just an electric typewriter. He also relates that his romance with computers is also a…
Burgin, Mark
2010-01-01
Continuous models used in physics and other areas of applied mathematics become discrete when they are computerized, e.g., utilized for computations. Moreover, computers control processes in discrete spaces, such as films and television programs. At the same time, the continuous models that lie behind discrete representations use mathematical technology developed for continuous media. The most important example of such a technology is calculus, which is so useful in physics and other sciences. The main goal of this paper is to synthesize the continuous features and powerful technology of classical calculus with the discrete approach of numerical mathematics and computational physics. To do this, we further develop the theory of fuzzy continuous functions and apply this theory to functions defined on discrete sets. The main interest is the classical Intermediate Value theorem. Although the result of this theorem is completely based on continuity, utilization of a relaxed version of contin…
Vatrapu, Ravi; Hussain, Abid; Buus Lassen, Niels
2015-01-01
This paper argues that the basic premise of Social Network Analysis (SNA) -- namely that social reality is constituted by dyadic relations and that social interactions are determined by structural properties of networks -- is neither necessary nor sufficient for Big Social Data analytics of Facebook or Twitter data. However, there exists no other holistic computational social science approach beyond the relational sociology and graph theory of SNA. To address this limitation, this paper presents an alternative holistic approach to Big Social Data analytics called Social Set Analysis (SSA). Based on the sociology of associations and the mathematics of classical, fuzzy and rough set theories, this paper proposes a research program, the function of which is to design, develop and evaluate social set analytics in terms of fundamentally novel formal models, predictive methods and visual…
Svozil, K. [Univ. of Technology, Vienna (Austria)
1995-11-01
Inasmuch as physical theories are formalizable, set theory provides a framework for theoretical physics. Four speculations about the relevance of set-theoretical modeling for physics are presented: the role of transcendental set theory (i) in chaos theory, (ii) for paradoxical decompositions of solid three-dimensional objects, (iii) in the theory of effective computability (Church-Turing thesis) related to the possible "solution of supertasks," and (iv) for weak solutions. Several approaches to set theory and their advantages and disadvantages for physical applications are discussed: Cantorian "naive" (i.e., nonaxiomatic) set theory, constructivism, and operationalism. In the author's opinion, an attitude of "suspended attention" (a term borrowed from psychoanalysis) seems most promising for progress. Physical and set-theoretical entities must be operationalized wherever possible. At the same time, physicists should be open to "bizarre" or "mindboggling" new formalisms, which need not be operationalizable or testable at the time of their creation, but which may successfully lead to novel fields of phenomenology and technology.
Comparing two sets without disclosing them
LI ShunDong; DAI YiQi; WANG DaoShun; LUO Ping
2008-01-01
Secure multiparty computation has become a central research focus in the international cryptographic community, and securely comparing two sets is an important problem within it. The question of privately determining whether two sets are equal had not previously been investigated. This study solves the problem by mapping the sets into natural numbers and then comparing the corresponding numbers. We propose two secure multiparty computation protocols for comparing two sets. Using the well-accepted simulation paradigm, these solutions are proved to be private in the semi-honest model. They are significant building blocks for constructing other secure multiparty computation protocols.
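The core encoding idea -- mapping sets to natural numbers so that set equality reduces to number equality -- can be sketched as follows. This toy (the element-to-prime assignment is my own choice) shows only the encoding step; it entirely omits the cryptographic protocol that makes the comparison private.

```python
def primes(n):
    """First n primes by trial division (enough for a small universe)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def encode(s, universe):
    """Map a subset of `universe` to a natural number: the product of the
    distinct primes assigned to its elements. By unique factorization, two
    subsets are equal iff their codes are equal, regardless of element order."""
    prime_of = dict(zip(sorted(universe), primes(len(universe))))
    code = 1
    for x in s:
        code *= prime_of[x]
    return code

U = {"a", "b", "c", "d"}
print(encode({"a", "c"}, U) == encode({"c", "a"}, U))  # True: same set
```

In the actual protocols, the parties would compare such numbers without revealing them; here the numbers are in the clear.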
Chongtay, Rocio; Robering, Klaus
2016-01-01
…for the acquisition of Computational Literacy at basic educational levels, focus on higher levels of education has been much less prominent. The present paper considers the case of courses for higher-education programs within the Humanities. A model is proposed which conceives of Computational Literacy as a layered set of skills rather than one single skill. Skills acquisition at these layers can be tailored to the specific needs of students. The work presented here builds upon experience from courses for such students from the Humanities in which programming is taught as a tool for other purposes. Results…
Fast search algorithms for computational protein design.
Traoré, Seydou; Roberts, Kyle E; Allouche, David; Donald, Bruce R; André, Isabelle; Schiex, Thomas; Barbe, Sophie
2016-05-01
One of the main challenges in computational protein design (CPD) is the huge size of the protein sequence and conformational space that has to be computationally explored. Recently, we showed that state-of-the-art combinatorial optimization technologies based on Cost Function Network (CFN) processing allow speeding up provable rigid-backbone protein design methods by several orders of magnitude. Building on this, we improved and injected CFN technology into the well-established CPD package Osprey to allow all Osprey CPD algorithms to benefit from the associated speedups. Because Osprey fundamentally relies on the ability of A* to produce conformations in increasing order of energy, we defined new A* strategies combining CFN lower bounds with a new side-chain positioning-based branching scheme. Beyond the speedups obtained in the new A*-CFN combination, this novel branching scheme enables a much faster enumeration of suboptimal sequences, far beyond what is reachable without it. Together with the immediate and important speedups provided by CFN technology, these developments directly benefit all the algorithms that previously relied on the DEE/A* combination inside Osprey and make it possible to solve larger CPD problems with provable algorithms.
Evgeniy K. Khenner
2016-01-01
Abstract. The aim of the research is to draw the attention of the educational community to the phenomenon of computational thinking, which has been actively discussed over the last decade in the foreign scientific and educational literature, and to substantiate its importance, practical utility, and rightful place in Russian education. Methods. The research is based on an analysis of foreign studies of the phenomenon of computational thinking and the ways it is formed in the process of education, and on comparing the notion of «computational thinking» with related concepts used in the Russian scientific and pedagogical literature. Results. The concept of «computational thinking» is analyzed from the point of view of intuitive understanding as well as scientific and applied aspects. It is shown how computational thinking has evolved along with the development of computer hardware and software. The practice-oriented interpretation of computational thinking dominant among educators is described, along with some ways of forming it. It is shown that computational thinking is a metasubject result of general education as well as one of its tools. From the author's point of view, the purposeful development of computational thinking should be one of the tasks of Russian education. Scientific novelty. The author gives a theoretical justification of the role of computational thinking schemes as metasubject results of learning. The dynamics of the development of this concept is described; this process is connected with the evolution of computer and information technologies as well as the growing number of tasks whose effective solution requires computational thinking. The author substantiates the claim that including «computational thinking» in the set of pedagogical concepts used in the national education system fills an existing gap. Practical significance. A new metasubject result of education associated with
Avery, John Scales; Rettrup, Sten; Avery, James Emil
In theoretical physics, theoretical chemistry, and engineering, one often wishes to solve partial differential equations subject to a set of boundary conditions. This gives rise to eigenvalue problems of which some solutions may be very difficult to find. For example, the problem of finding eigenfunctions and eigenvalues for the Hamiltonian of a many-particle system is usually so difficult that it requires approximate methods, the most common of which is expansion of the eigenfunctions in terms of basis functions that obey the boundary conditions of the problem. The computational effort needed…
Flesch, Benjamin; Hussain, Abid; Vatrapu, Ravi
2015-01-01
This paper presents a state-of-the-art visual analytics dashboard, Social Set Visualizer (SoSeVi), of approximately 90 million Facebook actions from 11 different companies that have been mentioned in the traditional media in relation to garment factory accidents in Bangladesh. The enterprise application domain for the dashboard is Corporate Social Responsibility (CSR) and the targeted end-users are CSR researchers and practitioners. The design of the dashboard was based on the "social set analytics" approach to computational social science. The development of the dashboard involved cutting-edge open source visual analytics libraries from D3.js and the creation of new visualizations (actor mobility across time, conversational comets, etc.). Evaluation of the dashboard, consisting of technical testing, usability testing, and domain-specific testing with CSR students, yielded positive results.
Hacker, M.; Hack, N.; Tiling, R. [Klinikum Grosshadern (Germany). Dept. of Nuclear Medicine; Jakobs, T.; Nikolaou, K.; Becker, C. [Klinikum Grosshadern (Germany). Dept. of Clinical Radiology; Ziegler, F. von; Knez, A. [Klinikum Grosshadern (Germany). Dept. of Cardiology; Koenig, A.; Klauss, V. [Medizinische Poliklinik-Innenstadt, Univ. of Munich (Germany). Dept. of Cardiology
2007-07-01
Aim: In patients with stable angina pectoris, both morphological and functional information about the coronary artery tree should be available before revascularization therapy is performed. High accuracy has been shown for spiral computed tomography (MDCT) angiography acquired with a 64-slice CT scanner, compared to invasive coronary angiography (ICA), in detecting "obstructive" coronary artery disease (CAD). Gated myocardial SPECT (MPI) is an established method for the noninvasive assessment of the functional significance of coronary stenoses. The aim of the study was to evaluate the combination of 64-slice CT angiography plus MPI, in comparison to ICA plus MPI, in the detection of hemodynamically relevant coronary artery stenoses in a clinical setting. Patients, methods: 30 patients (63 ± 10.8 years; 23 men) with stable angina (21 with suspected, 9 with known CAD) were investigated. MPI, 64-slice CT angiography, and ICA were performed, and reversible and fixed perfusion defects were allocated to determining lesions separately for MDCT angiography and ICA. The combination of MDCT angiography plus MPI was compared to the results of ICA plus MPI. Results: Sensitivity, specificity, negative and positive predictive value for the combination of MDCT angiography plus MPI were 85%, 97%, 98% and 79%, respectively, on a vessel-based level, and 93%, 87%, 93% and 88%, respectively, on a patient-based level. 19 coronary arteries with stenoses ≥ 50% in both ICA and MDCT angiography showed no ischemia in MPI. Conclusion: The combination of 64-slice CT angiography and gated myocardial SPECT enabled a comprehensive non-invasive view of the anatomical and functional status of the coronary artery tree. (orig.)
Reachability Analysis Applied to Space Situational Awareness
2009-09-01
The corresponding initial conditions x0ᵀ = [d0ᵀ v0ᵀ] satisfying (5), as well as the corresponding initial direction magnitude d0 and velocity v0, are found by solving a system of equations involving the matrix M and the flow φz(tf; x0) [19].
Computational intelligence in optimization
Tenne, Yoel
2010-01-01
This volume presents a collection of recent studies covering the spectrum of computational intelligence applications with emphasis on their application to challenging real-world problems. Topics covered include: Intelligent agent-based algorithms, Hybrid intelligent systems, Cognitive and evolutionary robotics, Knowledge-Based Engineering, fuzzy sets and systems, Bioinformatics and Bioengineering, Computational finance and Computational economics, Data mining, Machine learning, and Expert systems. ""Computational Intelligence in Optimization"" is a comprehensive reference for researchers, prac
Computational physics an introduction
Vesely, Franz J
1994-01-01
Author Franz J. Vesely offers students an introductory text on computational physics, providing them with the important basic numerical/computational techniques. His unique text sets itself apart from others by focusing on specific problems of computational physics. The author also provides a selection of modern fields of research. Students will benefit from the appendixes which offer a short description of some properties of computing and machines and outline the technique of 'Fast Fourier Transformation.'
Cloud Computing for radiologists.
Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit
2012-07-01
Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on the maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure its success in the future.
Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.
1981-01-01
The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.
Fast Sparse Level Sets on Graphics Hardware.
Jalba, Andrei C; van der Laan, Wladimir J; Roerdink, Jos B T M
2013-01-01
The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive simulations has been limited due to the high computational demands involved. In this paper, we address this computational challenge by leveraging the increased computing power of graphics processors, to achieve fast simulations based on level sets. Our efficient, sparse GPU level-set method is substantially faster than other state-of-the-art, parallel approaches on both CPU and GPU hardware. We further investigate its performance through a method for surface reconstruction, based on GPU level sets. Our novel multiresolution method for surface reconstruction from unorganized point clouds compares favorably with recent, existing techniques and other parallel implementations. Finally, we point out that both level-set computations and rendering of level-set surfaces can be performed at interactive rates, even on large volumetric grids. Therefore, many applications based on level sets can benefit from our sparse level-set method.
Introduction to morphogenetic computing
Resconi, Germano; Xu, Guanglin
2017-01-01
This book offers a concise introduction to morphogenetic computing, showing that its use makes global and local relations, defects in crystal non-Euclidean geometry databases with source and sink, genetic algorithms, and neural networks more stable and efficient. It also presents applications to database, language, nanotechnology with defects, biological genetic structure, electrical circuit, and big data structure. In Turing machines, input and output states form a system – when the system is in one state, the input is transformed into output. This computation is always deterministic and without any possible contradiction or defects. In natural computation there are defects and contradictions that have to be solved to give a coherent and effective computation. The new computation generates the morphology of the system that assumes different forms in time. Genetic process is the prototype of the morphogenetic computing. At the Boolean logic truth value, we substitute a set of truth (active sets) values with...
Cohomotopy sets of 4-manifolds
Kirby, Robion; Teichner, Peter
2012-01-01
Elementary geometric arguments are used to compute the group of homotopy classes of maps from a 4-manifold X to the 3-sphere, and to enumerate the homotopy classes of maps from X to the 2-sphere. The former completes a project initiated by Steenrod in the 1940's, and the latter provides geometric arguments for and extensions of recent homotopy theoretic results of Larry Taylor. These two results complete the computation of all the cohomotopy sets of closed oriented 4-manifolds and provide a framework for the study of Morse 2-functions on 4-manifolds, a subject that has garnered considerable recent attention.
On Intuitionistic Fuzzy Sets Theory
Atanassov, Krassimir T
2012-01-01
This book aims to be a comprehensive and accurate survey of state-of-the-art research on intuitionistic fuzzy sets theory and could be considered a continuation and extension of the author's previous book on Intuitionistic Fuzzy Sets, published by Springer in 1999 (Atanassov, Krassimir T., Intuitionistic Fuzzy Sets, Studies in Fuzziness and Soft Computing, ISBN 978-3-7908-1228-2, 1999). Since the aforementioned book appeared, the research activity of the author within the area of intuitionistic fuzzy sets has been expanding in many directions. The results of the author's most recent work covering the past 12 years, as well as the newest general ideas and open problems in this field, have therefore been collected in this new book.
Learning with Ubiquitous Computing
Rosenheck, Louisa
2008-01-01
If ubiquitous computing becomes a reality and is widely adopted, it will inevitably have an impact on education. This article reviews the background of ubiquitous computing and current research projects done involving educational "ubicomp." Finally it explores how ubicomp may and may not change education in both formal and informal settings and…
Accuracy in Robot Generated Image Data Sets
Aanæs, Henrik; Dahl, Anders Bjorholm
2015-01-01
In this paper we present a practical innovation concerning how to achieve high accuracy of camera positioning when using a 6-axis industrial robot to generate high-quality data sets for computer vision. This innovation is based on the realization that, to a very large extent, the robot's positioning error is deterministic and can as such be calibrated away. We have successfully used this innovation in our efforts to create data sets for computer vision. Since the use of this innovation has a significant effect on data set quality, we present it here in some detail, to better aid others in using robots for image data set generation.
Tally NP Sets and Easy Census Functions
Goldsmith, Judy; Ogihara, Mitsunori; Rothe, Joerg
1998-01-01
We study the question of whether every P set has an easy (i.e., polynomial-time computable) census function. We characterize this question in terms of unlikely collapses of language and function classes such as the containment of #P_1 in FP, where #P_1 is the class of functions that count the witnesses for tally NP sets. We prove that every #P_{1}^{PH} function can be computed in FP^{#P_{1}^{#P_{1}}}. Consequently, every P set has an easy census function if and only if every set in the polyno...
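As a toy illustration of the census-function notion in the abstract above (not taken from the paper, and all names invented for the sketch): the census function of a language maps each length n to the number of members of that length, and a P set has an "easy" census function when that count is computable in polynomial time. For the simple P set of binary strings with an even number of 1s, brute-force counting agrees with a constant-time closed form.

```python
# Illustrative only: census function of L = binary strings with an even
# number of 1s, a P set whose census admits the closed form 2^(n-1) for n >= 1.
from itertools import product

def in_L(x: str) -> bool:
    """Membership test for L (even number of 1s)."""
    return x.count("1") % 2 == 0

def census_bruteforce(n: int) -> int:
    """Count members of L at length n by enumeration (exponential time)."""
    return sum(1 for bits in product("01", repeat=n) if in_L("".join(bits)))

def census_easy(n: int) -> int:
    """Polynomial-time (here: constant-time) census for the same set."""
    return 1 if n == 0 else 2 ** (n - 1)

# The two agree, so this particular P set has an "easy" census function.
assert all(census_bruteforce(n) == census_easy(n) for n in range(8))
```

The open question the paper characterizes is whether such an easy census function exists for every P set, not just for hand-picked examples like this one.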
Discrete computational structures
Korfhage, Robert R
1974-01-01
Discrete Computational Structures describes discrete mathematical concepts that are important to computing, covering necessary mathematical fundamentals, computer representation of sets, graph theory, storage minimization, and bandwidth. The book also explains the conceptual framework (Gorn trees, searching, subroutines) and directed graphs (flowcharts, critical paths, information networks). The text also discusses algebra as it applies to computing, concentrating on semigroups, groups, lattices, and the propositional calculus, and includes a new tabular method of Boolean function minimization. The text emphasizes
Suzan Lema Tamer
2010-03-01
, there was nobody except for the researchers in the labs during the observations. The collected data were descriptively analyzed according to the previously constructed conceptual framework of the study. The data were summarized under the six dimensions of the observation form. The observed values for each dimension were tabulated and compared with the suggested criteria and standards in the literature. The values within the acceptable ranges were considered to comply with the ergonomic standards, and those outside the ranges were considered inappropriate conditions that should be improved. Regarding physical characteristics, all three labs have acceptable area and volume levels per student, lighting devices, lighting control, and curtains to control daylight. However, none of the three labs takes advantage of natural lighting. One lab has an unsafe electricity installation and two labs have reachable circuit breakers. All three labs have ideal conditions with regard to humidity and temperature levels. As far as the noise levels are concerned, all three labs produce disturbing noise levels while all the devices are switched off (the suggested level should be lower than 35 dB). On the other hand, only one lab produces disturbing noise while all the devices are in working condition (the suggested level should be no more than 50 dB). Moreover, all three labs fail to comply with most of the ergonomic standards for desks and chairs, such as inadequate width of desks and chairs and the lack of height adjustment, armrests, backrests, and swivel features on the chairs. Concerning technical features, all the labs have appropriate monitor and keyboard settings except for the height of the keyboard location. In conclusion, the results reveal that the physical features of the computer labs, monitor features, and relative humidity and temperature levels are in agreement with the ergonomic criteria.
However, desks and chairs, keyboard features, and noise levels fail to
S-parameter uncertainty computations
Vidkjær, Jens
1993-01-01
A method for computing uncertainties of measured s-parameters is presented. Unlike the specification software provided with network analyzers, the new method is capable of calculating the uncertainties of arbitrary s-parameter sets and instrument settings.
Computational thinking as an emerging competence domain
Yadav, A.; Good, J.; Voogt, J.; Fisser, P.; Mulder, M.
2016-01-01
Computational thinking is a problem-solving skill set, which includes problem decomposition, algorithmic thinking, abstraction, and automation. Even though computational thinking draws upon concepts fundamental to computer science (CS), it has broad application to all disciplines. It has been
Baranowska-Łączkowska, Angelika; Bartkowiak, Wojciech; Góra, Robert W; Pawłowski, Filip; Zaleśny, Robert
2013-04-05
Static longitudinal electric dipole (hyper)polarizabilities are calculated for six medium-sized π-conjugated organic molecules using recently developed LPol-n basis set family to assess their performance. Dunning's correlation-consistent basis sets of triple-ζ quality combined with MP2 method and supported by CCSD(T)/aug-cc-pVDZ results are used to obtain the reference values of analyzed properties. The same reference is used to analyze (hyper)polarizabilities predicted by selected exchange-correlation functionals, particularly those asymptotically corrected.
Set signatures and their applications
WU ChuanKun
2009-01-01
There are many constraints in the use of digital signatures. This paper proposes a new way of using digital signatures with some restrictions, i.e. set signatures. It works in such a way that when the signing algorithm is given, one can use it to create a valid signature on a message if and only if the message belongs to a pre-defined set, and given the information about the signing algorithm, it is computationally infeasible to create valid signatures on any other arbitrary messages outside of the set. This special property enables the signing algorithm to be made public, which seems to contradict the traditional signature setting, where a private key is needed and must be kept secret. What makes the problem challenging is that the signing algorithm does not reveal the secret signing key, and hence forging normal signatures for arbitrary messages is computationally infeasible. In many cases, the signing algorithm does not reveal the elements in the authorized set. As an application of the new concept, set signatures for intelligent mobile agents committing to a "smaller than" condition are studied, which shows the applicability of set signatures on small sets.
Holz, Elisa Mira
2016-01-01
Brain-computer interfaces (BCIs) are devices that translate signals from the brain into control commands for applications. Within the last twenty years, BCI applications have been developed for communication, environmental control, entertainment, and substitution of motor functions. Since BCIs provide muscle independent communication and control of the environment by circumventing motor pathways, they are considered as assistive technologies for persons with neurological and neurodegenerative...
Probabilistic Approach to Rough Set Theory
Wojciech Ziarko
2006-01-01
The presentation introduces the basic ideas and investigates the probabilistic approach to rough set theory. The major aspects of the probabilistic approach to rough set theory to be explored during the presentation are: the probabilistic view of the approximation space, the probabilistic approximations of sets, as expressed via variable precision and Bayesian rough set models, and probabilistic dependencies between sets and multi-valued attributes, as expressed by the absolute certainty gain and expected certainty gain measures, respectively. The probabilistic dependency measures allow for representation of subtle stochastic associations between attributes. They also allow for more comprehensive evaluation of rules computed from data and for computation of attribute reduct, core and significance factors in probabilistic decision tables. It will be shown that the probabilistic dependency measure-based attribute reduction techniques are also extendible to hierarchies of decision tables. The presentation will include computational examples to illustrate presented concepts and to indicate possible practical applications.
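The variable-precision approximations mentioned above can be made concrete in a few lines. This is a generic illustration of the idea, not code from the presentation: an equivalence class E of the approximation space joins the lower approximation of X when the conditional probability P(X | E) meets a precision threshold beta, and the upper approximation when P(X | E) exceeds 1 - beta (one common threshold convention among several).

```python
# Variable-precision rough approximations of a set X, given a partition of
# the universe into equivalence classes (illustrative sketch, 0.5 < beta <= 1).
def vpr_approximations(classes, X, beta=0.8):
    X = set(X)
    lower, upper = set(), set()
    for E in classes:
        E = set(E)
        p = len(E & X) / len(E)   # conditional probability P(X | E)
        if p >= beta:
            lower |= E            # class is "certainly" in X at precision beta
        if p > 1 - beta:
            upper |= E            # class is "possibly" in X at precision beta
    return lower, upper

# Universe {1..10} partitioned into three equivalence classes.
classes = [{1, 2, 3, 4}, {5, 6}, {7, 8, 9, 10}]
X = {1, 2, 3, 5, 7}
lo, up = vpr_approximations(classes, X, beta=0.75)
```

With beta = 0.75, the first class (3 of 4 members in X) enters the lower approximation, while the third class (1 of 10 members in X) is excluded even from the upper approximation, which is exactly the tolerance for noise that distinguishes the variable-precision model from classical rough sets.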
Fuzzy sets, rough sets, multisets and clustering
Dahlbom, Anders; Narukawa, Yasuo
2017-01-01
This book is dedicated to Prof. Sadaaki Miyamoto and presents cutting-edge papers in some of the areas in which he contributed. Bringing together contributions by leading researchers in the field, it concretely addresses clustering, multisets, rough sets and fuzzy sets, as well as their applications in areas such as decision-making. The book is divided into four parts, the first of which focuses on clustering and classification. The second part puts the spotlight on multisets, bags, fuzzy bags and other fuzzy extensions, while the third deals with rough sets. Rounding out the coverage, the last part explores fuzzy sets and decision-making.
Dr.Pranita Goswami
2011-01-01
The Partial Fuzzy Set is a portion of the Fuzzy Set which is again a Fuzzy Set. In the Partial Fuzzy Set the baseline is shifted from 0 to 1 to any of its α cuts . In this paper we have fuzzified a portion of the Fuzzy Set by transformation
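The α-cut and baseline shift described above can be sketched in a few lines. This is illustrative only: the α-cut is the standard construction, while `shift_baseline` is one plausible reading of the paper's transformation (an assumption here, not the paper's own formula).

```python
# Illustrative sketch: alpha-cut of a discrete fuzzy set, plus one possible
# "baseline shift" that rescales the surviving grades from [alpha, 1] to [0, 1].
def alpha_cut(A, alpha):
    """Crisp set of elements whose membership grade is at least alpha."""
    return {x for x, mu in A.items() if mu >= alpha}

def shift_baseline(A, alpha):
    """Rescale grades of the alpha-cut onto [0, 1] (an assumed reading of
    shifting the baseline from 0 to alpha; not taken from the paper)."""
    return {x: (mu - alpha) / (1 - alpha)
            for x, mu in A.items() if mu >= alpha}

A = {"a": 0.2, "b": 0.5, "c": 0.8, "d": 1.0}
cut = alpha_cut(A, 0.5)          # elements at or above the 0.5 cut
shifted = shift_baseline(A, 0.5) # those elements with grades rescaled
```

Note that the result of `shift_baseline` is itself a fuzzy set with grades in [0, 1], matching the abstract's remark that the fuzzified portion is again a fuzzy set.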
Matrix element method for high performance computing platforms
Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.
2015-12-01
A lot of effort has been devoted by the ATLAS and CMS teams to improving the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to face up to the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS datasets at a moderate cost. In this article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC Run 1 within 10 days, and that we have a satisfying metric for the upcoming Run 2. Future work will consist in finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.
CERN. Geneva
2011-01-01
The past decade has witnessed a momentous transformation in the way people interact with each other. Content is now co-produced, shared, classified, and rated by millions of people, while attention has become the ephemeral and valuable resource that everyone seeks to acquire. This talk will describe how social attention determines the production and consumption of content within both the scientific community and social media, how its dynamics can be used to predict the future and the role that social media plays in setting the public agenda. About the speaker Bernardo Huberman is a Senior HP Fellow and Director of the Social Computing Lab at Hewlett Packard Laboratories. He received his Ph.D. in Physics from the University of Pennsylvania, and is currently a Consulting Professor in the Department of Applied Physics at Stanford University. He originally worked in condensed matter physics, ranging from superionic conductors to two-dimensional superfluids, and made contributions to the theory of critical p...
A new MCNP™ test set
Brockhoff, R.C.; Hendricks, J.S.
1994-09-01
The MCNP test set is used to test the MCNP code after installation on various computer platforms. For MCNP4 and MCNP4A this test set included 25 test problems designed to test as many features of the MCNP code as possible. A new and better test set has been devised to increase coverage of the code from 85% to 97% with 28 problems. The new test set is as fast as and shorter than the MCNP4A test set. The authors describe the methodology for devising the new test set, the features that were not covered in the MCNP4A test set, and the changes in the MCNP4A test set that have been made for MCNP4B and its developmental versions. Finally, new bugs uncovered by the new test set and a compilation of all known MCNP4A bugs are presented.
Bounded Computational Capacity Equilibrium
Hernandez, Penelope
2010-01-01
We study repeated games played by players with bounded computational power, where, in contrast to Abreu and Rubinstein (1988), the memory is costly. We prove a folk theorem: the limit set of equilibrium payoffs in mixed strategies, as the cost of memory goes to 0, includes the set of feasible and individually rational payoffs. This result stands in sharp contrast to Abreu and Rubinstein (1988), who proved that when memory is free, the set of equilibrium payoffs in repeated games played by players with bounded computational power is a strict subset of the set of feasible and individually rational payoffs. Our result emphasizes the role of memory cost and of mixing when players have bounded computational power.
Computer Use and Computer Anxiety in Older Korean Americans.
Yoon, Hyunwoo; Jang, Yuri; Xie, Bo
2016-09-01
Responding to the limited literature on computer use in ethnic minority older populations, the present study examined predictors of computer use and computer anxiety in older Korean Americans. Separate regression models were estimated for computer use and computer anxiety with the common sets of predictors: (a) demographic variables (age, gender, marital status, and education), (b) physical health indicators (chronic conditions, functional disability, and self-rated health), and (c) sociocultural factors (acculturation and attitudes toward aging). Approximately 60% of the participants were computer-users, and they had significantly lower levels of computer anxiety than non-users. A higher likelihood of computer use and lower levels of computer anxiety were commonly observed among individuals with younger age, male gender, advanced education, more positive ratings of health, and higher levels of acculturation. In addition, positive attitudes toward aging were found to reduce computer anxiety. Findings provide implications for developing computer training and education programs for the target population.
An inclusion measure between fuzzy sets
Wang, Jing
2017-01-01
In this paper, we propose a new inclusion measure between fuzzy sets. Firstly, we select an axiomatic definition for the inclusion measure. Then, we present a new computation formula based on the selected axiomatic definition, and demonstrate its two properties. Finally, we give examples to validate its performance. The results show that the new inclusion measure is rational for fuzzy sets.
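One standard inclusion (subsethood) measure can be sketched as follows; this is the classical Sanchez/Kosko-style formula, given here for orientation only, and is not necessarily the new measure proposed in the paper.

```python
# Classical subsethood sketch (illustrative): the degree to which a discrete
# fuzzy set A is included in B is sum(min(A, B)) / sum(A), in [0, 1].
def subsethood(A, B):
    """Degree of inclusion of fuzzy set A in fuzzy set B over a common domain."""
    num = sum(min(A.get(x, 0.0), B.get(x, 0.0)) for x in A)
    den = sum(A.values())
    return 1.0 if den == 0 else num / den  # the empty set is included in anything

A = {"x": 0.4, "y": 0.8}
B = {"x": 0.6, "y": 0.5}
# min-sums: min(0.4, 0.6) + min(0.8, 0.5) = 0.9; sum of A's grades = 1.2
degree = subsethood(A, B)  # 0.9 / 1.2 = 0.75
```

Any candidate measure, including the paper's, is checked against axioms of this kind: it should equal 1 exactly when A is pointwise dominated by B, and should decrease as B's grades fall below A's.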
Set Constraints and Logic Programming (Preprint)
2016-02-24
Reference fragments: A. Aiken, D. Kozen, and E. Wimmers, Decidability of systems of set constraints with negative...; Extensions of Logic Programming (ELP), Lecture Notes in Artificial Intelligence, Springer; J. Engelfriet, Tree...; D. Kozen, Set constraints and logic programming (abstract), in J.-P. Jouannaud (ed.), Proc. First Conf. Constraints in
Fast Sparse Level Sets on Graphics Hardware
Jalba, Andrei C.; Laan, Wladimir J. van der; Roerdink, Jos B.T.M.
2013-01-01
The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive sim
ON THE EXACT HAUSDORFF MEASURE OF A CLASS OF SELF-SIMILAR SETS SATISFYING OPEN SET CONDITION
Shaoyuan Xu; Weiyi Su; Zuoling Zhou
2008-01-01
In this paper, we provide a new effective method for computing the exact value of the Hausdorff measures of a class of self-similar sets satisfying the open set condition (OSC). As applications, we discuss a self-similar Cantor set satisfying the OSC and give a simple method for computing its exact Hausdorff measure.
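For orientation, the dimension at which such a Hausdorff measure is taken is easy to compute: under the open set condition, the Hausdorff dimension of a self-similar set with contraction ratios r_i is the similarity dimension s solving Moran's equation, sum of r_i^s equal to 1. The sketch below (illustrative, not the paper's method, which concerns the harder problem of the exact measure value) solves Moran's equation by bisection.

```python
# Solve Moran's equation sum(r_i ** s) == 1 for the similarity dimension s,
# which equals the Hausdorff dimension under the open set condition.
import math

def similarity_dimension(ratios, lo=0.0, hi=10.0, tol=1e-12):
    f = lambda s: sum(r ** s for r in ratios) - 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        # f is strictly decreasing in s (each r_i < 1), so keep the half
        # interval that still contains the root.
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Middle-thirds Cantor set: two contractions of ratio 1/3 each.
s = similarity_dimension([1/3, 1/3])
# Analytically s = log 2 / log 3; the exact Hausdorff measure at s is 1.
assert abs(s - math.log(2) / math.log(3)) < 1e-9
```

For the classical middle-thirds Cantor set the exact Hausdorff measure at this dimension is known to be 1; for more general self-similar sets, evaluating the measure exactly is the nontrivial problem the paper addresses.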
Antonio P. BERBER SARDINHA
1999-02-01
This study presents a methodology for the identification of coherent word sets. Eight sets were initially identified and further grouped into two main sets: a 'company' set and a 'non-company' set. These two sets shared very few collocates, and therefore they seemed to represent distinct topics. The positions of the words in the 'company' and 'non-company' sets across the text were computed. The results indicated that the 'non-company' sets referred to 'company' implicitly. Finally, the key words were compared to an automatic abridgment of the text, which revealed that nearly all key words were present in the abridgment. This was interpreted as suggesting that the key words may indeed represent the main contents of the text.
Physicists set new record for network data transfer
2007-01-01
"An international team of physicists, computer scientists, and network engineers joined forces to set new records for sustained data transfer between storage systems durint the SuperComputing 2006 (SC06) Bandwidth Challenge (BWC). (3 pages)
Nor Hashimah Sulaiman
2013-01-01
We introduce a novel concept of multiaspect soft set, which is an extension of the ordinary soft set of Molodtsov. Some basic concepts, operations, and properties of multiaspect soft sets are studied. We also define a mapping on multiaspect soft classes and investigate several properties related to the images and preimages of multiaspect soft sets.
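The ordinary soft sets that the abstract extends can be sketched very simply. In Molodtsov's formulation a soft set over a universe U is a map from parameters to subsets of U; the dict-based sketch below is illustrative only, and the multiaspect extension from the paper is not reproduced here.

```python
# A soft set over universe U as a map parameter -> subset of U (Molodtsov),
# with the usual parameter-wise union operation. Illustrative sketch only.
U = {"h1", "h2", "h3", "h4"}          # e.g., a universe of houses

F = {"cheap": {"h1", "h2"}, "modern": {"h2", "h3"}}   # soft set (F, E1)
G = {"cheap": {"h2", "h4"}, "wooden": {"h1"}}         # soft set (G, E2)

def soft_union(F, G):
    """Union of two soft sets: parameter-wise union of the approximate sets."""
    return {e: F.get(e, set()) | G.get(e, set())
            for e in set(F) | set(G)}

H = soft_union(F, G)   # defined on the union of both parameter sets
```

A multiaspect soft set, as the paper's name suggests, enriches this picture with multiple aspects of the parameter space; the mapping on soft classes studied in the paper then relates such structures to their images and preimages.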
Moncarz, Roger
2000-01-01
Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)
Computer-assisted learning and simulation lab with 40 DentSim units.
Welk, A; Maggio, M P; Simon, J F; Scarbecz, M; Harrison, J A; Wicks, R A; Gilpatrick, R O
2008-01-01
There are an increasing number of studies about the computer-assisted dental patient simulator DentSim (DenX, Israel), with which dental students can acquire cognitive motor skills in a multimedia environment. However, only very few studies have been published dealing with efficient ways to use and manage a computer-assisted dental simulation lab with 40 DentSim units. The current approach and optimization steps of the College of Dentistry at the University of Tennessee Health Science Center were evaluated based on theoretical and practical tests and by questionnaires (partial 5-point Likert scale). Half of the D1 (first-year) students (2004/05) already had experience with computer-assisted learning at their undergraduate college, and most of the students even expected to be taught via computer-assisted learning systems (83.5%) at the dental school. 87.3% of the students working with DentSim found the experience to be very interesting or interesting. Before the students carried out the preparation exercises, they were trained in the skills they needed to work with the sophisticated technology, e.g., system-specific operation skills (66.6% attained the maximal reachable points) and information-searching skills (79.5% attained the maximal reachable points). The indirect knowledge retention rate / incidental learning rate of the preparation exercises, in the sense of computer-assisted problem-oriented learning regarding anatomy, preparation procedures, and cavity design, was promising. The wide-ranging number of prepared teeth needed to acquire the necessary skills shows the varied individual learning curves of the students. The acceptance of, and response to, additional elective training time in the computer-assisted simulation lab were very high. Integrating the DentSim technology into the existing curriculum is a way to improve dental education, but it is also a challenge for both teachers and students. It requires a shift in both curriculum and instructional goals that
Springer handbook of computational intelligence
Pedrycz, Witold
2015-01-01
This is the first book covering the basics, the state of the art, and important applications of the complete, growing discipline of computational intelligence. This comprehensive handbook presents a unique synergy of various approaches and the new qualities to be gained by using hybrid approaches, including inspirations from biology, living organisms, and animate systems. The text is organized into 7 main parts: foundations, fuzzy sets, rough sets, evolutionary computation, neural networks, swarm intelligence, and hybrid computational intelligence systems.
Cook, Perry R.
This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).
Programming in biomolecular computation
Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue
2010-01-01
Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We introduce a model of computation that is evidently programmable, by programs reminiscent of low-level computer machine code, and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways and without arcane encodings of data and algorithm); it is also uniform: new "hardware" is not needed to solve new problems; and (last but not least) it is Turing complete.
Programming in Biomolecular Computation
Hartmann, Lars; Jones, Neil; Simonsen, Jakob Grue
2010-01-01
Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We introduce a model of computation that is evidently programmable, by programs reminiscent of low-level computer machine code, and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only executable but also compilable and interpretable; and it is Turing complete in a strong sense: a universal algorithm exists, that is able to execute any program, and is not asymptotically inefficient. A prototype model has been implemented (for now in silico on a conventional computer). This work opens new perspectives on just how computation may be specified at the biological level.
Asharani Shinde
2015-10-01
This document gives an insight into cloud computing, giving an overview of its key features as well as a detailed study of exactly how cloud computing works. Cloud computing lets you access all your applications and documents from anywhere in the world, freeing you from the confines of the desktop and thus making it easier for group members in different locations to collaborate. Certainly cloud computing can bring about strategic, transformational, and even revolutionary benefits fundamental to future enterprise computing, but it also offers immediate and pragmatic opportunities to improve efficiencies today while cost-effectively and systematically setting the stage for strategic change. As this technology makes computing, sharing, and networking easy and interesting, we should also think about the security and privacy of information. Thus the key points to be discussed are: what the cloud is, what its key features are, current applications, future status, and the security issues and their possible solutions.
Miliordos, Evangelos; Xantheas, Sotiris S
2015-06-21
We report MP2 and Coupled Cluster Singles, Doubles, and perturbative Triples [CCSD(T)] binding energies with basis sets up to pentuple-zeta quality for the (H2O)m=2-6,8 water clusters. Our best CCSD(T)/Complete Basis Set (CBS) estimates are -4.99 ± 0.04 kcal/mol (dimer), -15.8 ± 0.1 kcal/mol (trimer), -27.4 ± 0.1 kcal/mol (tetramer), -35.9 ± 0.3 kcal/mol (pentamer), -46.2 ± 0.3 kcal/mol (prism hexamer), -45.9 ± 0.3 kcal/mol (cage hexamer), -45.4 ± 0.3 kcal/mol (book hexamer), -44.3 ± 0.3 kcal/mol (ring hexamer), -73.0 ± 0.5 kcal/mol (D2d octamer), and -72.9 ± 0.5 kcal/mol (S4 octamer). We have found that the percentage of both the uncorrected (De) and basis set superposition error-corrected (De(CP)) binding energies recovered with respect to the CBS limit falls into a narrow range on either side of the CBS limit for each basis set for all clusters. In addition, this range decreases upon increasing the basis set. Relatively accurate estimates (within set) or the "12, 12" (for the AVTZ, AVQZ, and AV5Z sets) mixing ratio between De and De(CP). These mixing ratios are determined via a least-mean-squares approach from a dataset that encompasses clusters of various sizes. Based on those findings, we propose an accurate and efficient computational protocol that can presently be used to estimate accurate binding energies of water clusters containing up to 30 molecules (for CCSD(T)) and up to 100 molecules (for MP2).
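A standard building block behind CBS estimates like the ones above is the two-point inverse-cube extrapolation for correlation energies. The sketch below shows that generic formula only; it is not necessarily the mixing-ratio protocol proposed in the paper, and the energies in the example are hypothetical numbers for illustration.

```python
# Generic two-point CBS extrapolation assuming E(X) = E_CBS + A / X^3,
# where X, Y are the cardinal numbers of two correlation-consistent basis
# sets (e.g., X = 3 for AVTZ, Y = 4 for AVQZ). Illustrative sketch only.
def cbs_extrapolate(e_x: float, x: int, e_y: float, y: int) -> float:
    """Eliminate A between E(X) and E(Y) to recover the CBS limit."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Hypothetical correlation energies (hartree), invented for this example.
e_cbs = cbs_extrapolate(-0.300, 3, -0.320, 4)
```

As a sanity check, if the input energies follow the assumed X^-3 form exactly, the formula recovers the CBS limit to machine precision; in practice the residual basis-set dependence is what motivates the paper's error bars and mixing-ratio analysis.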
Barlocco, Daniela; Cignarella, Giorgio; Greco, Giovanni; Novellino, Ettore
1993-10-01
Molecular modeling studies were carried out on a set of piperazine and 3,8-diazabicyclo[3.2.1]octane derivatives with the aim to highlight the main factors modulating their affinity for the μ-opioid receptor. Structure-affinity relationships were developed with the aid of molecular mechanics and semiempirical quantum-mechanics methods. According to our proposed pharmacodynamic model, the binding to the μ-receptor is promoted by the following physico-chemical features: the presence of hydrocarbon fragments on the nitrogen ring frame capable of interacting with one of two hypothesized hydrophobic receptor pockets; a `correct' orientation of an N-propionyl side chain so as to avoid a sterically hindered region of the receptor; the possibility of accepting a hydrogen bond from a receptor site complementary to the morphine phenol oxygen.
Bruno Barras
2010-01-01
This work is about formalizing models of various type theories of the Calculus of Constructions family. Here we focus on set-theoretical models. The long-term goal is to build a formal set-theoretical model of the Calculus of Inductive Constructions, so we can be sure that Coq is consistent with the language used by most mathematicians. One aspect of this work is to axiomatize several set theories: ZF, possibly with inaccessible cardinals, and HF, the theory of hereditarily finite sets. On top of these theories we have developed a piece of the usual set-theoretical construction of functions, ordinals, and fixpoint theory. We have then proved sound several models of the Calculus of Constructions, its extension with an infinite hierarchy of universes, and its extension with the inductive type of natural numbers, where recursion follows the type-based termination approach. The other aspect is to try to discharge (most of) these assumptions. The goal here is rather to compare the theoretical strengths of all these formalisms. As already noticed by Werner, the replacement axiom of ZF in its general form seems to require a type-theoretical axiom of choice (TTAC).
Convex Hulls of Algebraic Sets
Gouveia, João
2010-01-01
This article describes a method to compute successive convex approximations of the convex hull of a set of points in R^n that are the solutions to a system of polynomial equations over the reals. The method relies on sums of squares of polynomials and the dual theory of moment matrices. The main feature of the technique is that all computations are done modulo the ideal generated by the polynomials defining the set to be convexified. This work was motivated by questions raised by Lovász concerning extensions of the theta body of a graph to arbitrary real algebraic varieties, and hence the relaxations described here are called theta bodies. The convexification process can be seen as an incarnation of Lasserre's hierarchy of convex relaxations of a semialgebraic set in R^n. When the defining ideal is real radical the results become especially nice. We provide several examples of the method and discuss convergence issues. Finite convergence, especially after the first step of the method, can be described expl...
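The simplest degenerate instance of the problem is instructive: when the polynomial system has only finitely many real solutions, its convex hull can be computed directly, with no moment-matrix machinery. The sketch below (illustrative only, far short of the theta-body hierarchy the paper develops) takes the real variety of {x^2 - 1 = 0, y^2 - 1 = 0}, i.e. the four points (±1, ±1), and computes its hull with Andrew's monotone chain.

```python
# 2-D convex hull via Andrew's monotone chain, applied to the finite real
# variety of {x^2 = 1, y^2 = 1}. Illustrative sketch; the paper's method
# handles infinite varieties via sums of squares and moment matrices.
def convex_hull(points):
    """Return the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Real solutions of x^2 - 1 = 0, y^2 - 1 = 0: the corners of a square.
variety = [(x, y) for x in (-1.0, 1.0) for y in (-1.0, 1.0)]
hull = convex_hull(variety)
```

For varieties with infinitely many points this enumeration is unavailable, which is exactly why the paper works with the defining ideal instead of the point set.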
Pegden, Wesley
2011-01-01
The erosion of a set X in Euclidean space by a radius r>0 is the subset of X consisting of points at distance at least r from the complement of X. A set is resilient to erosion if it is similar to its erosion by some positive radius. We give a somewhat surprising characterization of resilient sets, consisting in one part of simple geometric constraints on convex resilient sets, and, in another, a correspondence between nonconvex resilient sets and scale-invariant (e.g., 'exact fractal') sets.
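The erosion operation in the abstract above has a direct discrete analogue. The following is a minimal brute-force sketch of erosion on a grid approximation of a planar set; it is an illustration of the definition, not the paper's method, and the function name and grid conventions are my own.

```python
import numpy as np

def erode(mask, r):
    # Erosion of a binary set by radius r on a grid: a cell survives
    # only if every cell within Euclidean distance r of it lies inside
    # the set (cells outside the array count as complement).
    h, w = mask.shape
    rr = int(np.ceil(r))
    out = np.zeros_like(mask, dtype=bool)
    for y in range(h):
        for x in range(w):
            keep = True
            for dy in range(-rr, rr + 1):
                for dx in range(-rr, rr + 1):
                    if dy * dy + dx * dx > r * r:
                        continue  # outside the r-ball
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < h and 0 <= xx < w and mask[yy, xx]):
                        keep = False
            out[y, x] = keep
    return out
```

Eroding a filled 5x5 square by r=1 leaves its 3x3 interior, matching the intuition that erosion peels off a boundary layer of thickness r.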
Morozov, Albert D; Dragunov, Timothy N; Malysheva, Olga V
1999-01-01
This book deals with the visualization and exploration of invariant sets (fractals, strange attractors, resonance structures, patterns etc.) for various kinds of nonlinear dynamical systems. The authors have created a special Windows 95 application called WInSet, which allows one to visualize the invariant sets. A WInSet installation disk is enclosed with the book.The book consists of two parts. Part I contains a description of WInSet and a list of the built-in invariant sets which can be plotted using the program. This part is intended for a wide audience with interests ranging from dynamical
高振桥
2002-01-01
If you work with a computer, it is certain that you cannot avoid dealing with at least one computer virus. But how much do you know about it? Well, actually, a computer virus is not a biological one that causes illnesses to people. It is a kind of computer program
2016-05-01
A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers on demand. In this article, we describe the grid computing model and enumerate the major differences between grid and cloud computing.
Spatial Computing and Spatial Practices
Brodersen, Anders; Büsher, Monika; Christensen, Michael;
2007-01-01
The gathering momentum behind the research agendas of pervasive, ubiquitous and ambient computing, set in motion by Mark Weiser (1991), offer dramatic opportunities for information systems design. They raise the possibility of "putting computation where it belongs" by exploding computing power out...... the "disappearing computer" we have, therefore, carried over from previous research an interdisciplinary perspective, and a focus on the sociality of action (Suchman 1987)....
U.S. Department of Health & Human Services — The VSAC provides downloadable access to all official versions of vocabulary value sets contained in the 2014 Clinical Quality Measures (CQMs). Each value set...
National Oceanic and Atmospheric Administration, Department of Commerce — This set expands the topics included in Set 1 and includes (in addition to landslides) rockfalls, rock avalanches, mud flows, debris flows, slumps, creep, and...
Department of Transportation — The Altimeter Setting Indicator (ASI) is an aneroid system used at airports to provide an altimeter setting for aircraft altimeters. This indicator may be an analog...
Settings for Suicide Prevention
Find out more about suicide prevention in particular settings: American Indian/Alaska Native settings; behavioral health care (inpatient mental health, outpatient mental health, substance abuse treatment); colleges and universities; communities; crisis centers/ ...
Ulmann, Bernd
2013-01-01
This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.
Hussain, Shazia; Taylor, Martina; Waltermaurer, Eve; McCauley, Jeanne; Ford, Daniel E; Campbell, Jacquelyn C; McNutt, Louise-Anne
2007-07-01
Obesity, a major public health problem, is the key modifiable component of diabetes risk. Addressing obesity and diabetes risk during primary care visits is recommended but, because of time constraints, is often difficult for health care providers to do. The purpose of this study was to determine whether technology can streamline risk assessment and leave more time to educate patients. We also tested the validity of self-reported weight in assessing diabetes risk. We recruited English-speaking women aged 18 to 44 years who came to a clinic for medical appointments from July through October 2003. Study participants completed a self-administered computer questionnaire that collected the following data: weight, height, family history of diabetes, level of exercise, amount of television time, and daily servings of fruits and vegetables. Self-reported and scale-measured weights were compared to determine the effect of self-reported weight on results of the American Diabetes Association's Diabetes Risk Test (DRT). In determining the sensitivity and specificity of self-reported weight, we used scale measurements as the standard. Complete data were collected on 231 women, including 214 women without a history of a diabetes diagnosis. Compared with DRT results (determined by scale-measured weight), questionnaire results (determined by self-reported weight) had sensitivities of 93.9% (95% confidence interval [CI], 85.2%-97.6%) for high risk for diabetes and 90.4% (95% CI, 83.3%-94.7%) for moderate risk. The specificity of the self-administered DRT for any diabetes risk was 97.8% (95% CI, 88.4%-99.6%). About half the women reported discussing nutrition and exercise with their health care providers. Health care professionals can provide personalized diabetes education and counseling on the basis of information collected by self-administered computerized questionnaires. In general, patients provided a self-reported weight that did not substantially bias estimates of diabetes
Barthel, D; Fischer, K I; Nolte, S; Otto, C; Meyrose, A-K; Reisinger, S; Dabs, M; Thyen, U; Klein, M; Muehlan, H; Ankermann, T; Walter, O; Rose, M; Ravens-Sieberer, U
2016-03-01
To describe the implementation process of a computer-adaptive test (CAT) for measuring health-related quality of life (HRQoL) of children and adolescents in two pediatric clinics in Germany. The study focuses on the feasibility and user experience with the Kids-CAT, particularly the patients' experience with the tool and the pediatricians' experience with the Kids-CAT Report. The Kids-CAT was completed by 312 children and adolescents with asthma, diabetes or rheumatoid arthritis. The test was applied during four clinical visits over a 1-year period. A feedback report with the test results was made available to the pediatricians. To assess both feasibility and acceptability, a multimethod research design was used. To assess the patients' experience with the tool, the children and adolescents completed a questionnaire. To assess the clinicians' experience, two focus groups were conducted with eight pediatricians. The children and adolescents indicated that the Kids-CAT was easy to complete. All pediatricians reported that the Kids-CAT was straightforward and easy to understand and integrate into clinical practice; they also expressed that routine implementation of the tool would be desirable and that the report was a valuable source of information, facilitating the assessment of self-reported HRQoL of their patients. The Kids-CAT was considered an efficient and valuable tool for assessing HRQoL in children and adolescents. The Kids-CAT Report promises to be a useful adjunct to standard clinical care with the potential to improve patient-physician communication, enabling pediatricians to evaluate and monitor their young patients' self-reported HRQoL.
Vallgårda, Anna K. A.; Redström, Johan
2007-01-01
Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design...... and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view......, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture....
2000-01-01
Computational chemistry has come of age. With significant strides in computer hardware and software over the last few decades, computational chemistry has achieved full partnership with theory and experiment as a tool for understanding and predicting the behavior of a broad range of chemical, physical, and biological phenomena. The Nobel Prize award to John Pople and Walter Kohn in 1998 highlighted the importance of these advances in computational chemistry. With massively parallel computers ...
Suppes, Patrick
1972-01-01
This clear and well-developed approach to axiomatic set theory is geared toward upper-level undergraduates and graduate students. It examines the basic paradoxes and history of set theory and advanced topics such as relations and functions, equipollence, finite sets and cardinal numbers, rational and real numbers, and other subjects. 1960 edition.
Rodríguez, J. Tinguaro; Franco de los Ríos, Camilo; Gómez, Daniel
2015-01-01
In this paper we want to stress the relevance of paired fuzzy sets, as already proposed in previous works of the authors, as a family of fuzzy sets that offers a unifying view for different models based upon the opposition of two fuzzy sets, simply allowing the existence of different types...
Baker, Mark; Beltran, Jane; Buell, Jason; Conrey, Brian; Davis, Tom; Donaldson, Brianna; Detorre-Ozeki, Jeanne; Dibble, Leila; Freeman, Tom; Hammie, Robert; Montgomery, Julie; Pickford, Avery; Wong, Justine
2013-01-01
Sets in the game "Set" are lines in a certain four-dimensional space. Here we introduce planes into the game, leading to interesting mathematical questions, some of which we solve, and to a wonderful variation on the game "Set," in which every tableau of nine cards must contain at least one configuration for a player to pick up.
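The abstract's observation that "sets" are lines in a four-dimensional space over three symbols has a one-line computational test: three cards form a set exactly when each attribute is all-same or all-different, i.e., when every coordinate sums to 0 mod 3. A minimal sketch (card encoding as 4-tuples over {0, 1, 2} is my own convention):

```python
def is_set(c1, c2, c3):
    # Three cards form a "set" iff each coordinate sums to 0 mod 3,
    # which holds exactly when the attribute values are all equal or
    # all different -- i.e., the cards lie on a line in F_3^4.
    return all((a + b + c) % 3 == 0 for a, b, c in zip(c1, c2, c3))
```

For example, (0,1,2,0), (0,2,1,0), (0,0,0,0) is a set: the first attribute is all-same and the others are all-different.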
Said Broumi; Florentin Smarandache; Mamoni Dhar
2013-01-01
Both neutrosophic set theory and rough set theory are emerging as powerful tools for managing uncertain, indeterminate, incomplete and imprecise information. In this paper we develop a hybrid structure called rough neutrosophic sets and study their properties.
Beeping a Maximal Independent Set
Afek, Yehuda; Alon, Noga; Bar-Joseph, Ziv; Cornejo, Alejandro; Haeupler, Bernhard; Kuhn, Fabian
2012-01-01
We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot...
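For contrast with the distributed beeping protocol studied in the abstract above, here is the standard centralized greedy baseline for computing a maximal independent set; this is a sketch of the MIS notion itself, not of the authors' carrier-sensing algorithm, and the adjacency-dict representation is my own choice.

```python
def greedy_mis(adj):
    # Centralized greedy MIS: scan nodes in a fixed order, add each
    # node not yet blocked, and block it together with its neighbours.
    # `adj` maps each node to the set of its neighbours.
    mis, blocked = set(), set()
    for v in sorted(adj):
        if v not in blocked:
            mis.add(v)
            blocked |= adj[v] | {v}
    return mis
```

On the path 0-1-2-3 this yields {0, 2}: independent (no two chosen nodes are adjacent) and maximal (every unchosen node has a chosen neighbour).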
张涛; 刘晓华; 江亿
2011-01-01
A model is established to analyze the heat and moisture transfer process in an air-water or air-hygroscopic-solution system; two mutually independent driving forces, the enthalpy difference and the relative humidity difference, can be obtained. A triangular region, bounded by the iso-enthalpy line through the air inlet state, the saturation line of moist air (or the iso-concentration line of the solution) and the line connecting the inlet states of the air and the water or solution, is determined in the enthalpy-humidity chart based on the relationship between the two driving forces. This region is the reachable handling region for the heat and moisture transfer process in an air-water or air-hygroscopic-solution system, because the air outlet state is confined to the region regardless of changes in the mass and heat transfer coefficients, the flow pattern or the flow rates. Experimental results on dehumidification and regeneration from the literature are analyzed with the help of the reachable handling region, clarifying the performance level of the experimental devices.
Finding Similar/Diverse Solutions in Answer Set Programming
Eiter, Thomas; Erdogan, Halit; Fink, Michael
2011-01-01
For some computational problems (e.g., product configuration, planning, diagnosis, query answering, phylogeny reconstruction) computing a set of similar/diverse solutions may be desirable for better decision-making. With this motivation, we studied several decision/optimization versions of this problem in the context of Answer Set Programming (ASP), analyzed their computational complexity, and introduced offline/online methods to compute similar/diverse solutions of such computational problems with respect to a given distance function. All these methods rely on the idea of computing solutions to a problem by means of finding the answer sets for an ASP program that describes the problem. The offline methods compute all solutions in advance using the ASP formulation of the problem with an ASP solver, like Clasp, and then identify similar/diverse solutions using clustering methods. The online methods compute similar/diverse solutions following one of the three approaches: by reformulating the ASP representation ...
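The abstract's notion of picking diverse solutions with respect to a given distance function can be illustrated by the simple greedy max-min selection heuristic below. This is a generic sketch of diversity selection under an arbitrary distance, not the paper's ASP-based offline or online methods; the function name and the choice of seeding with the first solution are my own.

```python
def diverse_subset(solutions, k, dist):
    # Greedy max-min diversity: start from the first solution and
    # repeatedly add the candidate whose minimum distance to the
    # already-chosen solutions is largest.
    chosen = [solutions[0]]
    while len(chosen) < k:
        best = max((s for s in solutions if s not in chosen),
                   key=lambda s: min(dist(s, c) for c in chosen))
        chosen.append(best)
    return chosen
```

For points 0, 1, 2, 10 on a line with absolute-difference distance, asking for two diverse solutions returns the extremes 0 and 10.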
Enderton, Herbert B
1977-01-01
This is an introductory undergraduate textbook in set theory. In mathematics these days, essentially everything is a set. Some knowledge of set theory is a necessary part of the background everyone needs for further study of mathematics. It is also possible to study set theory for its own interest; it is a subject with intriguing results about simple objects. This book starts with material that nobody can do without. There is no end to what can be learned of set theory, but here is a beginning.
Sets avoiding integral distances
Kurz, Sascha
2012-01-01
We study open point sets in Euclidean spaces $\\mathbb{R}^d$ without a pair of points an integral distance apart. By a result of Furstenberg, Katznelson, and Weiss such sets must be of Lebesgue upper density zero. We are interested in how large such sets can be in $d$-dimensional volume. We determine the lower and upper bounds for the volumes of the sets in terms of the number of their connected components and dimension, and also give some exact values. Our problem can be viewed as a kind of inverse to known problems on sets with pairwise rational or integral distances.
Generalization Rough Set Theory
XIAO Di; ZHANG Jun-feng; HU Shou-song
2008-01-01
In order to avoid the discretization required in classical rough set theory, a generalized rough set theory is proposed. First, the degree of general importance of an attribute and of attribute subsets is presented. Then, depending on the degree of general importance of an attribute, the space distance can be measured with a weighted method. Finally, a generalized rough set theory based on the general near-neighborhood relation is proposed. The proposed theory partitions the universe into tolerant modules and forms the lower and upper approximations of a set under the general near-neighborhood relationship, which avoids the discretization in Pawlak's rough set theory.
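The lower and upper approximations that the abstract generalizes are easy to state concretely in the classical Pawlak setting: given an equivalence relation on the universe, the lower approximation is the union of equivalence classes fully contained in the target set, and the upper approximation is the union of classes that intersect it. A minimal sketch (the function name and the representation of the relation as a class-labeling function are my own):

```python
def approximations(universe, equiv, target):
    # Pawlak rough set approximations: `equiv` maps each element to
    # the label of its equivalence class; `target` is the set to be
    # approximated.
    classes = {}
    for x in universe:
        classes.setdefault(equiv(x), set()).add(x)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:          # class entirely inside the target
            lower |= cls
        if cls & target:           # class meets the target
            upper |= cls
    return lower, upper
```

With universe {0,...,5} partitioned into pairs by x // 2 and target {0, 1, 2}, the lower approximation is {0, 1} and the upper is {0, 1, 2, 3}; the gap between them is the boundary region that rough set theory quantifies.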
Neuroscience, brains, and computers
Giorno Maria Innocenti
2013-07-01
This paper addresses the role of the neurosciences in establishing what the brain is and how states of the brain relate to states of the mind. The brain is viewed as a computational device performing operations on symbols. However, the brain is a special-purpose computational device designed by evolution and development for survival and reproduction, in close interaction with the environment. The hardware of the brain (its structure) is very different from that of man-made computers. The computational style of the brain is also very different from traditional computers: the computational algorithms, instead of being sets of external instructions, are embedded in brain structure. Concerning the relationships between brain and mind, a number of questions lie ahead. One of them is why and how only the human brain grasped the notion of God, probably only at the evolutionary stage attained by Homo sapiens.
2007-01-01
The 2007 CERN School of Computing, organised by CERN in collaboration with the University of Split (FESB) will be held from 20 to 31 August 2007 in Dubrovnik, Croatia. It is aimed at postgraduate students and research workers with a few years' experience in scientific physics, computing or related fields. Special themes this year are: GRID Technologies: The Grid track delivers unique theoretical and hands-on education on some of the most advanced GRID topics; Software Technologies: The Software track addresses some of the most relevant modern techniques and tools for large scale distributed software development and handling as well as for computer security; Physics Computing: The Physics Computing track focuses on informatics topics specific to the HEP community. After setting-the-scene lectures, it addresses data acquisition and ROOT. Grants from the European Union Framework Programme 6 (FP6) are available to participants to cover part or all of the cost of the School. More information can be found at...
Fostering Computational Thinking
Caballero, Marcos D; Schatz, Michael F
2011-01-01
Students taking introductory physics are rarely exposed to computational modeling. In a one-semester large lecture introductory calculus-based mechanics course at Georgia Tech, students learned to solve physics problems using the VPython programming environment. During the term 1357 students in this course solved a suite of fourteen computational modeling homework questions delivered using an online commercial course management system. Their proficiency with computational modeling was evaluated in a proctored environment using a novel central force problem. The majority of students (60.4%) successfully completed the evaluation. Analysis of erroneous student-submitted programs indicated that a small set of student errors explained why most programs failed. We discuss the design and implementation of the computational modeling homework and evaluation, the results from the evaluation and the implications for instruction in computational modeling in introductory STEM courses.
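The kind of computational modeling exercise described above, integrating motion under a central force, can be sketched in a few lines. This is a generic Euler-Cromer (semi-implicit Euler) step for an inverse-square force, not the course's actual VPython assignment; the function name and units (GM = 1) are my own assumptions.

```python
import math

def step(pos, vel, dt, GM=1.0):
    # One Euler-Cromer update for an inverse-square central force:
    # update velocity from the acceleration at the current position,
    # then update position with the *new* velocity.
    x, y = pos
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)
```

Starting from the circular-orbit initial conditions pos = (1, 0), vel = (0, 1), the radius stays close to 1 over many steps, since Euler-Cromer (unlike plain Euler) does not systematically pump energy into the orbit.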
Unconditionally verifiable blind computation
Fitzsimons, Joseph F
2012-01-01
Blind Quantum Computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output and computation remain private. Recently the authors together with Broadbent proposed a universal unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol, or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system leading to consequences for complexity theory. In this paper we extend the BQC protocol presented in [Broadbent, Fitzsimons and Kashefi, FOCS 2009 p517] with new functionality allowing blind computational basis m...
Arrighi, P; Arrighi, Pablo; Salvail, Louis
2003-01-01
We investigate the possibility of having someone carry out the work of executing a function for you, but without letting him learn anything about your input. Say Alice wants Bob to compute some well-known function f upon her input x, but wants to prevent Bob from learning anything about x. The situation arises for instance if client Alice has limited computational resources in comparison with mistrusted server Bob, or if x is an inherently mobile piece of data. Could there be a protocol whereby Bob is forced to compute f(x) "blindly", i.e. without observing x? We provide such a blind computation protocol for the class of functions which admit an efficient procedure to generate random input-output pairs, e.g. factorization. The setting is quantum, the security is unconditional, the eavesdropper is as malicious as can be. Keywords: Secure Circuit Evaluation, Secure Two-party Computation, Information Hiding, Information gain vs disturbance.
Duality Computing in Quantum Computers
LONG Gui-Lu; LIU Yang
2008-01-01
In this letter, we propose a duality computing mode, which resembles the particle-wave duality property when a quantum system, such as a quantum computer, passes through a double slit. In this mode, computing operations are not necessarily unitary. The duality mode provides a natural link between classical computing and quantum computing. In addition, the duality mode provides a new tool for quantum algorithm design.
无
2002-01-01
This paper presents a general framework for computational manufacturing. The methodology of computational manufacturing aims at integrating computational geometry, machining principle, sensor information fusion, optimization, computational intelligence and virtual prototyping to solve problems of the modeling, reasoning, control, planning and scheduling of manufacturing processes and systems. There are three typical problems in computational manufacturing, i.e., scheduling (time-domain), geometric reasoning (space-domain) and decision-making (interaction between time-domain and space-domain). Some theoretical fundamentals of computational manufacturing are also discussed.
Linearization functors on real convex sets
Velasco, Mauricio
2012-01-01
We prove that linearizing certain families of polynomial optimization problems leads to new functorial operations in real convex sets. We show that under some conditions these operations can be computed or approximated in ways amenable to efficient computation. These operations are convex analogues of Hom functors, tensor products, symmetric powers, exterior powers and general Schur functors on vector spaces and lead to novel constructions even for polyhedra.
On Software Development of Characteristic Set Method
WU Yong-wei; WANG Ding-kang; YANG Hong; LIN Dong-dai
2002-01-01
The characteristic set method for polynomial equation solving has been widely adopted, and its implementation in software has received increasing attention in recent years. Several packages for the method are implemented in computer algebra systems such as REDUCE and Maple. In order to improve the efficiency of the method, we have developed a computer algebra system, "ELIMINO", written in the C language and implemented on the Linux operating system on a PC. The authors wish to share with the reader the knowledge and experience gained in the design and development of the software package for the characteristic set method.
Porzel, Robert
2011-01-01
This book uses the latest in knowledge representation and human-computer interaction to address the problem of contextual computing in artificial intelligence. It uses high-level context to solve some challenging problems in natural language understanding.
Pavelle, Richard; And Others
1981-01-01
Describes the nature and use of computer algebra and its applications to various physical sciences. Includes diagrams illustrating, among others, a computer algebra system and flow chart of operation of the Euclidean algorithm. (SK)
Siebert, B.R.L.; Thomas, R.H.
1996-01-01
The paper presents a definition of the term "Computational Dosimetry", which is interpreted as the sub-discipline of computational physics devoted to radiation metrology. It is shown that computational dosimetry is more than a mere collection of computational methods. Computational simulations directed at basic understanding and modelling are important tools provided by computational dosimetry, while another very important application is the support that it can give to the design, optimization and analysis of experiments. However, the primary task of computational dosimetry is to reduce the variance in the determination of absorbed dose (and its related quantities), for example in the disciplines of radiological protection and radiation therapy. In this paper emphasis is given to the discussion of potential pitfalls in the applications of computational dosimetry and recommendations are given for their avoidance. The need for comparison of calculated and experimental data whenever possible is strongly stressed.
Nygaard, Jens Vinge
2017-01-01
The Health Technology Program at Aarhus University applies computational biology to investigate the heterogeneity of tumours.
Li, Shu-Shen; Long, Gui-lu; Bai, Feng-Shan; Feng, Song-Lin; Zheng, Hou-Zhi
2001-01-01
Quantum computing is a quickly growing research field. This article introduces the basic concepts of quantum computing, recent developments in quantum searching, and decoherence in a possible quantum dot realization.
K. Shalini
2013-01-01
Green computing is all about using computers in a smarter and more eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.
Turner, Raymond
2009-01-01
Computational models can be found everywhere in present day science and engineering. In providing a logical framework and foundation for the specification and design of specification languages, Raymond Turner uses this framework to introduce and study computable models. In doing so he presents the first systematic attempt to provide computational models with a logical foundation. Computable models have wide-ranging applications from programming language semantics and specification languages, through to knowledge representation languages and formalism for natural language semantics. They are al
Brier, Søren
2014-01-01
Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) Its basic concept of natural computing has neither been defined theoretically nor implemented practically. (2) It cannot en...... cybernetics and Maturana and Varela’s theory of autopoiesis, which are both erroneously taken to support info-computationalism....
Computing fundamentals introduction to computers
Wempen, Faithe
2014-01-01
The absolute beginner's guide to learning basic computer skills Computing Fundamentals, Introduction to Computers gets you up to speed on basic computing skills, showing you everything you need to know to conquer entry-level computing courses. Written by a Microsoft Office Master Instructor, this useful guide walks you step-by-step through the most important concepts and skills you need to be proficient on the computer, using nontechnical, easy-to-understand language. You'll start at the very beginning, getting acquainted with the actual, physical machine, then progress through the most common
Quantum Computing for Computer Architects
Metodi, Tzvetan
2011-01-01
Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore
J. A. Tenreiro Machado
2017-02-01
Complex systems (CS) involve many elements that interact at different scales in time and space. The challenges in modeling CS led to the development of novel computational tools with applications in a wide range of scientific areas. The computational problems posed by CS exhibit intrinsic difficulties that are a major concern in Computational Complexity Theory. [...
Vallgårda, Anna K. A.
of the new microprocessors and network technologies. However, the understanding of the computer represented within this program poses a challenge for the intentions of the program. The computer is understood as a multitude of invisible intelligent information devices which confines the computer as a tool...
Ryland, Jane N.
1988-01-01
The microcomputer revolution, in which small and large computers have gained tremendously in capability, has created a distributed computing environment. This circumstance presents administrators with the opportunities and the dilemmas of choosing appropriate computing resources for each situation. (Author/MSE)
Wechsler, Harry
1990-01-01
The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.
Drenning, Susan; Getz, Lou
1992-01-01
Computer Ease is an intergenerational program designed to put an Ohio elementary school's computer lab, software library, staff, and students at the disposal of older adults desiring to become computer literate. Three 90-minute instructional sessions allow seniors to experience 1-to-1 high-tech instruction by enthusiastic, nonthreatening…
Optimized Set of RST Moment Invariants
Khalid M. Hosny
2008-01-01
Moment invariants are widely used in image processing, pattern recognition and computer vision. Several methods and algorithms have been proposed for fast and efficient calculation of moment invariants, where numerical approximation errors are involved in most of these methods. In this paper, an optimized set of moment invariants with respect to rotation, scaling and translation is presented. An accurate method is used for exact computation of moment invariants for gray-level images. A fast algorithm is applied to accelerate the process of computation. Error analysis is presented and a comparison with other conventional methods is performed. The obtained results demonstrate the superiority of the proposed method.
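The RST-invariant moments discussed above are conventionally built from normalized central moments; the sketch below computes the first two of Hu's classical seven invariants. This illustrates the standard construction, not the paper's optimized computation scheme, and the function name is my own.

```python
import numpy as np

def hu_first_two(img):
    # First two of Hu's seven RST-invariant moments, from normalized
    # central moments eta_pq = mu_pq / mu_00^((p+q)/2 + 1) of a
    # grey-level image.
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    cy, cx = (ys * img).sum() / m00, (xs * img).sum() / m00
    def mu(p, q):
        # central moment of order (p, q), about the centroid
        return (((xs - cx) ** p) * ((ys - cy) ** q) * img).sum()
    def eta(p, q):
        return mu(p, q) / m00 ** ((p + q) / 2 + 1)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

Because central moments are taken about the centroid, translating the image by whole pixels leaves both invariants unchanged, which is the property the "T" in RST refers to.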
Set Reconciliation in Two Rounds of Communication
2014-06-01
Communications, and Information Networks. Alternate Topic 2: Modeling and Simulation. Name of Author(s): Ryan Gabrys and Ayodeji Coker. POC Name: Ryan... STACS, 1990. [12] Y. Minsky and A. Trachtenberg, “Practical set reconciliation,” Tech. Rep., Department of Electrical and Computer Engineering...Boston University, 2002. [13] Y. Minsky, A. Trachtenberg, R. Zippel, “Set reconciliation with nearly optimal communication complexity,” IEEE Trans...
Applications of interval computations
Kreinovich, Vladik
1996-01-01
Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...
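The set-based arithmetic described above can be sketched in a few lines; note that genuinely verified computation additionally requires outward (directed) rounding, which this plain-float sketch omits, and the class name is our own:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Closed interval [lo, hi]; each operation returns an enclosure of
    every possible result of applying it to members of the operands."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # smallest result: my low end minus the other's high end
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # sign changes make any corner product a candidate extreme
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))
```

For example, `Interval(-1, 2) * Interval(3, 4)` yields `Interval(-4, 8)`, enclosing every product of a number in [-1, 2] with a number in [3, 4].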
LiMin
2003-01-01
All mineral mining in China now has a set road to follow, and straying off its path will attract severe penalties. The country's first-round programs for provincial mineral resources exploitation took effect in mid-January, setting output goals and designating mining regions.
Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.
The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The M...
Moschovakis, YN
1987-01-01
Now available in paperback, this monograph is a self-contained exposition of the main results and methods of descriptive set theory. It develops all the necessary background material from logic and recursion theory, and treats both classical descriptive set theory and the effective theory developed by logicians.
CERN. Geneva
2008-01-01
What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...
Blum, Edward K
2011-01-01
Computer Science: The Hardware, Software and Heart of It focuses on the deeper aspects of the two recognized subdivisions of Computer Science, Software and Hardware. These subdivisions are shown to be closely interrelated as a result of the stored-program concept. Computer Science: The Hardware, Software and Heart of It includes certain classical theoretical computer science topics such as Unsolvability (e.g. the halting problem) and Undecidability (e.g. Godel's incompleteness theorem) that treat problems that exist under the Church-Turing thesis of computation. These problem topics explain in
Bioinformatics and Computational Core Technology Center
Federal Laboratory Consortium — SERVICES PROVIDED BY THE COMPUTER CORE FACILITY: Evaluation, purchase, set up, and maintenance of the computer hardware and network for the 170 users in the research...
Economic communication model set
Zvereva, Olga M.; Berg, Dmitry B.
2017-06-01
This paper details findings from research targeted at the investigation of economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model, being based on the general concept, has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origins were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and the real set was constructed from statistical data. During the simulation experiments, the communication process was observed in dynamics, and system macroparameters were estimated. This research confirmed that the combination of an agent-based and a mathematical model can produce a synergetic effect.
Computer Science Research: Computation Directorate
Durst, M.J. (ed.); Grupe, K.F. (ed.)
1988-01-01
This report contains short papers in the following areas: large-scale scientific computation; parallel computing; general-purpose numerical algorithms; distributed operating systems and networks; knowledge-based systems; and technology information systems.
Rosenthal, L E
1986-10-01
Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.
Smith, Paul H.
1988-01-01
The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.
Computer Workstation: Pointer/Mouse
... when evaluating your computer workstation. Pointer Placement; Pointer Size, Shape, and Settings; Pointer/Mouse Quick Tips. Keep ...
Computing fundamentals digital literacy edition
Wempen, Faithe
2014-01-01
Computing Fundamentals has been tailor-made to help you get up to speed on your computing basics and become proficient in entry-level computing skills. Covering all the key topics, it starts at the beginning and takes you through basic set-up so that you'll be competent on a computer in no time. You'll cover: Computer Basics & Hardware; Software; Introduction to Windows 7; Microsoft Office; Word Processing with Microsoft Word 2010; Creating Spreadsheets with Microsoft Excel; Creating Presentation Graphics with PowerPoint; Connectivity and Communication; Web Basics; Network and Internet Privacy and Securit...
Computer Literacy: Teaching Computer Ethics.
Troutner, Joanne
1986-01-01
Suggests learning activities for teaching computer ethics in three areas: (1) equal access; (2) computer crime; and (3) privacy. Topics include computer time, advertising, class enrollments, copyright law, sabotage ("worms"), the Privacy Act of 1974 and the Freedom of Information Act of 1966. (JM)
Computational Chemistry Comparison and Benchmark Database
SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access) The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.
Medical Image Computing and Computer-Assisted Intervention - MICCAI 2006
Nielsen, Mads; Sporring, Jon
The two-volume set LNCS 4190 and LNCS 4191 constitute the refereed proceedings of the 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006, held in Copenhagen, Denmark in October 2006. The program committee carefully selected 39 revised full papers...
On Time with Minimal Expected Cost!
David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand
2014-01-01
...) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing...... reachability strategies that will both ensure worst-case time bounds as well as provide (near-) minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, which exhibit several orders of magnitude improvements w...
Levy, Azriel
2002-01-01
An advanced-level treatment of the basics of set theory, this text offers students a firm foundation, stopping just short of the areas employing model-theoretic methods. Geared toward upper-level undergraduate and graduate students, it consists of two parts: the first covers pure set theory, including the basic notions, order and well-foundedness, cardinal numbers, the ordinals, and the axiom of choice and some of its consequences; the second deals with applications and advanced topics such as point set topology, real spaces, Boolean algebras, and infinite combinatorics and large cardinals. An
Combinatorics of set partitions
Mansour, Toufik
2012-01-01
Focusing on a very active area of mathematical research in the last decade, Combinatorics of Set Partitions presents methods used in the combinatorics of pattern avoidance and pattern enumeration in set partitions. Designed for students and researchers in discrete mathematics, the book is a one-stop reference on the results and research activities of set partitions from 1500 A.D. to today. Each chapter gives historical perspectives and contrasts different approaches, including generating functions, kernel method, block decomposition method, generating tree, and Wilf equivalences. Methods and d
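The objects the book enumerates can be generated directly; this recursive sketch (the naming is ours) yields every partition of a list into nonempty blocks, so the count for n elements is the n-th Bell number:

```python
def set_partitions(items):
    """Generate all partitions of `items` into nonempty blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        # put `first` into each existing block in turn...
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        # ...or give it a block of its own
        yield [[first]] + smaller
```

For [1, 2, 3] this produces the 5 partitions counted by the Bell number B_3; pattern-avoidance studies like those in the book then filter this raw enumeration.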
Milewski, Emil G
2012-01-01
REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Set Theory includes elementary logic, sets, relations, functions, denumerable and non-denumerable sets, cardinal numbers, Cantor's theorem, axiom of choice, and order relations.
Hyperimmunity and A-computable universal numberings
Issakhov, Assylbek
2016-08-01
Whether there exists a computable universal numbering for a computable family is the key question in the theory of numberings. In a very general setting, this problem was explored in [Yu. L. Ershov, Theory of Numberings, Handbook of Computability Theory, North-Holland; Amsterdam: Stud. Log. Found. Math., Vol. 140, pp. 473-503, 1999]. For sets A that are Turing jumps of the empty set, the problem was treated in [S. A. Badaev, S. S. Goncharov, and A. Sorbi, Computability and Models, 11-44 (2003)] and other papers. In this work, we investigate families of total functions computable relative to hyperimmune and hyperimmune-free oracles.
Rough Sets, Their Extensions and Applications
Qiang Shen; Richard Jensen
2007-01-01
Rough set theory provides a useful mathematical foundation for developing automated computational systems that can help understand and make use of imperfect knowledge. Despite its recency, the theory and its extensions have been widely applied to many problems, including decision analysis, data mining, intelligent control and pattern recognition. This paper presents an outline of the basic concepts of rough sets and their major extensions, covering variable precision, tolerance and fuzzy rough sets. It also shows the diversity of successful applications these theories have enabled, ranging from finance and business, through biology and medicine, to physics, art, and meteorology.
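The lower and upper approximations underlying all of these extensions can be sketched directly; this is a minimal illustration with our own names, taking the indiscernibility relation as an explicit partition:

```python
def rough_approximations(partition, target):
    """Lower and upper rough-set approximations of `target`, where
    `partition` lists the equivalence classes of the indiscernibility relation."""
    target = set(target)
    lower, upper = set(), set()
    for block in partition:
        block = set(block)
        if block <= target:   # block lies certainly inside the concept
            lower |= block
        if block & target:    # block possibly belongs to the concept
            upper |= block
    return lower, upper
```

The boundary region is `upper - lower`; the target concept is "rough" (ambiguous) exactly when this region is nonempty, which is the ambiguity the Physarum-machine entry below also exploits.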
Rough set models of Physarum machines
Pancerz, Krzysztof; Schumann, Andrew
2015-04-01
In this paper, we consider transition system models of behaviour of Physarum machines in terms of rough set theory. A Physarum machine, a biological computing device implemented in the plasmodium of Physarum polycephalum (true slime mould), is a natural transition system. In the behaviour of Physarum machines, one can notice some ambiguity in Physarum motions that influences exact anticipation of states of machines in time. To model this ambiguity, we propose to use rough set models created over transition systems. Rough sets are an appropriate tool to deal with rough (ambiguous, imprecise) concepts in the universe of discourse.
Computer programming and computer systems
Hassitt, Anthony
1966-01-01
Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten
Hougaard, Ole Ildsgaard
1994-01-01
This paper describes the implementation of a formal model for computability theory in the logical system HOL. The computability is modeled through an imperative language formally defined with the use of the Backus-Naur form and natural semantics. I will define the concepts of computable functions...... will then evolve in two directions: The first subject is the reduction of recursive sets, leading to the unsolvability of the halting problem. The other is two general results of computability theory: The s-m-n theorem and Kleene's version of the 2nd recursion theorem. The use of the HOL system implies...... that the theory must be proven with the absence of Church's thesis, and, in fact, all proofs have to be done in detail. The paper will show how the HOL system is used to define the modeling language, as well as demonstrating the interaction between the HOL system and the theory. At some points the HOL system...
Workshop on Computational Optimization
2016-01-01
This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2014, held in Warsaw, Poland, September 7-10, 2014. The book presents recent advances in computational optimization. The volume includes important real problems like parameter settings for controlling processes in bioreactors and other processes, resource-constrained project scheduling, infection distribution, molecule distance geometry, quantum computing, real-time management and optimal control, bin packing, medical image processing, localization of the abrupt atmospheric contamination source and so on. It shows how to develop algorithms for them based on new metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming and others. This research demonstrates how some real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks.
A.A. Baranov
2008-01-01
The wide use of computer information media in the education process and in children's and teenagers' leisure activities sets new tasks for hygienists and physiologists: assessing the influence of computer lessons on health, and substantiating and developing ways of formatting and presenting materials from the viewpoint of legibility and the regulation of teaching modes. In this work, data related to the study of the response of students' optical systems depending on the way the information is presented (on a screen or on a paper medium) and the assessment of the functional status of children and teenagers after computer classes are presented, and an analysis and assessment of modern electronic tutorials are given. Key words: children, reading, computers, electronic tutorials, health.
Ratomir Đ. Đokić
2013-02-01
Digital forensics is a set of scientific methods and procedures for the collection, analysis, and presentation of evidence that can be found on computers, servers, computer networks, databases, mobile devices, and all other devices that can store data. Digital forensics of computer networks is the examination of digital evidence that can be found on servers and user devices and that is exchanged in internal or external communication over local or public networks. There is also a need to identify the sites and modes of origin of messages, establish user identification, and detect types of manipulation involving account log-ins. This paper presents the basic elements of computer networks and the software used to communicate, and describes the methods of collecting digital evidence and their analysis.
Pinski, Sebastian D
2011-01-01
Adiabatic Quantum Computing (AQC) is a relatively new subject in the world of quantum computing, let alone Physics. Inspiration for this project has come from recent controversy around D-Wave Systems in British Columbia, Canada, who claim to have built a working AQC which is now commercially available and hope to be distributing a 1024 qubit chip by the end of 2008. Their 16 qubit chip was demonstrated online for the Supercomputing 2007 conference within which a few small problems were solved; although the explanations that journalists and critics received were minimal and very little was divulged in the question and answer session. This 'unconvincing' demonstration has caused physicists and computer scientists to hit back at D-Wave. The aim of this project is to give an introduction to the historic advances in classical and quantum computing and to explore the methods of AQC. Through numerical simulations an algorithm for the Max Independent Set problem is empirically obtained.
Frontiers of higher order fuzzy sets
Tahayori, Hooman
2015-01-01
Frontiers of Higher Order Fuzzy Sets strives to improve the theoretical aspects of general and Interval Type-2 fuzzy sets and provides a unified representation theorem for higher order fuzzy sets. Moreover, the book elaborates on the concept of gradual elements and their integration with the higher order fuzzy sets. This book also introduces new frameworks for information granulation based on general T2FSs, IT2FSs, gradual elements, shadowed sets and rough sets. In particular, the properties and characteristics of the new proposed frameworks are studied. Such new frameworks are shown to be more readily exploitable in real applications. Higher order fuzzy sets that result from the integration of general T2FSs, IT2FSs, gradual elements, shadowed sets and rough sets will be shown to be suitable for application in the fields of bioinformatics, business, management, ambient intelligence, medicine, cloud computing and smart grids. Presents new variations of fuzzy set frameworks and new areas of applicabili...
Hardware Index to Set Partition Converter
2013-01-01
Boolean matching under permutation by efficient computation of canonical form. IEICE Trans. Fundamentals (12), 3134-3140 (2004). 6. Beeler, M., Gosper... Wesley, ISBN 0-321-58050-8. 9. Kawano, S., Nakano, S.: Constant time generation of set partitions. IEICE Trans. Fundamentals E88-A(4), 930-934 (2005). 10
Polyomino Problems to Confuse Computers
Coffin, Stewart
2009-01-01
Computers are very good at solving certain types of combinatorial problems, such as fitting sets of polyomino pieces into square or rectangular trays of a given size. However, most puzzle-solving programs now in use assume orthogonal arrangements. When one departs from the usual square grid layout, complications arise. The author, using a computer,…
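A hedged sketch of the orthogonal case mentioned above (the names and cell-offset representation are our own): each piece is a list of (x, y) offsets, and a backtracking search tries to cover the tray's first empty cell with some remaining piece. Rotations and reflections are not generated here; each orientation must be passed as a separate piece.

```python
def fit_polyominoes(width, height, pieces):
    """Try to tile a width x height tray with the given polyomino pieces
    (each a list of (x, y) offsets); return a filled grid or None."""
    grid = [[None] * width for _ in range(height)]

    def first_empty():
        for y in range(height):
            for x in range(width):
                if grid[y][x] is None:
                    return x, y
        return None

    def try_place(cells, mark):
        if any(not (0 <= x < width and 0 <= y < height) or grid[y][x] is not None
               for x, y in cells):
            return False
        for x, y in cells:
            grid[y][x] = mark
        return True

    def unplace(cells):
        for x, y in cells:
            grid[y][x] = None

    def backtrack(remaining):
        spot = first_empty()
        if spot is None:
            return not remaining          # solved iff every piece was used
        x, y = spot
        for i, piece in enumerate(remaining):
            for ox, oy in piece:          # anchor each cell of the piece on the hole
                cells = [(x - ox + dx, y - oy + dy) for dx, dy in piece]
                if try_place(cells, i):
                    if backtrack(remaining[:i] + remaining[i + 1:]):
                        return True
                    unplace(cells)
        return False

    return grid if backtrack(list(pieces)) else None
```

Always filling the first empty cell prunes the search heavily, which is why square-grid solvers are fast; it is exactly this structure that is lost in the non-orthogonal layouts the article considers.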
U.S. Department of Health & Human Services — The Healthcare Effectiveness Data and Information Set (HEDIS) is a tool used by more than 90 percent of Americas health plans to measure performance on important...
National Oceanic and Atmospheric Administration, Department of Commerce — This set of slides graphically illustrates the potential danger that major earthquakes pose to school structures and to the children and adults who happen to be...
Lebesgue Sets Immeasurable Existence
Diana Marginean Petrovai
2012-12-01
It is well known that the notions of measure and integral arose early, in close connection with practical problems of measuring geometric figures. The notion of measure was outlined in the early 20th century through the research of H. Lebesgue, founder of the modern theory of measure and integral. A technique for the integration of functions was developed concurrently. Gradually a specific area was formed, today called measure and integral theory. Essential contributions to building this theory were made by a large number of mathematicians: C. Carathéodory, J. Radon, O. Nikodym, S. Bochner, J. Pettis, P. Halmos and many others. In the following we present several abstract sets and classes of sets. There exist sets which are not Lebesgue measurable and sets which are Lebesgue measurable but not Borel measurable. Hence B ⊂ L ⊂ P(X).
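The classical witness for a set that is not Lebesgue measurable is Vitali's construction; a compressed sketch, assuming the axiom of choice, consistent with the abstract's claim:

```latex
% Vitali's non-measurable set (sketch, under the axiom of choice)
On $[0,1]$ define $x \sim y \iff x - y \in \mathbb{Q}$, and let $V$ contain
exactly one representative of each equivalence class (axiom of choice).
For distinct $p, q \in \mathbb{Q} \cap [-1,1]$ the translates $V+p$ and $V+q$
are disjoint, and
\[
  [0,1] \;\subseteq\; \bigcup_{q \in \mathbb{Q} \cap [-1,1]} (V+q) \;\subseteq\; [-1,2].
\]
If $V$ were measurable, translation invariance and countable additivity of the
Lebesgue measure $\lambda$ would give
\[
  1 \;\le\; \sum_{q \in \mathbb{Q} \cap [-1,1]} \lambda(V) \;\le\; 3,
\]
which fails both for $\lambda(V) = 0$ (the sum is $0$) and for $\lambda(V) > 0$
(the sum diverges); hence $V \notin L$.
```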
Computers and clinical arrhythmias.
Knoebel, S B; Lovelace, D E
1983-02-01
Cardiac arrhythmias are ubiquitous in normal and abnormal hearts. These disorders may be life-threatening or benign, symptomatic or unrecognized. Arrhythmias may be the precursor of sudden death, a cause or effect of cardiac failure, a clinical reflection of acute or chronic disorders, or a manifestation of extracardiac conditions. Progress is being made toward unraveling the diagnostic and therapeutic problems involved in arrhythmogenesis. Many of the advances would not be possible, however, without the availability of computer technology. To preserve the proper balance and purposeful progression of computer usage, engineers and physicians have been exhorted not to work independently in this field. Both should learn some of the other's trade. The two disciplines need to come together to solve important problems with computers in cardiology. The intent of this article was to acquaint the practicing cardiologist with some of the extant and envisioned computer applications and some of the problems with both. We conclude that computer-based database management systems are necessary for sorting out the clinical factors of relevance for arrhythmogenesis, but computer database management systems are beset with problems that will require sophisticated solutions. The technology for detecting arrhythmias on routine electrocardiograms is quite good but human over-reading is still required, and the rationale for computer application in this setting is questionable. Systems for qualitative, continuous monitoring and review of extended time ECG recordings are adequate with proper noise rejection algorithms and editing capabilities. The systems are limited presently for clinical application to the recognition of ectopic rhythms and significant pauses. Attention should now be turned to the clinical goals for detection and quantification of arrhythmias. We should be asking the following questions: How quantitative do systems need to be? Are computers required for the detection of
Würtz, Rolf P
2008-01-01
Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease in which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.
Steane, A M
1998-01-01
The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarise not just quantum computing, but the whole subject of quantum information theory. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, the review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the EPR experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from classical information theory, and, arguably, quantum from classical physics. Basic quantum information ideas are described, including key distribution, teleportation, data compression, quantum error correction, the universal quantum computer and qua...
Computational Design of Urban Layouts
Wonka, Peter
2015-10-07
A fundamental challenge in computational design is to compute layouts by arranging a set of shapes. In this talk I will present recent urban modeling projects with applications in computer graphics, urban planning, and architecture. The talk will look at different scales of urban modeling (streets, floorplans, parcels). Common challenges in all these modeling problems are the functional and aesthetic constraints that should be respected. The talk also highlights interesting links to geometry processing problems, such as field design and quad meshing.
Multicriteria identification sets method
Kamenev, G. K.
2016-11-01
A multicriteria identification and prediction method for mathematical models of simulation type in the case of several identification criteria (error functions) is proposed. The necessity of the multicriteria formulation arises, for example, when one needs to take into account errors of completely different origins (not reducible to a single characteristic) or when there is no information on the class of noise in the data to be analyzed. An identification sets method is described based on the approximation and visualization of the multidimensional graph of the identification error function and sets of suboptimal parameters. This method allows for additional advantages of the multicriteria approach, namely, the construction and visual analysis of the frontier and the effective identification set (frontier and the Pareto set for identification criteria), various representations of the sets of Pareto effective and subeffective parameter combinations, and the corresponding predictive trajectory tubes. The approximation is based on the deep holes method, which yields metric ɛ-coverings with nearly optimal properties, and on multiphase approximation methods for the Edgeworth-Pareto hull. The visualization relies on the approach of interactive decision maps. With the use of the multicriteria method, multiple-choice solutions of identification and prediction problems can be produced and justified by analyzing the stability of the optimal solution not only with respect to the parameters (robustness with respect to data) but also with respect to the chosen set of identification criteria (robustness with respect to the given collection of functionals).
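The Pareto-effective set at the heart of the method can be illustrated with a minimal sketch (two error criteria, both minimized; the function name and sample data are ours, not the paper's):

```python
def pareto_front(points):
    """Non-dominated points for criteria to be minimized: keep a point unless
    some distinct point is at least as good in both criteria."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]
```

Here each point would be the pair of identification errors of one parameter combination; the effective identification set of the paper is the preimage of this frontier in parameter space, over which the decision maps are then visualized.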
None
2007-01-01
Computer viruses are small software programs that are designed to spread from one computer to another and to interfere with computer operation. A virus might delete data on your computer, use your e-mail program to spread itself to other computers, or even erase everything on your hard disk. Viruses are most easily spread by attachments in e-mail messages or instant messaging messages. That is why it is essential that you never