Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir
2016-12-01
The paper presents two different approaches to reducing the computation time of reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory during single-threaded calculation. The second approach is based on parallel algorithms built on the data structures from the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing technology and also show how the computing time depends on the data structure used.
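To make the data-structure comparison concrete, here is a minimal single-threaded Python sketch (an illustration only, not the authors' algorithm; the toy dynamics, grid size and step count are invented). It grows a grid approximation of a reachable set by step-wise dilation, storing it once as a hash set of cells and once as a dense boolean array:

# Illustrative sketch (not the paper's algorithm): approximate the
# reachable set of unit-speed motion on a grid by step-wise dilation,
# stored either as a Python set of cells or as a boolean array.
import numpy as np

def reach_set_of_cells(start, steps, n=200):
    """Reachable cells after `steps` unit moves, stored in a hash set."""
    current = {start}
    for _ in range(steps):
        nxt = set()
        for (i, j) in current:
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:
                    nxt.add((ni, nj))
        current = nxt
    return current

def reach_boolean_grid(start, steps, n=200):
    """Same computation with a dense boolean mask: dilation by shifting."""
    grid = np.zeros((n, n), dtype=bool)
    grid[start] = True
    for _ in range(steps):
        d = grid.copy()
        d[1:, :] |= grid[:-1, :]   # shift down
        d[:-1, :] |= grid[1:, :]   # shift up
        d[:, 1:] |= grid[:, :-1]   # shift right
        d[:, :-1] |= grid[:, 1:]   # shift left
        grid = d
    return grid

cells = reach_set_of_cells((100, 100), 30)
mask = reach_boolean_grid((100, 100), 30)
assert len(cells) == int(mask.sum())  # both structures describe the same set

Which structure wins depends on occupancy: the vectorized mask shines once the reachable set covers much of the grid, while the hash set avoids touching empty regions; this mirrors the kind of trade-off the paper's tables report.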
Control mechanisms for stochastic biochemical systems via computation of reachable sets.
Lakatos, Eszter; Stumpf, Michael P H
2017-08-01
Controlling the behaviour of cells by rationally guiding molecular processes is an overarching aim of much of synthetic biology. Molecular processes, however, are notoriously noisy and frequently nonlinear. We present an approach to studying the impact of control measures on motifs of molecular interactions that addresses the problems faced in many biological systems: stochasticity, parameter uncertainty and nonlinearity. We show that our reachability analysis formalism can describe the potential behaviour of biological (naturally evolved as well as engineered) systems, and provides a set of bounds on their dynamics at the level of population statistics: for example, we can obtain the possible ranges of means and variances of mRNA and protein expression levels, even in the presence of uncertainty about model parameters.
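As a toy illustration of bounding population statistics under parameter uncertainty (a hedged sketch of the general idea only, not the paper's reachability formalism; the rate intervals are invented), consider constitutive mRNA expression with synthesis rate k and degradation rate g, whose stationary copy-number distribution is Poisson with mean and variance k/g:

# Sketch: interval-valued parameters induce intervals (reachable ranges)
# on the stationary mean and variance of a birth-death mRNA model.
import itertools

k_range = (2.0, 5.0)    # assumed synthesis-rate uncertainty (made up)
g_range = (0.1, 0.5)    # assumed degradation-rate uncertainty (made up)

# The map (k, g) -> k/g is monotone in each argument, so the extreme
# moments are attained at corners of the parameter box.
means = [k / g for k, g in itertools.product(k_range, g_range)]
print("reachable stationary mean (= variance):",
      (min(means), max(means)))   # (4.0, 50.0)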
Internal ellipsoidal estimates of reachable set of impulsive control systems
Energy Technology Data Exchange (ETDEWEB)
Matviychuk, Oksana G. [Institute of Mathematics and Mechanics, Russian Academy of Sciences, 16 S. Kovalevskaya str., Ekaterinburg, 620990, Russia and Ural Federal University, 19 Mira str., Ekaterinburg, 620002 (Russian Federation)]
2014-11-18
A problem of estimating reachable sets of a linear impulsive control system with uncertainty in the initial data is considered. The impulsive controls in the dynamical system belong to the intersection of a special cone with a generalized ellipsoid, both taken in the space of functions of bounded variation. Ellipsoidal state constraints are assumed to be imposed. Algorithms for constructing internal ellipsoidal estimates of reachable sets for such control systems are given, together with numerical simulation results.
Computing and Visualizing Reachable Volumes for Maneuvering Satellites
International Nuclear Information System (INIS)
Jiang, M.; de Vries, W.H.; Pertica, A.J.; Olivier, S.S.
2011-01-01
Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust-vectors, thrust magnitudes and time of burn. At any given instance, the distribution of the 'point-cloud' of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point-cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hull and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we will present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
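The Monte Carlo idea can be sketched in a few lines. The following Python toy (planar ballistic motion with a single random impulsive burn, not real orbital dynamics, and all parameters invented) randomizes thrust direction, magnitude and burn time and takes the resulting point cloud at time T as an approximation of the reachable volume:

# Toy point-cloud sketch of a reachable volume via Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(0)
T, n = 10.0, 5000
dv_max = 0.2                      # assumed maximum delta-v (made up)
x0, v0 = np.zeros(2), np.array([1.0, 0.0])

theta = rng.uniform(0.0, 2.0 * np.pi, n)     # thrust direction
dv = dv_max * rng.uniform(0.0, 1.0, n)       # thrust magnitude
tb = rng.uniform(0.0, T, n)                  # time of burn

burn = dv[:, None] * np.stack([np.cos(theta), np.sin(theta)], axis=1)
# coast to the burn, then coast with the perturbed velocity until T
cloud = (x0 + v0 * T) + burn * (T - tb)[:, None]
print("cloud bounding box:", cloud.min(axis=0), cloud.max(axis=0))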
Exponential formula for the reachable sets of quantum stochastic differential inclusions
International Nuclear Information System (INIS)
Ayoola, E.O.
2001-07-01
We establish an exponential formula for the reachable sets of quantum stochastic differential inclusions (QSDI) which are locally Lipschitzian with convex values. Our main results partially rely on an auxiliary result concerning the density, in the topology of the locally convex space of solutions, of the set of trajectories whose matrix elements are continuously differentiable. By applying the exponential formula, we obtain results concerning convergence of the discrete approximations of the reachable set of the QSDI. This extends similar results of Wolenski for classical differential inclusions to the present noncommutative quantum setting. (author)
Directory of Open Access Journals (Sweden)
Elise Cormie-Bowins
2012-10-01
We consider the problem of computing reachability probabilities: given a Markov chain, an initial state of the Markov chain, and a set of goal states of the Markov chain, what is the probability of reaching any of the goal states from the initial state? This problem can be reduced to solving a linear equation Ax = b for x, where A is a matrix and b is a vector. We consider two iterative methods to solve the linear equation: the Jacobi method and the biconjugate gradient stabilized (BiCGStab) method. For both methods, a sequential and a parallel version have been implemented. The parallel versions have been implemented on the compute unified device architecture (CUDA) so that they can be run on an NVIDIA graphics processing unit (GPU). From our experiments we conclude that as the size of the matrix increases, the CUDA implementations outperform the sequential implementations. Furthermore, the BiCGStab method performs better than the Jacobi method for dense matrices, whereas the Jacobi method does better for sparse ones. Since the reachability probabilities problem plays a key role in probabilistic model checking, we also compared the implementations for matrices obtained from a probabilistic model checker. Our experiments support the conjecture by Bosnacki et al. that the Jacobi method is superior to Krylov subspace methods, a class to which the BiCGStab method belongs, for probabilistic model checking.
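A minimal sequential sketch of the Jacobi-style iteration for reachability probabilities is given below (the CUDA parallelization is the paper's contribution and is not shown; the chain, goal and failure states are made up):

# Fixed-point (Jacobi-style) iteration: x[s] = P(reach goal from s),
# solved as x = P x with x[goal] pinned to 1.
import numpy as np

P = np.array([[0.2, 0.3, 0.2, 0.0, 0.3],
              [0.0, 0.1, 0.4, 0.4, 0.1],
              [0.1, 0.0, 0.3, 0.5, 0.1],
              [0.0, 0.0, 0.0, 1.0, 0.0],    # state 3 = goal (absorbing)
              [0.0, 0.0, 0.0, 0.0, 1.0]])   # state 4 = failure (absorbing)
goal = 3

x = np.zeros(len(P))
x[goal] = 1.0
for _ in range(1000):                  # Jacobi-style sweep
    x_new = P @ x
    x_new[goal] = 1.0
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new
print(x)   # probability of reaching state 3 from each state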
Zhong, Zhixiong; Zhu, Yanzheng; Ahn, Choon Ki
2018-03-20
In this paper, we address the problem of reachable set estimation for continuous-time Takagi-Sugeno (T-S) fuzzy systems subject to unknown output delays. Based on the reachable set concept, a new controller design method is also discussed for such systems. An effective method is developed to attenuate the negative impact of the unknown output delays, which can degrade the performance and stability of the system. First, an augmented fuzzy observer is proposed to enable synchronous estimation of the system state and of the disturbance term caused by the unknown output delays, which ensures that the reachable set of the estimation error is bounded via the intersection operation of ellipsoids. Then, a compensation technique is employed to eliminate the influence of the unknown output delays on the system performance. Finally, the effectiveness and correctness of the obtained theories are verified by the tracking control of autonomous underwater vehicles.
Extending ALCQIO with reachability
Kotek, Tomer; Simkus, Mantas; Veith, Helmut; Zuleger, Florian
2014-01-01
We introduce a description logic ALCQIO_{b,Re} which adds reachability assertions to ALCQIO, a sub-logic of the two-variable fragment of first order logic with counting quantifiers. ALCQIO_{b,Re} is well-suited for applications in software verification and shape analysis. Shape analysis requires expressive logics which can express reachability and have good computational properties. We show that ALCQIO_{b,Re} can describe complex data structures with a high degree of sharing and allows compos...
Sampling-based motion planning with reachable volumes: Theoretical foundations
McMahon, Troy; Thomas, Shawna; Amato, Nancy M.
2014-01-01
We introduce a new concept, reachable volumes, that denotes the set of points that the end effector of a chain or linkage can reach. We show that the reachable volume of a chain is equivalent to the Minkowski sum of the reachable volumes of its links, and give an efficient method for computing reachable volumes. We present a method for generating configurations using reachable volumes that is applicable to various types of robots including open and closed chain robots, tree-like robots, and complex robots including both loops and branches. We also describe how to apply constraints (both on end effectors and internal joints) using reachable volumes. Unlike previous methods, reachable volumes work for spherical and prismatic joints as well as planar joints. Visualizations of reachable volumes can allow an operator to see what positions the robot can reach and can guide robot design. We present visualizations of reachable volumes for representative robots including closed chains and graspers as well as for examples with joint and end effector constraints.
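The Minkowski-sum characterization has a simple closed form in the special case of a chain of links with unrestricted rotational joints: each link's reachable volume is a sphere (a disc boundary in 2D) of radius equal to its length, and their Minkowski sum is an annulus or spherical shell. The sketch below computes its radii by interval arithmetic on the link lengths (a standard special case for illustration; the paper's construction also handles prismatic joints and constraints):

# Reachable end-effector distances for a chain with free rotational joints.
def chain_reachable_annulus(lengths):
    """Return (r_min, r_max) of end-effector distances from the base."""
    total = sum(lengths)
    longest = max(lengths)
    r_max = total
    # the longest link can be "folded against" all of the others
    r_min = max(0.0, 2.0 * longest - total)
    return r_min, r_max

print(chain_reachable_annulus([3.0, 1.0, 1.0]))  # (1.0, 5.0)
print(chain_reachable_annulus([1.0, 1.0, 1.0]))  # (0.0, 3.0)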
Directory of Open Access Journals (Sweden)
Arnaud Gotlieb
2013-02-01
Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging as it requires dealing with an infinite number of states using standard backward or forward exploration strategies. An approach that we call constraint-based reachability is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, array and memory manipulations with the fundamental notion of constraint over a computational domain. By combining constraint filtering and abstraction techniques, constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of classical filtering consistencies used in Constraint Programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
Deducing the reachable space from fingertip positions.
Hai-Trieu Pham; Pathirana, Pubudu N
2015-01-01
The reachable space of the hand has received significant interest from medical researchers and health professionals. The reachable space is often computed from joint angles acquired with a motion capture system such as gloves or markers attached to each bone of the finger. However, the contact between the hand and the device can cause difficulties, particularly for hands with injuries, burns or certain dermatological conditions. This paper introduces an approach to finding the reachable space of the hand with a non-contact measurement, utilizing the Leap Motion Controller. The approach is based on the analysis of each position in the motion path of the fingertip acquired by the Leap Motion Controller. For each position of the fingertip, the inverse kinematic problem is solved under the physiological constraints of the human hand to find the set of all possible configurations of the three finger joints. Subsequently, all the sets are unified to form a set of all possible configurations specific to that motion. Finally, the reachable space is computed from the configurations corresponding to the complete extension and the complete flexion of the finger joint angles in this set.
Planning with Reachable Distances
Tang, Xinyu; Thomas, Shawna; Amato, Nancy M.
2009-01-01
Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the robot's number of degrees of freedom. In addition to supporting efficient sampling, we show that the RD-space formulation naturally supports planning, and in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1000 links in time comparable to open chain sampling, and we can generate samples for 1000-link multi-loop systems of varying topology in less than a second.
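The following Python sketch is a toy planar instantiation of the RD-space idea (not the authors' implementation; the function names and chain are mine): it samples a single-loop closed chain by sampling one reachable distance per joint, so the work is linear in the number of links, and loop closure holds by construction for chains where closure is feasible:

import math, random

def annulus(lengths):
    """Reachable distance interval spanned by a sub-chain (RD interval)."""
    total = sum(lengths)
    return (max(0.0, 2.0 * max(lengths) - total), total) if lengths else (0.0, 0.0)

def place(p, a, l, d, rng):
    """Point at distance l from p and distance d from a (2D circle
    intersection; one of the two solutions is chosen at random)."""
    dx, dy = a[0] - p[0], a[1] - p[1]
    e = math.hypot(dx, dy)
    if e == 0.0:
        ang = rng.uniform(0.0, 2.0 * math.pi)
        return (p[0] + l * math.cos(ang), p[1] + l * math.sin(ang))
    x = (l * l - d * d + e * e) / (2.0 * e)
    h = math.sqrt(max(0.0, l * l - x * x))
    ux, uy = dx / e, dy / e
    s = rng.choice((-1.0, 1.0))
    return (p[0] + x * ux - s * h * uy, p[1] + x * uy + s * h * ux)

def sample_closed_chain(lengths, seed=None):
    """Sample one configuration of a planar single-loop closed chain by
    sampling reachable distances (linear in the number of links)."""
    rng = random.Random(seed)
    anchor = (0.0, 0.0)
    p, d_cur = anchor, 0.0      # chain starts and must end at the anchor
    pts = [p]
    for i, l in enumerate(lengths[:-1]):
        lo_rest, hi_rest = annulus(lengths[i + 1:])
        # intersect the triangle inequality with the remaining RD interval
        lo = max(abs(d_cur - l), lo_rest)
        hi = min(d_cur + l, hi_rest)
        d_next = rng.uniform(lo, hi)
        p = place(p, anchor, l, d_next, rng)
        pts.append(p)
        d_cur = d_next
    pts.append(anchor)          # the last link closes the loop exactly
    return pts

pts = sample_closed_chain([1.0, 1.0, 1.0, 1.0, 1.0, 1.0], seed=1)
# consecutive points are exactly one link apart and the loop closes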
Reachable Sets of Hidden CPS Sensor Attacks: Analysis and Synthesis Tools
Murguia, Carlos; van de Wouw, N.; Ruths, Justin; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri
2017-01-01
For given system dynamics, control structure, and fault/attack detection procedure, we provide mathematical tools, in terms of Linear Matrix Inequalities (LMIs), for characterizing and minimizing the set of states that sensor attacks can induce in the system while keeping the alarm rate of the
McMahon, Troy; Thomas, Shawna; Amato, Nancy M.
2015-01-01
Reachable volumes are a new technique that allows one to efficiently restrict sampling to feasible/reachable regions of the planning space even for high degree of freedom and highly constrained problems. However, they have so far only been applied to graph-based sampling-based planners. In this paper we develop the methodology to apply reachable volumes to tree-based planners such as Rapidly-Exploring Random Trees (RRTs). In particular, we propose a reachable volume RRT called RVRRT that can solve high degree of freedom problems and problems with constraints. To do so, we develop a reachable volume stepping function, a reachable volume expand function, and a distance metric based on these operations. We also present a reachable volume local planner to ensure that local paths satisfy constraints for methods such as PRMs. We show experimentally that RVRRTs can solve constrained problems with as many as 64 degrees of freedom and unconstrained problems with as many as 134 degrees of freedom. RVRRTs can solve problems more efficiently than existing methods, requiring fewer nodes and collision detection calls. We also show that it is capable of solving difficult problems that existing methods cannot.
Distributed Algorithms for Time Optimal Reachability Analysis
DEFF Research Database (Denmark)
Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand
2016-01-01
Time optimal reachability analysis is a novel model based technique for solving scheduling and planning problems. After modeling them as reachability problems using timed automata, a real-time model checker can compute the fastest trace to the goal states, which constitutes a time optimal schedule. We propose distributed computing to accelerate time optimal reachability analysis. We develop five distributed state exploration algorithms and implement them in Uppaal, enabling it to exploit the compute resources of a dedicated model-checking cluster. We experimentally evaluate the implemented algorithms with four models in terms of their ability to compute near- or proven-optimal solutions, their scalability, time and memory consumption, and communication overhead. Our results show that distributed algorithms work much faster than sequential algorithms and have good speedup in general.
Large scale analysis of signal reachability.
Todor, Andrei; Gabr, Haitham; Dobra, Alin; Kahveci, Tamer
2014-06-15
Major disorders, such as leukemia, have been shown to alter the transcription of genes. Understanding how gene regulation is affected by such aberrations is of utmost importance. One promising strategy toward this objective is to compute whether signals can reach the transcription factors through the transcription regulatory network (TRN). Due to the uncertainty of the regulatory interactions, this is a #P-complete problem and thus solving it for very large TRNs remains a challenge. We develop a novel and scalable method to compute the probability that a signal originating at any given set of source genes can arrive at any given set of target genes (i.e., transcription factors) when the topology of the underlying signaling network is uncertain. Our method tackles this problem for large networks while providing a provably accurate result. It follows a divide-and-conquer strategy: we break down the given network into a sequence of non-overlapping subnetworks such that reachability can be computed autonomously and sequentially on each subnetwork. We represent each interaction using a small polynomial. The product of these polynomials expresses the different scenarios in which a signal can or cannot reach the target genes from the source genes. We introduce polynomial collapsing operators for each subnetwork; these operators reduce the size of the resulting polynomial and thus the computational complexity dramatically. We show that our method scales to entire human regulatory networks in only seconds, while existing methods fail beyond a few tens of genes and interactions. We demonstrate that our method can successfully characterize key reachability characteristics of the entire transcription regulatory networks of patients affected by eight different subtypes of leukemia, as well as those from healthy control samples. All the datasets and code used in this article are available at bioinformatics.cise.ufl.edu/PReach/scalable.htm.
Reachability for Finite-state Process Algebras Using Horn Clauses
DEFF Research Database (Denmark)
Skrypnyuk, Nataliya; Nielson, Flemming
2013-01-01
The results of the Data Flow Analysis are used in order to build a set of Horn clauses whose least model corresponds to an overapproximation of the reachable states. The computed model can be refined after each transition, and the algorithm runs until either a state whose reachability should be checked is encountered, or it is not in the least model for all constructed states and thus is definitely unreachable. The advantages of the algorithm are that in many cases only a part of the Labelled Transition System will be built, which leads to lower time and memory consumption. Also, it is not necessary to save all the encountered states, which leads to a further reduction of the memory requirements of the algorithm.
Reachability Analysis in Probabilistic Biological Networks.
Gabr, Haitham; Todor, Andrei; Dobra, Alin; Kahveci, Tamer
2015-01-01
Extra-cellular molecules trigger a response inside the cell by initiating a signal at special membrane receptors (i.e., sources), which is then transmitted to reporters (i.e., targets) through various chains of interactions among proteins. Understanding whether such a signal can reach from membrane receptors to reporters is essential in studying the cell response to extra-cellular events. This problem is drastically complicated due to the unreliability of the interaction data. In this paper, we develop a novel method, called PReach (Probabilistic Reachability), that precisely computes the probability that a signal can reach from a given collection of receptors to a given collection of reporters when the underlying signaling network is uncertain. This is a very difficult computational problem with no known polynomial-time solution. PReach represents each uncertain interaction as a bi-variate polynomial. It transforms the reachability problem to a polynomial multiplication problem. We introduce novel polynomial collapsing operators that associate polynomial terms with possible paths between sources and targets as well as the cuts that separate sources from targets. These operators significantly shrink the number of polynomial terms and thus the running time. PReach has much better time complexity than the recent solutions for this problem. Our experimental results on real data sets demonstrate that this improvement leads to orders of magnitude of reduction in the running time over the most recent methods. Availability: All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/PReach/.
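As a brute-force reference for the problem PReach solves (exponential enumeration of edge subsets, which is exactly the blow-up that the paper's polynomial-collapsing operators avoid; the example network and probabilities below are invented):

# Exact reachability probability over edges that exist independently
# with probability p, by enumerating all edge subsets (exponential).
from itertools import product

def reach_probability(edges, source, target):
    """edges: dict {(u, v): p}. Returns P(target reachable from source)."""
    items = list(edges.items())
    total = 0.0
    for bits in product([0, 1], repeat=len(items)):
        prob = 1.0
        present = set()
        for ((u, v), p), b in zip(items, bits):
            prob *= p if b else (1.0 - p)
            if b:
                present.add((u, v))
        seen, stack = {source}, [source]   # BFS over this scenario
        while stack:
            u = stack.pop()
            for (a, b2) in present:
                if a == u and b2 not in seen:
                    seen.add(b2)
                    stack.append(b2)
        if target in seen:
            total += prob
    return total

edges = {("src", "a"): 0.9, ("a", "tf"): 0.8, ("src", "tf"): 0.3}
print(reach_probability(edges, "src", "tf"))  # 1 - 0.7*(1 - 0.72) = 0.804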
Reachability problems in scheduling and planning
Eggermont, C.E.J.
2012-01-01
Reachability problems are fundamental in the context of many mathematical models and abstractions which describe various computational processes. Intuitively, when many objects move within a shared environment, objects may have to wait for others before moving and so slow down, or objects may even
Bidirectional reachability-based modules
CSIR Research Space (South Africa)
Nortje, R
2011-07-01
The authors introduce an algorithm for MinA extraction in EL based on bidirectional reachability. They obtain a significant reduction in the size of modules extracted at almost no additional cost to that of extracting standard reachability...
The power of reachability testing for timed automata
DEFF Research Database (Denmark)
Aceto, Luca; Bouyer, Patricia; Burgueno, A.
2003-01-01
The computational engine of the verification tool UPPAAL consists of a collection of efficient algorithms for the analysis of reachability properties of systems. Model-checking of properties other than plain reachability ones may currently be carried out in such a tool as follows. Given a property ... be reached. Finally, the property language characterizing the power of reachability testing is used to provide a definition of characteristic properties with respect to a timed version of the ready simulation preorder, for nodes of tau-free, deterministic timed automata.
Time Optimal Reachability Analysis Using Swarm Verification
DEFF Research Database (Denmark)
Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand
2016-01-01
Time optimal reachability analysis employs model-checking to compute goal states that can be reached from an initial state with a minimal accumulated time duration. The model-checker may produce a corresponding diagnostic trace which can be interpreted as a feasible schedule for many scheduling and planning problems, response time optimization, etc. We propose swarm verification to accelerate time optimal reachability using the real-time model-checker Uppaal. In swarm verification, a large number of model checker instances execute in parallel on a computer cluster using different, typically randomized, search strategies. We develop four swarm algorithms and evaluate them with four models in terms of scalability, and time and memory consumption. Three of these cooperate by exchanging costs of intermediate solutions to prune the search using a branch-and-bound approach. Our results show that swarm...
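The cooperation scheme can be illustrated on a plain weighted graph (a conceptual sketch only; the paper's setting is timed automata explored by Uppaal instances on a cluster, and the toy graph below is made up): several randomized searches run with different seeds and share the best cost found so far as a branch-and-bound bound:

# Randomized searches sharing an incumbent cost bound (swarm idea).
import random

GRAPH = {  # made-up priced transition system: node -> [(succ, cost)]
    "s": [("a", 2), ("b", 5)],
    "a": [("b", 2), ("goal", 9)],
    "b": [("goal", 3)],
    "goal": [],
}

def randomized_dfs(seed, best):
    rng = random.Random(seed)
    stack = [("s", 0)]
    while stack:
        node, cost = stack.pop()
        if cost >= best[0]:
            continue                      # branch-and-bound pruning
        if node == "goal":
            best[0] = cost                # publish an improved incumbent
            continue
        succs = GRAPH[node][:]
        rng.shuffle(succs)                # each worker explores differently
        stack.extend((n, cost + c) for n, c in succs)

best = [float("inf")]                     # shared incumbent bound
for seed in range(8):                     # sequential stand-in for a swarm
    randomized_dfs(seed, best)
print("minimal cost to goal:", best[0])   # 7 via s -> a -> b -> goal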
Iterable Forward Reachability Analysis of Monitor-DPNs
Directory of Open Access Journals (Sweden)
Benedikt Nordhoff
2013-09-01
There is a close connection between data-flow analysis and model checking as observed and studied in the nineties by Steffen and Schmidt. This indicates that automata-based analysis techniques developed in the realm of infinite-state model checking can be applied as data-flow analyzers that interpret complex control structures, which motivates the development of such analysis techniques for ever more complex models. One approach proposed by Esparza and Knoop is based on computation of predecessor or successor sets for sets of automata configurations. Our goal is to adapt and exploit this approach for analysis of multi-threaded Java programs. Specifically, we consider the model of Monitor-DPNs for concurrent programs. Monitor-DPNs precisely model unbounded recursion, dynamic thread creation, and synchronization via well-nested locks with finite abstractions of procedure- and thread-local state. Previous work on this model showed how to compute regular predecessor sets of regular configurations and tree-regular successor sets of a fixed initial configuration. By combining and extending different previously developed techniques we show how to compute tree-regular successor sets of tree-regular sets. Thereby we obtain an iterable, lock-sensitive forward reachability analysis. We implemented the analysis for Java programs and applied it to information flow control and data race detection.
Bisimulation, Logic and Reachability Analysis for Markovian Systems
Bujorianu, L.M.; Bujorianu, M.C.
2008-01-01
In recent years, there has been a large amount of investigation into the safety verification of uncertain continuous systems. In engineering and applied mathematics, this verification is called stochastic reachability analysis, while in computer science it is called probabilistic model checking.
McMahon, Troy; Thomas, Shawna; Amato, Nancy M.
2014-01-01
Reachable volumes are a geometric representation of the regions the joints of a robot can reach. They can be used to generate constraint satisfying samples for problems including complicated linkage robots (e.g. closed chains and graspers). They can also be used to assist robot operators and to help in robot design. We show that reachable volumes have an O(1) complexity in unconstrained problems as well as in many constrained problems. We also show that reachable volumes can be computed in linear time and that reachable volume samples can be generated in linear time in problems without constraints. We experimentally validate reachable volume sampling, both with and without constraints on end effectors and/or internal joints. We show that reachable volume samples are less likely to be invalid due to self-collisions, making reachable volume sampling significantly more efficient for higher dimensional problems. We also show that these samples are easier to connect than others, resulting in better connected roadmaps. We demonstrate that our method can be applied to 262-dof, multi-loop, and tree-like linkages including combinations of planar, prismatic and spherical joints. In contrast, existing methods either cannot be used for these problems or do not produce good quality solutions.
Decomposing Huge Networks into Skeleton Graphs by Reachable Relations
2017-06-07
... efficiently process approximate queries, i.e., reachable nodes, on the original dataset, i.e., the given network. Finally, by focusing on spatial networks ... centralized control (e.g., the Arab Spring). These problems have mostly been studied from the viewpoint of identifying influential nodes under some ... where we set k = 26 for calculation of the bottom-k sketches of all the nodes. Figure 1(a) compares the actual processing times of these methods.
Stochastic Reachability Analysis of Hybrid Systems
Bujorianu, Luminita Manuela
2012-01-01
Stochastic reachability analysis (SRA) is a method of analyzing the behavior of control systems which mix discrete and continuous dynamics. For probabilistic discrete systems it has been shown to be a practical verification method but for stochastic hybrid systems it can be rather more. As a verification technique SRA can assess the safety and performance of, for example, autonomous systems, robot and aircraft path planning and multi-agent coordination but it can also be used for the adaptive control of such systems. Stochastic Reachability Analysis of Hybrid Systems is a self-contained and accessible introduction to this novel topic in the analysis and development of stochastic hybrid systems. Beginning with the relevant aspects of Markov models and introducing stochastic hybrid systems, the book then moves on to coverage of reachability analysis for stochastic hybrid systems. Following this build up, the core of the text first formally defines the concept of reachability in the stochastic framework and then...
Reachability modules for the description logic SRIQ
CSIR Research Space (South Africa)
Nortje, R
2013-12-01
In this paper we investigate module extraction for the Description Logic SRIQ. We formulate modules in terms of the reachability problem for directed hypergraphs. Using inseparability relations, we investigate the module-theoretic properties...
Almost computably enumerable families of sets
International Nuclear Information System (INIS)
Kalimullin, I Sh
2008-01-01
An almost computably enumerable family that is not Φ'-computably enumerable is constructed. Moreover, it is established that for any computably enumerable (c.e.) set A there exists a family that is X-c.e. if and only if the set X is not A-computable. Bibliography: 5 titles.
Edwards, Alistair
2006-01-01
This book is aimed at students who are thinking of studying Computer Science or a related topic at university. Part One is a brief introduction to the topics that make up Computer Science, some of which you would expect to find as course modules in a Computer Science programme. These descriptions should help you to tell the difference between Computer Science as taught in different departments and so help you to choose a course that best suits you. Part Two builds on what you have learned about the nature of Computer Science by giving you guidance in choosing universities and making your applications.
Computer Language Settings and Canadian Spellings
Shuttleworth, Roger
2011-01-01
The language settings used on personal computers interact with the spell-checker in Microsoft Word, which directly affects the flagging of spellings that are deemed incorrect. This study examined the language settings of personal computers owned by a group of Canadian university students. Of 21 computers examined, only eight had their Windows…
Reachability analysis of real-time systems using time Petri nets.
Wang, J; Deng, Y; Xu, G
2000-01-01
Time Petri nets (TPNs) are a popular Petri net model for the specification and verification of real-time systems. A fundamental and widely applied method for analyzing Petri nets is reachability analysis. The existing technique for reachability analysis of TPNs, however, is not suitable for timing property verification because one cannot derive the end-to-end delay in task execution, an important issue for time-critical systems, from the reachability tree constructed using that technique. In this paper, we present a new reachability-based analysis technique for TPNs for timing property analysis and verification that effectively addresses the problem. Our technique is based on a concept called clock-stamped state class (CS-class). With the reachability tree generated based on CS-classes, we can directly compute the end-to-end time delay in task execution. Moreover, a CS-class can be uniquely mapped to a traditional state class based on which the conventional reachability tree is constructed. Therefore, our CS-class-based analysis technique is more general than the existing technique. We show how to apply this technique to timing property verification of the TPN model of a command and control (C2) system.
Computability and Representations of the Zero Set
P.J. Collins (Pieter)
2008-01-01
In this note we give a new representation for closed sets under which the robust zero set of a function is computable. We call this representation the component cover representation. The computation of the zero set is based on topological index theory, the most powerful tool for finding
International Nuclear Information System (INIS)
Gallego V, Luis Eduardo; Montana Ch, Johny Hernan; Tovar P, Andres Fernando; Amortegui, Francisco
2000-01-01
The GMT program allows the analysis of earthing (grounding) systems under DC and low-frequency AC voltages for diverse configurations composed of interconnected cylindrical electrodes in homogeneous or stratified (two-layer) soil. This analysis covers, among other aspects: calculation of the earthing resistance, the ground potential rise (GPR) of the system, calculation of the current densities in the conductors, calculation of potentials at any point on the soil surface (profiles and surfaces), and calculation of step and touch voltages. It also performs the interpretation of resistivity measurements obtained with the Wenner and Schlumberger methods, finding a two-layer soil model.
Reachability for Finite-State Process Algebras Using Static Analysis
DEFF Research Database (Denmark)
Skrypnyuk, Nataliya; Nielson, Flemming
2011-01-01
In this work we present an algorithm for solving the reachability problem in finite systems that are modelled with process algebras. Our method uses Static Analysis, in particular, Data Flow Analysis, of the syntax of a process algebraic system with multi-way synchronisation. The results of the Data Flow Analysis are used in order to "cut off" some of the branches in the reachability analysis that are not important for determining whether or not a state is reachable. In this way, it is possible for our reachability algorithm to avoid building large parts of the system altogether and still solve the reachability problem in a precise way.
Answer Set Programming and Other Computing Paradigms
Meng, Yunsong
2013-01-01
Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods originating from building propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions to…
Reachability by paths of bounded curvature in a convex polygon
Ahn, Heekap; Cheong, Otfried; Matoušek, Jiřǐ; Vigneron, Antoine E.
2012-01-01
Let B be a point robot moving in the plane, whose path is constrained to forward motions with curvature at most 1, and let P be a convex polygon with n vertices. Given a starting configuration (a location and a direction of travel) for B inside P, we characterize the region of all points of P that can be reached by B, and show that it has complexity O(n). We give an O(n^2) time algorithm to compute this region. We show that a point is reachable only if it can be reached by a path of type CCSCS, where C denotes a unit circle arc and S denotes a line segment.
Reachability cuts for the vehicle routing problem with time windows
DEFF Research Database (Denmark)
Lysgaard, Jens
2004-01-01
This paper introduces a class of cuts, called reachability cuts, for the Vehicle Routing Problem with Time Windows (VRPTW). Reachability cuts are closely related to cuts derived from precedence constraints in the Asymmetric Traveling Salesman Problem with Time Windows and to k-path cuts...
Functional range of movement of the hand: declination angles to reachable space.
Pham, Hai Trieu; Pathirana, Pubudu N; Caelli, Terry
2014-01-01
The measurement of the range of hand joint movement is an essential part of clinical practice and rehabilitation. Current methods use the three finger joint declination angles of the metacarpophalangeal, proximal interphalangeal and distal interphalangeal joints. In this paper we propose an alternative form of measurement for finger movement. Using the notion of reachable space instead of declination angles has significant advantages. Firstly, it provides a visual and quantifiable method that therapists, insurance companies and patients can easily use to understand the functional capabilities of the hand. Secondly, it eliminates the redundant declination angle constraints. Finally, reachable space, defined by a set of reachable fingertip positions, can be measured and constructed by using a modern camera such as the Creative Senz3D or built-in hand gesture sensors such as the Leap Motion Controller. The use of cameras or optical-type sensors for this purpose has considerable benefits, such as eliminating therapist errors, requiring minimal therapist involvement, providing non-contact measurement, and saving valuable clinician time. A comparison between using declination angles and reachable space was made based on Hume's experiment on functional range of movement to demonstrate the efficiency of this new approach.
Probabilistic Reachability for Parametric Markov Models
DEFF Research Database (Denmark)
Hahn, Ernst Moritz; Hermanns, Holger; Zhang, Lijun
2011-01-01
Given a parametric Markov model, we consider the problem of computing the rational function expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested first converting the Markov chain into a finite automaton, from which a regular expression...
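For a tiny parametric chain the principal problem can be solved directly by symbolic linear algebra (a sketch of the problem statement rather than of Daws' automaton/regular-expression construction; the chain below is made up):

# The reachability probability of a parametric Markov chain is a
# rational function of the parameter p.
import sympy as sp

p = sp.symbols("p", positive=True)
x0, x1 = sp.symbols("x0 x1")        # reach probabilities from s0, s1

# s0 -p-> s1, s0 -(1-p)-> fail;  s1 -p-> goal, s1 -(1-p)-> s0
eqs = [sp.Eq(x0, p * x1), sp.Eq(x1, p + (1 - p) * x0)]
sol = sp.solve(eqs, [x0, x1], dict=True)[0]
print(sp.simplify(sol[x0]))         # p**2/(p**2 - p + 1)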
Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems
Xinyu Tang,
2010-01-25
Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second.
On the reachability and observability of path and cycle graphs
Parlangeli, Gianfranco; Notarstefano, Giuseppe
2011-01-01
In this paper we investigate the reachability and observability properties of a network system, running a Laplacian based average consensus algorithm, when the communication graph is a path or a cycle. In more detail, we provide necessary and sufficient conditions, based on simple algebraic rules from number theory, to characterize all and only the nodes from which the network system is reachable (respectively observable). Interesting immediate corollaries of our results are: (i) a path graph...
TAPAAL and Reachability Analysis of P/T Nets
DEFF Research Database (Denmark)
Jensen, Jonas Finnemann; Nielsen, Thomas Søndersø; Østergaard, Lars Kærlund
2016-01-01
We discuss selected model checking techniques used in the tool TAPAAL for the reachability analysis of weighted Petri nets with inhibitor arcs. We focus on techniques that had the most significant effect at the 2015 Model Checking Contest (MCC). While the techniques are mostly well known, our contribution lies in their adaptation to the MCC reachability queries, their efficient implementation, and the evaluation of their performance on a large variety of nets from MCC'15.
International Nuclear Information System (INIS)
Lemattre, Thibault
2013-01-01
The design of operational control architectures is a very important step in the design of energy production systems. This step consists in mapping the functional architecture of the system onto its hardware architecture while respecting capacity and safety constraints, i.e. in allocating control functions to a set of controllers while respecting these constraints. The work presented in this thesis comprises: i) a formalization of the data and constraints of the function allocation problem; ii) a mapping method, by reachability analysis, based on a request/response mechanism in a network of communicating automata with integer variables; iii) a comparison between this method and a resolution method by integer linear programming. The results of this work have been validated on examples of actual size and open the way to the coupling of reachability analysis and integer linear programming for the resolution of satisfaction problems on non-linear constraint systems. (author)
Causal Set Generator and Action Computer
Cunningham, William; Krioukov, Dmitri
2017-01-01
The causal set approach to quantum gravity has gained traction over the past three decades, but numerical experiments involving causal sets have been limited to relatively small scales. The software suite presented here provides a new framework for the generation and study of causal sets. Its efficiency surpasses previous implementations by several orders of magnitude. We highlight several important features of the code, including the compact data structures and the $O(N^2)$ causal set generation...
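A minimal sketch of causal set generation by sprinkling (far simpler than the suite described here, which targets general spacetimes and large N) builds the O(N^2) pairwise causal relation for points sprinkled into a 1+1 dimensional causal diamond:

# Sprinkle N points into a unit causal diamond and compute the relation.
import numpy as np

rng = np.random.default_rng(42)
N = 200
# light-cone coordinates u, v in [0, 1) make the diamond a unit square
u, v = rng.random(N), rng.random(N)

# x precedes y iff u_x < u_y and v_x < v_y (causal/timelike order)
C = (u[:, None] < u[None, :]) & (v[:, None] < v[None, :])
print("causal relations:", int(C.sum()), "of", N * (N - 1) // 2, "pairs")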
Reachable Sets for Multiple Asteroid Sample Return Missions
2005-12-01
[Figure 15, Actual Range of State Derivatives: table residue listing minimum and maximum derivative values for Vtdot and Massdot in scaled units, together with lower/upper control bounds.]
Minimum-Cost Reachability for Priced Timed Automata
DEFF Research Database (Denmark)
Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas Seidelin
2001-01-01
This paper introduces the model of linearly priced timed automata as an extension of timed automata, with prices on both transitions and locations. For this model we consider the minimum-cost reachability problem: i.e. given a linearly priced timed automaton and a target state, determine the minimum cost of executions from the initial state to the target state. This problem generalizes the minimum-time reachability problem for ordinary timed automata. We prove decidability of this problem by offering an algorithmic solution, which is based on a combination of branch-and-bound techniques and a new notion of priced regions. The latter allows symbolic representation and manipulation of reachable states together with the cost of reaching them.
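A crude way to see the role of prices on both transitions and locations is to discretize time and run uniform-cost search (the paper's contribution is precisely to avoid such discretization via symbolic priced regions; the automaton below is made up):

# Uniform-cost search over a time-discretized priced automaton.
import heapq

RATE = {"A": 2, "B": 1}                 # price per time unit spent waiting
EDGES = {                               # loc -> [(succ, clock guard, price)]
    "A": [("B", 1, 3)],                 # A -> B allowed once clock >= 1
    "B": [("goal", 2, 1)],              # B -> goal allowed once clock >= 2
}

def min_cost(start="A", horizon=10):
    # state = (location, clock); either wait one time unit or take an edge
    pq = [(0, start, 0)]
    best = {}
    while pq:
        cost, loc, clk = heapq.heappop(pq)
        if loc == "goal":
            return cost
        if best.get((loc, clk), float("inf")) <= cost or clk > horizon:
            continue
        best[(loc, clk)] = cost
        heapq.heappush(pq, (cost + RATE[loc], loc, clk + 1))      # wait
        for succ, guard, price in EDGES.get(loc, []):
            if clk >= guard:
                heapq.heappush(pq, (cost + price, succ, clk))     # jump
    return None

print(min_cost())  # 2 (wait in A) + 3 + 1 (wait in B) + 1 = 7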
Mobility Tolerant Firework Routing for Improving Reachability in MANETs
Directory of Open Access Journals (Sweden)
Gen Motoyoshi
2014-03-01
In this paper, we investigate our mobility-assisted and adaptive broadcast routing mechanism, called Mobility Tolerant Firework Routing (MTFR), which utilizes the concept of potentials for routing and improves node reachability, especially in situations with high mobility, by including a broadcast mechanism. We perform detailed evaluations by simulations in a mobile environment and demonstrate the advantages of MTFR over conventional potential-based routing. In particular, we show that MTFR produces better reachability in many aspects at the expense of a small additional transmission delay and intermediate traffic overhead, making MTFR a promising routing protocol and feasible for future mobile Internet infrastructures.
Computations of Quasiconvex Hulls of Isotropic Sets
Czech Academy of Sciences Publication Activity Database
Heinz, S.; Kružík, Martin
2017-01-01
Vol. 24, No. 2 (2017), pp. 477-492. ISSN 0944-6532. R&D Projects: GA ČR GA14-15264S; GA ČR(CZ) GAP201/12/0671. Institutional support: RVO:67985556. Keywords: quasiconvexity; isotropic compact sets; matrices. Subject RIV: BA - General Mathematics. OECD field: Pure mathematics. Impact factor: 0.496 (2016). http://library.utia.cas.cz/separaty/2017/MTR/kruzik-0474874.pdf
Set operads in combinatorics and computer science
Méndez, Miguel A
2015-01-01
This monograph has two main objectives. The first is to give a self-contained exposition of the relevant facts about set operads in the context of combinatorial species and its operations. This approach has various advantages: one of them is that the definitions of the combinatorial operations on species (product, sum, substitution and derivative) are simple and natural. They were designed as the set-theoretical counterparts of the homonymous operations on exponential generating functions, giving an immediate insight into their combinatorial meaning. The second objective is more ambitious. Before formulating it, the authors present a brief historical account of the sources of decomposition theory. For more than forty years, decompositions of discrete structures have been studied in different branches of discrete mathematics: combinatorial optimization, network and graph theory, switching design or boolean functions, simple multi-person games and clutters, etc.
Defining Effectiveness Using Finite Sets A Study on Computability
DEFF Research Database (Denmark)
Macedo, Hugo Daniel dos Santos; Haeusler, Edward H.; Garcia, Alex
2016-01-01
...finite sets and uses category theory as its mathematical foundation. The model relies on the fact that every function between finite sets is computable, and that the finite composition of such functions is also computable. Our approach is an alternative to the traditional model-theoretical based works, which rely on (ZFC) set theory as a mathematical foundation, and our approach is also novel when compared to the already existing works using category theory to approach computability results. Moreover, we show how to encode Turing machine computations in the model, thus concluding the model expresses...
Directory of Open Access Journals (Sweden)
L. Brim
2011-09-01
In this paper, a novel computational technique for the finite discrete approximation of continuous dynamical systems, suitable for a significant class of biochemical dynamical systems, is introduced. The method is parameterized so that the imposed level of approximation can be controlled: with increasing parameter value, the approximation converges to the original continuous system. By employing this approximation technique, we present algorithms solving the reachability problem for biochemical dynamical systems. The presented method and algorithms are evaluated on several exemplary biological models and on a real case study.
Replication, refinement & reachability: complexity in dynamic condition-response graphs
DEFF Research Database (Denmark)
Debois, Søren; Hildebrandt, Thomas T.; Slaats, Tijs
2017-01-01
We explore the complexity of reachability and run-time refinement under safety and liveness constraints in event-based process models. Our study is framed in the DCR? process language, which supports modular specification through a compositional operational semantics. DCR? encompasses the “Dynami...
Reachability analysis for timed automata using max-plus algebra
DEFF Research Database (Denmark)
Lu, Qi; Madsen, Michael; Milata, Martin
2012-01-01
We show that max-plus polyhedra are usable as a data structure in reachability analysis of timed automata. Drawing inspiration from the extensive work that has been done on difference bound matrices, as well as previous work on max-plus polyhedra in other areas, we develop the algorithms needed...
Optimal Conditional Reachability for Multi-Priced Timed Automata
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand; Rasmussen, Jacob Illum
2005-01-01
In this paper, we prove decidability of the optimal conditional reachability problem for multi-priced timed automata, an extension of timed automata with multiple cost variables evolving according to given rates for each location. More precisely, we consider the problem of determining the minimal...
Reachability Trees for High-level Petri Nets
DEFF Research Database (Denmark)
Jensen, Kurt; Jensen, Arne M.; Jepsen, Leif Obel
1986-01-01
...the necessary analysis methods. In other papers it is shown how to generalize the concept of place- and transition invariants from place/transition nets to high-level Petri nets. Our present paper contributes to this with a generalization of reachability trees, which is one of the other important analysis...
Symbolic Reachability for Process Algebras with Recursive Data Types
Blom, Stefan; van de Pol, Jan Cornelis; Fitzgerald, J.S.; Haxthausen, A.E.; Yenigun, H.
2008-01-01
In this paper, we present a symbolic reachability algorithm for process algebras with recursive data types. Like the various saturation-based algorithms of Ciardo et al., the algorithm is based on partitioning of the transition relation into events whose influence is local. As new features, our
Computing Convex Coverage Sets for Faster Multi-Objective Coordination
Roijers, D.M.; Whiteson, S.; Oliehoek, F.A.
2015-01-01
In this article, we propose new algorithms for multi-objective coordination graphs (MO-CoGs). Key to the efficiency of these algorithms is that they compute a convex coverage set (CCS) instead of a Pareto coverage set (PCS). Not only is a CCS a sufficient solution set for a large class of problems,
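To make the distinction concrete: a Pareto coverage set keeps every undominated value vector, while a CCS only keeps vectors that are optimal for some linear weighting of the objectives. A small sketch for two maximized objectives, where the CCS is the Pareto-optimal part of the upper convex hull (illustrative, not the paper's pruning algorithm):

    def ccs_2d(points):
        # Convex coverage set for two maximized objectives: every returned point
        # maximizes w*f1 + (1-w)*f2 for some weight w in [0, 1].
        pts = sorted(set(map(tuple, points)))            # by f1, then f2
        hull = []
        for p in pts:
            # pop while the last hull point lies on or below the new chord,
            # leaving only the upper convex hull
            while len(hull) >= 2:
                (x1, y1), (x2, y2) = hull[-2], hull[-1]
                if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                    hull.pop()
                else:
                    break
            hull.append(p)
        # keep only the Pareto-optimal (descending) part of the hull
        frontier, best_f2 = [], float("-inf")
        for p in reversed(hull):
            if p[1] > best_f2:
                frontier.append(p)
                best_f2 = p[1]
        return frontier[::-1]

    print(ccs_2d([(0, 3), (1, 1), (1, 2.5), (2, 2), (3, 0)]))
    # -> [(0, 3), (2, 2), (3, 0)]; note (1, 2.5) is Pareto-optimal but
    # redundant for every linear weighting, so the CCS drops it.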
Sparse Dataflow Analysis with Pointers and Reachability
DEFF Research Database (Denmark)
Madsen, Magnus; Møller, Anders
2014-01-01
quadtrees. The framework is presented as a systematic modification of a traditional dataflow analysis algorithm. Our experimental results demonstrate the effectiveness of the technique for a suite of JavaScript programs. By also comparing the performance with an idealized staged approach that computes...
Winning Concurrent Reachability Games Requires Doubly-Exponential Patience
DEFF Research Database (Denmark)
Hansen, Kristoffer Arnsfelt; Koucký, Michal; Miltersen, Peter Bro
2009-01-01
We exhibit a deterministic concurrent reachability game PURGATORY_n with n non-terminal positions and a binary choice for both players in every position, so that any positional strategy for Player 1 achieving the value of the game within a given ε ... that are less than (ε^2/(1 − ε))^(2^(n−2)). Also, even to achieve the value within say 1 − 2^(−n/2), doubly exponentially small behavior probabilities in the number of positions must be used. This behavior is close to worst case: we show that for any such game and 0 ... with all non-zero behavior probabilities being at least ε^(2^(O(n))). As a corollary to our results, we conclude that any (deterministic or nondeterministic) algorithm that, given a concurrent reachability game, explicitly manipulates ε-optimal strategies for Player 1 represented in several standard...
Monomial strategies for concurrent reachability games and other stochastic games
DEFF Research Database (Denmark)
Frederiksen, Søren Kristoffer Stiil; Miltersen, Peter Bro
2013-01-01
We consider two-player zero-sum finite (but infinite-horizon) stochastic games with limiting average payoffs. We define a family of stationary strategies for Player I parameterized by ε > 0 to be monomial if, for each state k and each action j of Player I in state k except possibly one action, the probability of playing j in k is given by an expression of the form c·ε^d for some non-negative real number c and some non-negative integer d. We show that for all games, there is a monomial family of stationary strategies that are ε-optimal among stationary strategies. A corollary is that all concurrent reachability games have a monomial family of ε-optimal strategies. This generalizes a classical result of de Alfaro, Henzinger and Kupferman, who showed that this is the case for concurrent reachability games where all states have value 0 or 1.
Mutual proximity graphs for improved reachability in music recommendation.
Flexer, Arthur; Stevens, Jeff
2018-01-01
This paper is concerned with the impact of hubness, a general problem of machine learning in high-dimensional spaces, on a real-world music recommendation system based on visualisation of a k-nearest neighbour (knn) graph. Due to a problem of measuring distances in high dimensions, hub objects are recommended over and over again while anti-hubs are nonexistent in recommendation lists, resulting in poor reachability of the music catalogue. We present mutual proximity graphs, which are an alternative to knn and mutual knn graphs, and are able to avoid hub vertices having abnormally high connectivity. We show that mutual proximity graphs yield much better graph connectivity resulting in improved reachability compared to knn graphs, mutual knn graphs and mutual knn graphs enhanced with minimum spanning trees, while simultaneously reducing the negative effects of hubness.
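A compact sketch of the mutual proximity idea (using the empirical distribution of each object's distances, which is consistent with the usual mutual-proximity definition but not necessarily the authors' exact variant): rescale each distance by how unusual it is from both endpoints' perspectives, then connect each vertex to the neighbours with the highest mutual proximity, which suppresses hub vertices.

    import numpy as np

    def mutual_proximity(D):
        # MP(i, j) = P(d(i, X) > d(i, j)) * P(d(j, X) > d(i, j)):
        # high when both objects consider each other unusually close.
        n = D.shape[0]
        MP = np.zeros_like(D, dtype=float)
        for i in range(n):
            for j in range(i + 1, n):
                p_i = np.mean(D[i] > D[i, j])  # fraction farther from i than j is
                p_j = np.mean(D[j] > D[i, j])
                MP[i, j] = MP[j, i] = p_i * p_j
        return MP

    def mp_graph(D, k=5):
        # Connect each vertex to its k neighbours of highest mutual proximity.
        MP = mutual_proximity(D)
        np.fill_diagonal(MP, -1.0)  # exclude self-edges
        edges = set()
        for i in range(D.shape[0]):
            for j in np.argsort(-MP[i])[:k]:
                edges.add((min(i, int(j)), max(i, int(j))))
        return edges

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 20))  # high-dimensional points, so hubness appears
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    print(len(mp_graph(D)))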
Improved Undecidability Results for Reachability Games on Recursive Timed Automata
Directory of Open Access Journals (Sweden)
Shankara Narayanan Krishna
2014-08-01
We study reachability games on recursive timed automata (RTA) that generalize Alur-Dill timed automata with a recursive procedure invocation mechanism similar to recursive state machines. It is known that deciding the winner in reachability games on RTA is undecidable for automata with two or more clocks, while the problem is decidable for automata with only one clock. Ouaknine and Worrell recently proposed a time-bounded theory of real-time verification, claiming that restriction to bounded time recovers decidability for several key decision problems related to real-time verification. We revisited games on recursive timed automata with the time-bounded restriction in the hope of recovering decidability. However, we found that the problem remains undecidable for recursive timed automata with three or more clocks. Using similar proof techniques we characterize a decidability frontier for a generalization of RTA to recursive stopwatch automata.
Methods for Reachability-based Hybrid Controller Design
2012-05-10
approaches for airport runways (Teo and Tomlin, 2003). The results of the reachability calculations were validated in extensive simulations as well as ... UAV flight experiments (Jang and Tomlin, 2005; Teo, 2005). While the focus of these previous applications lies largely in safety verification, the work ... B([15, 0], a0) × [−π, π]) \ V, ∀ qi ∈ Q, where a0 = 30 m is the protected radius (chosen based upon published data on the wingspan of a Boeing KC-135
Cumulative hierarchies and computability over universes of sets
Directory of Open Access Journals (Sweden)
Domenico Cantone
2008-05-01
Various metamathematical investigations, beginning with Fraenkel's historical proof of the independence of the axiom of choice, called for suitable definitions of hierarchical universes of sets. This led to the discovery of such important cumulative structures as the one singled out by von Neumann (generally taken as the universe of all sets) and Gödel's universe of the so-called constructibles. Variants of those are exploited occasionally in studies concerning the foundations of analysis (according to Abraham Robinson's approach), or concerning non-well-founded sets. We hence offer a systematic presentation of these many structures, partly motivated by their relevance and pervasiveness in mathematics. As we report, numerous properties of hierarchy-related notions such as rank have been verified with the assistance of the ÆtnaNova proof-checker. Through SETL and Maple implementations of procedures which effectively handle Ackermann's hereditarily finite sets, we illustrate a particularly significant case among those in which the entities which form a universe of sets can be algorithmically constructed and manipulated; hereby, the fruitful bearing on pure mathematics of cumulative set hierarchies ramifies into the realms of theoretical computer science and algorithmics.
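One classical device that makes hereditarily finite (HF) sets algorithmically tractable is Ackermann's bijection between HF sets and natural numbers, N(s) = sum over members x of s of 2^N(x). A minimal sketch (Python frozensets stand in for the SETL/Maple procedures mentioned above):

    def ackermann_code(s):
        # N(s) = sum over members x of 2**N(x); the empty set encodes to 0.
        return sum(2 ** ackermann_code(x) for x in s)

    def ackermann_decode(n):
        # Inverse: bit i of n is set exactly when decode(i) is a member.
        members, i = [], 0
        while n:
            if n & 1:
                members.append(ackermann_decode(i))
            n >>= 1
            i += 1
        return frozenset(members)

    empty = frozenset()
    one = frozenset({empty})           # the von Neumann ordinal 1 = {{}}
    two = frozenset({empty, one})      # the ordinal 2 = {{}, {{}}}
    print(ackermann_code(two))         # -> 3
    print(ackermann_decode(3) == two)  # -> True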
Data Sets, Ensemble Cloud Computing, and the University Library (Invited)
Plale, B. A.
2013-12-01
The environmental researcher at the public university has new resources at their disposal to aid in research and publishing. Cloud computing provides compute cycles on demand for analysis and modeling scenarios. Cloud computing is attractive for e-Science because of the ease with which cores can be accessed on demand, and because the virtual machine implementation that underlies cloud computing reduces the cost of porting a numeric or analysis code to a new platform. Many libraries at larger universities are developing the e-Science skills to serve as repositories of record for publishable data sets. But these are confusing times for the publication of data sets from environmental research. The large publishers of scientific literature advocate a process whereby data sets are tightly tied to a publication: a paper published in the scientific literature that gives results based on data must have an associated, accessible data set that backs up the results. This approach supports reproducibility of results in that publishers maintain a repository for the papers they publish and for the data sets those papers used. Does such a solution, mapping one data set (or subset) to one paper, fit the needs of the environmental researcher who, among other things, uses complex models, mines longitudinal databases, and generates observational results? A second school of thought has emerged out of NSF-, NOAA-, and NASA-funded efforts over time: data sets are held coherently at one location, as occurs at the National Snow and Ice Data Center (NSIDC). But when a collection is coherent, reproducibility of individual results is more challenging. We argue for a third, complementary option: the university repository as a location for data sets produced as a result of university-based research. This location for a repository relies on the expertise developing in university libraries across the country, and leverages tools, such as are being developed
The effect of response-delay on estimating reachability.
Gabbard, Carl; Ammar, Diala
2008-11-01
The experiment was conducted to compare visual imagery (VI) and motor imagery (MI) reaching tasks in a response-delay paradigm designed to explore the hypothesized dissociation between vision for perception and vision for action. Although the visual systems work cooperatively in motor control, theory suggests that they operate under different temporal constraints. From this perspective, we expected that delay would affect MI but not VI, because MI operates in real time whereas VI is postulated to be memory-driven. Following measurement of actual reach, right-handers were presented seven (imagery) targets at midline in eight conditions: MI and VI with 0-, 1-, 2-, and 4-s delays. Results indicated that delay affected the ability to estimate reachability with MI but not with VI. These results support a general distinction between vision for perception and vision for action.
Computational Study on a PTAS for Planar Dominating Set Problem
Directory of Open Access Journals (Sweden)
Qian-Ping Gu
2013-01-01
The dominating set problem is a core NP-hard problem in combinatorial optimization and graph theory, and has many important applications. Baker [JACM 41, 1994] introduces a k-outer planar graph decomposition-based framework for designing polynomial time approximation schemes (PTAS) for a class of NP-hard problems in planar graphs. It is mentioned that the framework can be applied to obtain an O(2^(ck) n)-time (c a constant), (1 + 1/k)-approximation algorithm for the planar dominating set problem. We show that the approximation ratio achieved by the mentioned application of the framework is not bounded by any constant for the planar dominating set problem. We modify the application of the framework to give a PTAS for the planar dominating set problem. With k-outer planar graph decompositions, the modified PTAS has an approximation ratio (1 + 2/k). Using 2k-outer planar graph decompositions, the modified PTAS achieves the approximation ratio (1 + 1/k) in O(2^(2ck) n) time. We report a computational study on the modified PTAS. Our results show that the modified PTAS is practical.
Liveness and Reachability Analysis of BPMN Process Models
Directory of Open Access Journals (Sweden)
Anass Rachdi
2016-06-01
Business processes are usually defined by business experts who require intuitive and informal graphical notations such as BPMN (Business Process Model and Notation) for documenting and communicating their organization's activities and behavior. However, BPMN has not been provided with a formal semantics, which limits the analysis of BPMN models to informal techniques such as simulation. In order to address this limitation and use formal verification, it is necessary to define a mapping between BPMN and a formal language such as Communicating Sequential Processes (CSP) or Petri Nets (PN). This paper proposes a method for the verification of BPMN models by defining a formal semantics of BPMN in terms of a mapping to Time Petri Nets (TPN), which are equipped with very efficient analytical techniques. After the translation of BPMN models to TPN, verification is done to ensure that some functional properties are satisfied by the model under investigation, namely liveness and reachability properties. The main advantage of our approach over existing ones is that it takes into account the time component in modeling business process models. An example is used throughout the paper to illustrate the proposed method.
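Once a BPMN model has been translated to a Petri net, reachability amounts to search over the marking graph. A minimal sketch for an untimed place/transition net, omitting the time intervals that TPN adds (so this illustrates only the reachability half of the analysis, under that simplification):

    from collections import deque

    def fire(marking, pre, post):
        # Return the successor marking, or None if the transition is not enabled.
        if all(marking.get(p, 0) >= n for p, n in pre.items()):
            m = dict(marking)
            for p, n in pre.items():
                m[p] -= n
            for p, n in post.items():
                m[p] = m.get(p, 0) + n
            return m
        return None

    def reachable_markings(m0, transitions, limit=100000):
        # Breadth-first search over the marking graph of a place/transition net.
        key = lambda m: frozenset((p, n) for p, n in m.items() if n)
        seen = {key(m0)}
        frontier = deque([m0])
        out = [m0]
        while frontier and len(seen) < limit:
            m = frontier.popleft()
            for pre, post in transitions:
                m2 = fire(m, pre, post)
                if m2 is not None and key(m2) not in seen:
                    seen.add(key(m2))
                    frontier.append(m2)
                    out.append(m2)
        return out

    # A tiny two-step workflow: start -> task -> end, as two transitions.
    transitions = [({"start": 1}, {"task": 1}),
                   ({"task": 1}, {"end": 1})]
    markings = reachable_markings({"start": 1}, transitions)
    print(any(m.get("end") for m in markings))  # the end place is reachable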
Sensory emission rates from personal computers and television sets
DEFF Research Database (Denmark)
Wargocki, Pawel; Bako-Biro, Zsolt; Baginska, S.
2003-01-01
Sensory emissions from personal computers (PCs), PC monitors + PC towers, and television sets (TVs) that had been in operation for 50, 400 and 600 h were assessed by a panel of 48 subjects. One brand of PC tower and four brands of PC monitors were tested. Within each brand, cathode-ray tube (CRT) and thin-film-transistor (TFT) monitors were selected. Two brands of TVs were tested. All brands are prevalent on the world market. The assessments were conducted in low-polluting 40 m³ test offices ventilated with a constant outdoor air change rate of 1.3 ± 0.2 h⁻¹, corresponding to 7 L/s per PC or TV, with two units placed at a time in the test offices; air temperature was controlled at 22 ± 0.1°C and relative humidity at 41 ± 0.5%. The subjects entered the offices individually and immediately assessed the air quality. They did not see the PCs or TVs, which were placed behind a screen, and were...
Payroll. Computer Module for Use in a Mathematics Laboratory Setting.
Barker, Karen; And Others
This is one of a series of computer modules designed for use by secondary students who have access to a computer. The module, designed to help students understand various aspects of payroll calculation, includes a statement of objectives, a time schedule, a list of materials, an outline for each section, and several computer programs. (MK)
Link-Based Similarity Measures Using Reachability Vectors
Directory of Open Access Journals (Sweden)
Seok-Ho Yoon
2014-01-01
We present a novel approach for computing link-based similarities among objects accurately by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link-based similarities that does not suffer from these problems. In the proposed approach each target object is represented by a vector. Each element of the vector corresponds to one of the objects in the given data, and the value of each element denotes the weight for the corresponding object. For this weight value, we propose to utilize the probability of reaching from the target object to the specific object, computed using the “Random Walk with Restart” strategy. Then, we define the similarity between two objects as the cosine similarity of the two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, on two kinds of data sets: scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures.
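A sketch of the two ingredients named above: Random Walk with Restart to obtain each object's reachability vector, and cosine similarity between those vectors. The power iteration, restart probability and toy graph are illustrative choices, not the paper's exact tuning.

    import numpy as np

    def rwr_vector(A, start, restart=0.15, iters=100):
        # Random Walk with Restart from `start`: r = (1-c) * P^T r + c * e.
        # r[j] is the stationary probability of being at j, i.e. the
        # reachability weight of j from the start object.
        P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transitions
        e = np.zeros(A.shape[0]); e[start] = 1.0
        r = e.copy()
        for _ in range(iters):
            r = (1 - restart) * P.T @ r + restart * e
        return r

    def link_similarity(A, i, j):
        # Cosine similarity of the two reachability vectors.
        ri, rj = rwr_vector(A, i), rwr_vector(A, j)
        return float(ri @ rj / (np.linalg.norm(ri) * np.linalg.norm(rj)))

    # A small undirected graph as an adjacency matrix.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(link_similarity(A, 0, 1))  # structurally similar nodes score high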
Windows VPN Set Up | High-Performance Computing | NREL
To set up Windows for HPC VPN, download the Connect app for your version of Windows (e.g., Windows 10). Note: we only support the Endian Connect software when making a VPN connection to the HPC systems.
Invariant set computation for constrained uncertain discrete-time systems
Athanasopoulos, N.; Bitsoris, G.
2010-01-01
In this article a novel approach to the determination of polytopic invariant sets for constrained discrete-time linear uncertain systems is presented. First, the problem of stabilizing a prespecified initial condition set in the presence of input and state constraints is addressed. Second, the
Directory of Open Access Journals (Sweden)
Eric Psota
2010-01-01
The error mechanisms of iterative message-passing decoders for low-density parity-check codes are studied. A tutorial review is given of the various graphical structures, including trapping sets, stopping sets, and absorbing sets that are frequently used to characterize the errors observed in simulations of iterative decoding of low-density parity-check codes. The connections between trapping sets and deviations on computation trees are explored in depth using the notion of problematic trapping sets in order to bridge the experimental and analytic approaches to these error mechanisms. A new iterative algorithm for finding low-weight problematic trapping sets is presented and shown to be capable of identifying many trapping sets that are frequently observed during iterative decoding of low-density parity-check codes on the additive white Gaussian noise channel. Finally, a new method is given for characterizing the weight of deviations that result from problematic trapping sets.
Computing autocatalytic sets to unravel inconsistencies in metabolic network reconstructions
DEFF Research Database (Denmark)
Schmidt, R.; Waschina, S.; Boettger-Schmidt, D.
2015-01-01
... by inherent inconsistencies and gaps. RESULTS: Here we present a novel method to validate metabolic network reconstructions based on the concept of autocatalytic sets. Autocatalytic sets correspond to collections of metabolites that, besides enzymes and a growth medium, are required to produce all biomass components in a metabolic model. These autocatalytic sets are well-conserved across all domains of life, and their identification in specific genome-scale reconstructions allows us to draw conclusions about potential inconsistencies in these models. The method is capable of detecting inconsistencies, which ... the method we report represents a powerful tool to identify inconsistencies in large-scale metabolic networks. AVAILABILITY AND IMPLEMENTATION: The method is available as source code on http://users.minet.uni-jena.de/~m3kach/ASBIG/ASBIG.zip. CONTACT: christoph.kaleta@uni-jena.de. SUPPLEMENTARY...
Directory of Open Access Journals (Sweden)
Risberg Daniel
2017-01-01
In this paper CFD was used for simulation of the indoor climate in a part of a low-energy building. The focus of the work was on investigating the computational set-up, such as grid size and boundary conditions, in order to solve the indoor climate problems in an accurate way. Future work is to model a complete building with reasonable calculation time and accuracy; a limited number of grid elements and knowledge of boundary settings are therefore essential. A grid edge size of around 0.1 m was enough to predict the climate according to a grid independency study. Different turbulence models were compared, with only small differences in the indoor air velocities and temperatures. The models show that radiation between building surfaces has a large impact on the temperature field inside the building, with the largest differences at floor level. Simplifying the simulations by modelling the radiator as a surface in the outer wall of the room is appropriate for the calculations. The overall indoor climate is finally compared between three different cases for the outdoor air temperature. The results show a good indoor climate for a low-energy building all year round.
A Memory and Computation Efficient Sparse Level-Set Method
Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.
Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the
The Reach-and-Evolve Algorithm for Reachability Analysis of Nonlinear Dynamical Systems
P.J. Collins (Pieter); A. Goldsztejn
2008-01-01
This paper introduces a new algorithm dedicated to the rigorous reachability analysis of nonlinear dynamical systems. The algorithm is initially presented in the context of discrete time dynamical systems, and then extended to continuous time dynamical systems driven by ODEs. In
A Forward Reachability Algorithm for Bounded Timed-Arc Petri Nets
DEFF Research Database (Denmark)
David, Alexandre; Jacobsen, Lasse; Jacobsen, Morten
2012-01-01
Timed-arc Petri nets (TAPN) are a well-known time extension of the Petri net model, and several translations to networks of timed automata have been proposed for this model. We present a direct, DBM-based algorithm for forward reachability analysis of bounded TAPNs extended with transport arcs...
Testing reachability and stabilizability of systems over polynomial rings using Gröbner bases
Habets, L.C.G.J.M.
1993-01-01
Conditions for the reachability and stabilizability of systems over polynomial rings are well-known in the literature. For a system $\Sigma = (A,B)$ they can be expressed as right-invertibility conditions on the matrix $(zI - A \mid B)$. Therefore there is quite a strong algebraic relationship
The Role of Haptic Exploration of Ground Surface Information in Perception of Overhead Reachability
Pepping, Gert-Jan; Li, Francois-Xavier
2008-01-01
The authors performed an experiment in which participants (N = 24) made judgments about maximum jump and reachability on ground surfaces with different elastic properties: sand and a trampoline. Participants performed judgments in two conditions: (a) while standing and after having recently jumped
A reachability test for systems over polynomial rings using Gröbner bases
Habets, L.C.G.J.M.
1992-01-01
Conditions for the reachability of a system over a polynomial ring are well known in the literature. However, the verification of these conditions remained a difficult problem in general. Application of the Gröbner Basis method from constructive commutative algebra makes it possible to carry out
Computing Preferred Extensions for Argumentation Systems with Sets of Attacking Arguments
DEFF Research Database (Denmark)
Nielsen, Søren Holbech; Parsons, Simon
2006-01-01
The hitherto most abstract, and hence general, argumentation system is the one described by Dung in a paper from 1995. This framework does not allow for joint attacks on arguments, but in a recent paper we adapted it to support such attacks, and proved that this adapted framework enjoyed the same formal properties as that of Dung. One problem posed by Dung's original framework, which was neglected for some time, is how to compute preferred extensions of the argumentation systems. However, in 2001, in a paper by Doutre and Mengin, a procedure was given for enumerating preferred extensions for these systems. In this paper we propose a method for enumerating preferred extensions of the potentially more complex systems where joint attacks are allowed. The method is inspired by the one given by Doutre and Mengin.
Evaluation of Secure Computation in a Distributed Healthcare Setting.
Kimura, Eizen; Hamada, Koki; Kikuchi, Ryo; Chida, Koji; Okamoto, Kazuya; Manabe, Shirou; Kuroda, Tomohiko; Matsumura, Yasushi; Takeda, Toshihiro; Mihara, Naoki
2016-01-01
Issues related to ensuring patient privacy and data ownership in clinical repositories prevent the growth of translational research. Previous studies have used an aggregator agent to obscure clinical repositories from the data user, and to ensure the privacy of output using statistical disclosure control. However, there remain several issues that must be considered. One such issue is that a data breach may occur when multiple nodes conspire. Another is that the agent may eavesdrop on or leak a user's queries and their results. We have implemented a secure computing method so that the data used by each party can be kept confidential even if all of the other parties conspire to crack the data. We deployed our implementation at three geographically distributed nodes connected to a high-speed layer two network. The performance of our method, with respect to processing times, suggests suitability for practical use.
On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems
Junge, Oliver; Kevrekidis, Ioannis G.
2017-06-01
We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.
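A minimal sketch of this variational idea for a discrete-time map: pick N free points and minimize the total squared distance between each image f(x_i) and its nearest point in the set, which vanishes exactly on an invariant set. The Hénon map, the optimizer choice, and the omission of the Lennard-Jones spreading term are all simplifications made for illustration.

    import numpy as np
    from scipy.optimize import minimize

    def henon(P, a=1.4, b=0.3):
        x, y = P[:, 0], P[:, 1]
        return np.column_stack([1.0 - a * x**2 + y, b * x])

    def objective(flat, n):
        # Sum over points of the squared distance from f(x_i) to the nearest
        # x_j; it is zero exactly when the point set is invariant under f.
        P = flat.reshape(n, 2)
        F = henon(P)
        d2 = ((F[:, None, :] - P[None, :, :]) ** 2).sum(axis=-1)
        return d2.min(axis=1).sum()

    n = 30
    rng = np.random.default_rng(1)
    P0 = rng.uniform([-1.5, -0.4], [1.5, 0.4], size=(n, 2))
    res = minimize(objective, P0.ravel(), args=(n,), method="Powell",
                   options={"maxiter": 50000})
    print(res.fun)  # near zero => the points approximate an invariant set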
Directory of Open Access Journals (Sweden)
Hongliang Zhu
2018-01-01
With the development of cloud computing, its low cost and high computational capacity meet the demands of the complicated computations of multimedia processing. Outsourcing computation to the cloud enables users with limited computing resources to store and process distributed multimedia application data without installing multimedia application software on local computer terminals, but the main problem is how to protect the security of user data in untrusted public cloud services. In recent years, privacy-preserving outsourced computation has been one of the most common methods to solve the security problems of cloud computing. However, existing schemes cannot meet the needs of large numbers of nodes and dynamic topologies. In this paper, we introduce a novel privacy-preserving outsourced computation method which combines the Goldwasser-Micali (GM) homomorphic encryption scheme and a Bloom filter to solve this problem, and propose a new privacy-preserving outsourced set intersection computation protocol. Results show that the new protocol resolves the privacy-preserving outsourced set intersection computation problem without increasing the complexity or the false positive probability. Besides, the number of participants, the size of the input secret sets, and the online time of participants are not limited.
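The protocol combines GM encryption, which is XOR-homomorphic on encrypted bits, with Bloom filters, so a party can operate on filter bits without seeing them. A much-simplified plaintext sketch of only the Bloom-filter side of a set intersection follows; the GM layer and the actual protocol messages are omitted, and all parameters are illustrative.

    import hashlib

    class BloomFilter:
        def __init__(self, m=1024, k=4):
            self.m, self.k, self.bits = m, k, [0] * m

        def _positions(self, item):
            # k independent hash positions derived from SHA-256.
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, item):
            for p in self._positions(item):
                self.bits[p] = 1

        def might_contain(self, item):
            return all(self.bits[p] for p in self._positions(item))

    # Party A encodes its set; party B tests its own items against the filter.
    # (In the real protocol the filter bits would be GM-encrypted so that A's
    # set stays hidden; false positives are controlled by the m/k choice.)
    a = BloomFilter()
    for item in ["alice", "bob", "carol"]:
        a.add(item)
    print([x for x in ["bob", "dave", "carol"] if a.might_contain(x)])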
Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation
DEFF Research Database (Denmark)
Christensen, Brian Bunch; Nielsen, Michael Bang; Museth, Ken
2012-01-01
We propose a storage efficient, fast and parallelizable out-of-core framework for streaming computations of high resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations, which allow for the combination of interface propagation, re... computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations.
Cascading effect of contagion in Indian stock market: Evidence from reachable stocks
Directory of Open Access Journals (Sweden)
Rajan Sruthi
2017-12-01
The financial turbulence in a country percolates to another along the trajectories of reachable stocks owned by foreign investors. To recoup the losses originating in the crisis country, foreign investors dispose of shares in other markets, triggering contagion in otherwise unrelated markets. This paper provides empirical evidence for the stock market crisis that spreads globally through investors owning international portfolios, with special reference to the global financial crisis of 2008-09. Using two-step Limited Information Maximum Likelihood estimation and the Murphy-Topel variance estimate, the results show that reachability plays a crucial role in the transposal of distress from one country to another, explaining investor-induced contagion in the Indian stock market.
Approximating the Value of a Concurrent Reachability Game in the Polynomial Time Hierarchy
DEFF Research Database (Denmark)
Frederiksen, Søren Kristoffer Stiil; Miltersen, Peter Bro
2013-01-01
We show that the value of a finite-state concurrent reachability game can be approximated to arbitrary precision in TFNP[NP], that is, in the polynomial time hierarchy. Previously, no better bound than PSPACE was known for this problem. The proof is based on formulating a variant of the state reduction algorithm for Markov chains using arbitrary precision floating point arithmetic and giving a rigorous error analysis of the algorithm.
Jung, Ji-Young; Seo, Dong-Yoon; Lee, Jung-Ryun
2018-01-04
A wireless sensor network (WSN) is emerging as an innovative method for gathering information that will significantly improve the reliability and efficiency of infrastructure systems. Broadcast is a common method to disseminate information in WSNs. A variety of counter-based broadcast schemes have been proposed to mitigate broadcast-storm problems, using a count threshold value and a random access delay. However, because of the limited propagation of the broadcast message, there exists a trade-off: redundant retransmissions of the broadcast message become low and the energy efficiency of a node is enhanced, but reachability becomes low. Therefore, it is necessary to study an efficient counter-based broadcast scheme that can dynamically adjust the random access delay and count threshold value to ensure high reachability, low redundancy of broadcast messages, and low energy consumption of nodes. Thus, in this paper, we first measure the additional coverage provided by a node that receives the same broadcast message from two neighbor nodes, in order to achieve high reachability with low redundant retransmissions. Second, we propose a new counter-based broadcast scheme considering the size of the additional coverage area, the distance between the node and the broadcasting node, the remaining battery of the node, and variations in node density. Finally, we evaluate the performance of the proposed scheme in comparison with existing counter-based broadcast schemes. Simulation results show that the proposed scheme outperforms the existing schemes in terms of saved rebroadcasts, reachability, and total energy consumption.
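The core of any counter-based scheme: on first reception a node starts a random assessment delay, counts duplicate receptions, and rebroadcasts at the deadline only if the counter stays below the threshold. The proposal above additionally adapts delay and threshold to coverage, distance, battery and density; the sketch below is only the fixed-threshold baseline, with illustrative parameters and topology.

    import random, heapq

    def simulate(neighbors, source, threshold=3, max_delay=1.0):
        # Event-driven counter-based flooding on a static graph.
        counts = {}
        pq = [(0.0, "recv", source)]
        rebroadcasts = 0
        while pq:
            t, kind, node = heapq.heappop(pq)
            if kind == "recv":
                if node not in counts:
                    counts[node] = 1
                    # start the random assessment delay on first reception
                    heapq.heappush(pq, (t + random.uniform(0, max_delay),
                                        "decide", node))
                else:
                    counts[node] += 1       # duplicate heard while waiting
            elif counts[node] < threshold:  # kind == "decide"
                rebroadcasts += 1
                for nb in neighbors[node]:
                    heapq.heappush(pq, (t + 0.01, "recv", nb))  # propagation
        return set(counts), rebroadcasts

    # A 10-node ring with chords; reachability = fraction of nodes reached.
    neighbors = {i: [(i - 1) % 10, (i + 1) % 10, (i + 3) % 10] for i in range(10)}
    reached, tx = simulate(neighbors, source=0)
    print(len(reached) / 10, tx)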
Splitting Computation of Answer Set Program and Its Application on E-service
Directory of Open Access Journals (Sweden)
Bo Yang
2011-10-01
As a primary means for representing and reasoning about knowledge, Answer Set Programming (ASP) has been applied in many areas such as planning, decision making, fault diagnosis and the increasingly prevalent e-services. Based on the stable model semantics of logic programming, ASP can be used to solve various combinatorial search problems by finding the answer sets of logic programs which declaratively describe the problems. It is not an easy task to compute answer sets of a logic program using Gelfond and Lifschitz's definition directly. In this paper, we show some results on the characterization of answer sets of a logic program with constraints, and propose a way to split a program into several non-intersecting parts step by step, so that the computation of answer sets for every subprogram becomes relatively easy. To instantiate our splitting computation theory, an example about personalized product configuration in e-retailing is given to show the effectiveness of our method.
On the existence of continuous selections of solution and reachable ...
African Journals Online (AJOL)
We prove that the map that associates to the initial value the set of solutions to the Lipschitzian Quantum Stochastic Differential Inclusion (QSDI) admits a selection that is continuous from the locally convex space of stochastic processes to the adapted and weakly absolutely continuous space of solutions. As a corollary, we show ...
Spielberg, Freya; Kurth, Ann E; Severynen, Anneleen; Hsieh, Yu-Hsiang; Moring-Parris, Daniel; Mackenzie, Sara; Rothman, Richard
2011-06-01
Providers in emergency care settings (ECSs) often face barriers to expanded HIV testing. We undertook formative research to understand the potential utility of a computer tool, "CARE," to facilitate rapid HIV testing in ECSs. Computer tool usability and acceptability were assessed among 35 adult patients, and provider focus groups were held, in two ECSs in Washington State and Maryland. The computer tool was usable by patients of varying computer literacy. Patients appreciated the tool's privacy and lack of judgment and their ability to reflect on HIV risks and create risk reduction plans. Staff voiced concerns regarding ECS-based HIV testing generally, including resources for follow-up of newly diagnosed people. Computer-delivered HIV testing support was acceptable and usable among low-literacy populations in two ECSs. Such tools may help circumvent some practical barriers associated with routine HIV testing in busy settings though linkages to care will still be needed.
Theory and computation of disturbance invariant sets for discrete-time linear systems
Directory of Open Access Journals (Sweden)
Kolmanovsky Ilya
1998-01-01
This paper considers the characterization and computation of invariant sets for discrete-time, time-invariant, linear systems with disturbance inputs whose values are confined to a specified compact set but are otherwise unknown. The emphasis is on determining maximal disturbance-invariant sets X that belong to a specified subset Γ of the state space. Such d-invariant sets have important applications in control problems where there are pointwise-in-time state constraints of the form x(t) ∈ Γ. One purpose of the paper is to unite and extend in a rigorous way disparate results from the prior literature. In addition there are entirely new results. Specific contributions include: exploitation of the Pontryagin set difference to clarify conceptual matters and simplify mathematical developments, special properties of maximal invariant sets and conditions for their finite determination, algorithms for generating concrete representations of maximal invariant sets, practical computational questions, extension of the main results to general Lyapunov stable systems, and applications of the computational techniques to the bounding of state and output response. Results on Lyapunov stable systems are applied to the implementation of a logic-based, nonlinear multimode regulator. For plants with disturbance inputs and state-control constraints it enlarges the constraint-admissible domain of attraction. Numerical examples illustrate the various theoretical and computational results.
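For the special case of axis-aligned boxes and a diagonal system matrix with 0 < a_i < 1, the fixed-point iteration for the maximal d-invariant set inside Γ collapses to a few lines: X_{k+1} = Γ ∩ A^{-1}(X_k ⊖ W), where ⊖ is the Pontryagin set difference mentioned above. A sketch under exactly those simplifying assumptions (general polytopes need an LP-based implementation):

    def max_invariant_box(a, w, gamma, iters=100, tol=1e-9):
        # Maximal disturbance-invariant box for x+ = a_i * x_i + w_i
        # (componentwise, 0 < a_i < 1 assumed), disturbance |w_i| <= w[i],
        # state constraint |x_i| <= gamma[i]. Starts from X_0 = Gamma.
        lo = [-g for g in gamma]
        hi = list(gamma)
        for _ in range(iters):
            new_lo, new_hi = [], []
            for i in range(len(a)):
                l, u = lo[i] + w[i], hi[i] - w[i]  # Pontryagin difference X ⊖ W
                if l > u:
                    return None                    # no invariant set inside Gamma
                l, u = l / a[i], u / a[i]          # preimage under x -> a_i * x
                new_lo.append(max(l, -gamma[i]))   # intersection with Gamma
                new_hi.append(min(u, gamma[i]))
            if all(abs(x - y) < tol for x, y in zip(lo + hi, new_lo + new_hi)):
                return new_lo, new_hi              # finitely determined
            lo, hi = new_lo, new_hi
        return lo, hi

    # Here Gamma itself is invariant; larger disturbances would empty the set.
    print(max_invariant_box(a=[0.5, 0.9], w=[0.1, 0.05], gamma=[1.0, 1.0]))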
TimeSet: A computer program that accesses five atomic time services on two continents
Petrakis, P. L.
1993-01-01
TimeSet is a shareware program for accessing digital time services by telephone. At its initial release, it was capable of capturing time signals only from the U.S. Naval Observatory to set a computer's clock. Later the ability to synchronize with the National Institute of Standards and Technology was added. Now, in Version 7.10, TimeSet is able to access three additional telephone time services in Europe - in Sweden, Austria, and Italy - making a total of five official services addressable by the program. A companion program, TimeGen, allows yet another source of telephone time data strings for callers equipped with TimeSet version 7.10. TimeGen synthesizes UTC time data strings in the Naval Observatory's format from an accurately set and maintained DOS computer clock, and transmits them to callers. This allows an unlimited number of 'freelance' time generating stations to be created. Timesetting from TimeGen is made feasible by the advent of Becker's RighTime, a shareware program that learns the drift characteristics of a computer's clock and continuously applies a correction to keep it accurate, and also brings 0.01-second resolution to the DOS clock. With clock regulation by RighTime and periodic update calls by the TimeGen station to an official time source via TimeSet, TimeGen offers the same degree of accuracy within the resolution of the computer clock as any official atomic time source.
Analytical bounds on SET charge sensitivity for qubit readout in a solid-state quantum computer
International Nuclear Information System (INIS)
Green, F.; Buehler, T.M.; Brenner, R.; Hamilton, A.R.; Dzurak, A.S.; Clark, R.G.
2002-01-01
Quantum computing promises processing powers orders of magnitude beyond what is possible in conventional silicon-based computers. It harnesses the laws of quantum mechanics directly, exploiting the in-built potential of a wave function for massively parallel information processing. Highly ordered and scaleable arrays of single donor atoms (quantum bits, or qubits), embedded in Si, are especially promising; they are a very natural fit to the existing, highly sophisticated Si industry. The success of Si-based quantum computing depends on precisely initializing the quantum state of each qubit, and on precisely reading out its final form. In the Kane architecture the qubit states are read out by detecting the spatial distribution of the donor's electron cloud using a sensitive electrometer. The single-electron transistor (SET) is an attractive candidate readout device for this, since the capacitive, or charging, energy of a SET's metallic central island is exquisitely sensitive to its electronic environment. Use of SETs as high-performance electrometers is therefore a key technology for data transfer in a solid-state quantum computer. We present an efficient analytical method to obtain bounds on the charge sensitivity of a single-electron transistor. Our classic Green-function analysis provides reliable estimates of SET sensitivity, optimizing the design of the readout hardware. Typical calculations, and their physical meaning, are discussed. We compare them with the measured SET-response data.
Computing the Pareto-Nash equilibrium set in finite multi-objective mixed-strategy games
Directory of Open Access Journals (Sweden)
Victoria Lozan
2013-10-01
The Pareto-Nash equilibrium set (PNES) is described as the intersection of graphs of efficient response mappings. The problem of computing the PNES in finite multi-objective mixed-strategy games (Pareto-Nash games) is considered. A method for PNES computation is studied. Mathematics Subject Classification 2010: 91A05, 91A06, 91A10, 91A43, 91A44.
Robust fault detection of linear systems using a computationally efficient set-membership method
DEFF Research Database (Denmark)
Tabatabaeipour, Mojtaba; Bak, Thomas
2014-01-01
In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measurements ... is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods.
Solving large sets of coupled equations iteratively by vector processing on the CYBER 205 computer
International Nuclear Information System (INIS)
Tolsma, L.D.
1985-01-01
The set of coupled linear second-order differential equations which has to be solved for the quantum-mechanical description of inelastic scattering of atomic and nuclear particles can be rewritten as an equivalent set of coupled integral equations. When some type of function is used as a piecewise analytic reference solution, the integrals that arise in this set can be evaluated analytically. The set of integral equations can be solved iteratively. For the results mentioned, an inward-outward iteration scheme has been applied. A concept for the vectorization of coupled-channel Fortran programs, based on this integral method, is presented for use on the CYBER 205 computer. It turns out that, for two heavy-ion nuclear scattering test cases, this vector algorithm gives an overall speed-up of about a factor of 2 to 3 compared to a highly optimized scalar algorithm on a one-vector-pipeline computer.
Re-verification of a Lip Synchronization Protocol using Robust Reachability
Directory of Open Access Journals (Sweden)
Piotr Kordy
2010-03-01
The timed automata formalism is an important model for specifying and analysing real-time systems. Robustness is the correctness of the model in the presence of small drifts on clocks or imprecision in testing guards. A symbolic algorithm for the analysis of the robustness of timed automata has been implemented. In this paper, we re-analyse an industrial case, a lip synchronization protocol, using the new robust reachability algorithm. This protocol is an interesting case because timing aspects are crucial for its correctness. Several versions of the model are considered: with an ideal video stream, with anchored jitter, and with non-anchored jitter.
Female Students' Experiences of Computer Technology in Single- versus Mixed-Gender School Settings
Burke, Lee-Ann; Murphy, Elizabeth
2006-01-01
This study explores how female students compare learning computer technology in a single- versus a mixed- gender school setting. Twelve females participated, all of whom were enrolled in a grade 12 course in Communications' Technology. Data collection included a questionnaire, a semi-structured interview and focus groups. Participants described…
The Potential of Computer-Based Expert Systems for Special Educators in Rural Settings.
Parry, James D.; Ferrara, Joseph M.
Knowledge-based expert computer systems are addressing issues relevant to all special educators, but are particularly relevant in rural settings where human experts are less available because of distance and cost. An expert system is an application of artificial intelligence (AI) that typically engages the user in a dialogue resembling the…
Barriers to the Integration of Computers in Early Childhood Settings: Teachers' Perceptions
Nikolopoulou, Kleopatra; Gialamas, Vasilis
2015-01-01
This study investigated teachers' perceptions of barriers to using - integrating computers in early childhood settings. A 26-item questionnaire was administered to 134 early childhood teachers in Greece. Lack of funding, lack of technical and administrative support, as well as inadequate training opportunities were among the major perceived…
Iachini, Tina; Ruggiero, Gennaro; Ruotolo, Francesco; Schiano di Cola, Armando; Senese, Vincenzo Paolo
2015-09-01
Although the effects of several personality factors on interpersonal space (i.e. social space within the personal comfort area) are well documented, it is not clear whether they also extend to peripersonal space (i.e. reaching space). Indeed, no study has directly compared these spaces in relation to personality and anxiety factors, even though such a comparison would help to clarify to what extent they share similar mechanisms and characteristics. The aim of the present paper was to investigate whether personality dimensions and anxiety levels are associated with reaching and comfort distances. Seventy university students (35 females) were administered the Big Five Questionnaire and the State-Trait Anxiety Inventory; afterwards, they had to provide reachability- and comfort-distance judgments towards human confederates while standing still (passive) or walking towards them (active). The correlation analyses showed that both spaces were positively related to anxiety and negatively correlated with Dynamism in the active condition. Moreover, in the passive condition higher Emotional Stability was related to shorter comfort distance, while higher cognitive Openness was associated with shorter reachability distance. The implications of these results are discussed.
DEFF Research Database (Denmark)
Iris, Cagatay; Pacino, Dario; Røpke, Stefan
2015-01-01
Most of the operational problems in container terminals are strongly interconnected. In this paper, we study the integrated Berth Allocation and Quay Crane Assignment Problem in seaport container terminals. We extend the current state of the art by proposing novel set partitioning models. To improve the performance of the set partitioning formulations, a number of variable reduction techniques are proposed. Furthermore, we analyze the effects of different discretization schemes and the impact of using a time-variant/invariant quay crane allocation policy. Computational experiments show...
Mobile computing in the humanitarian assistance setting: an introduction and some first steps.
Selanikio, Joel D; Kemmer, Teresa M; Bovill, Maria; Geisler, Karen
2002-04-01
We developed a Palm operating system-based handheld computer system for administering nutrition questionnaires and used it to gather nutritional information among the Burmese refugees in the Mae La refugee camp on the Thai-Burma border. Our experience demonstrated that such technology can be easily adapted to such an austere setting and used to great advantage. Further, the technology showed tremendous potential to reduce both the time required and the errors commonly encountered when field staff collect information in the humanitarian setting. We also identified several areas needing further development.
Zaleśny, Robert; Baranowska-Łączkowska, Angelika; Medveď, Miroslav; Luis, Josep M
2015-09-08
In the present work, we perform an assessment of several property-oriented atomic basis sets in computing (hyper)polarizabilities with a focus on the vibrational contributions. Our analysis encompasses the Pol and LPol-ds basis sets of Sadlej and co-workers, the def2-SVPD and def2-TZVPD basis sets of Rappoport and Furche, and the ORP basis set of Baranowska-Łączkowska and Łączkowski. Additionally, we use the d-aug-cc-pVQZ and aug-cc-pVTZ basis sets of Dunning and co-workers to determine the reference estimates of the investigated electric properties for small- and medium-sized molecules, respectively. We combine these basis sets with ab initio post-Hartree-Fock quantum-chemistry approaches (including the coupled cluster method) to calculate electronic and nuclear relaxation (hyper)polarizabilities of carbon dioxide, formaldehyde, cis-diazene, and a medium-sized Schiff base. The primary finding of our study is that, among all studied property-oriented basis sets, only the def2-TZVPD and ORP basis sets yield nuclear relaxation (hyper)polarizabilities of small molecules with average absolute errors less than 5.5%. A similar accuracy for the nuclear relaxation (hyper)polarizabilities of the studied systems can also be reached using the aug-cc-pVDZ basis set (5.3%), although for more accurate calculations of vibrational contributions, i.e., average absolute errors less than 1%, the aug-cc-pVTZ basis set is recommended. It was also demonstrated that anharmonic contributions to first and second hyperpolarizabilities of a medium-sized Schiff base are particularly difficult to accurately predict at the correlated level using property-oriented basis sets. For instance, the value of the nuclear relaxation first hyperpolarizability computed at the MP2/def2-TZVPD level of theory is roughly 3 times larger than that determined using the aug-cc-pVTZ basis set. We link the failure of the def2-TZVPD basis set with the difficulties in predicting the first-order field
3D-CT vascular setting protocol using computer graphics for the evaluation of maxillofacial lesions
Directory of Open Access Journals (Sweden)
CAVALCANTI Marcelo de Gusmão Paraiso
2001-01-01
In this paper we present the aspect of a mandibular giant cell granuloma in spiral computed tomography-based three-dimensional (3D-CT) reconstructed images using computer graphics, and demonstrate the importance of the vascular protocol in permitting better diagnosis, visualization and determination of the dimensions of the lesion. We analyzed 21 patients with maxillofacial lesions of neoplastic and proliferative origins. Two oral and maxillofacial radiologists analyzed the images. The usefulness of interactive 3D images reconstructed by means of computer graphics, especially using a vascular setting protocol for qualitative and quantitative analyses for the diagnosis, determination of the extent of lesions, treatment planning and follow-up, was demonstrated. The technique is an important adjunct to the evaluation of lesions in relation to axial CT slices and 3D-CT bone images.
3D-CT vascular setting protocol using computer graphics for the evaluation of maxillofacial lesions.
Cavalcanti, M G; Ruprecht, A; Vannier, M W
2001-01-01
In this paper we present the aspect of a mandibular giant cell granuloma in spiral computed tomography-based three-dimensional (3D-CT) reconstructed images using computer graphics, and demonstrate the importance of the vascular protocol in permitting better diagnosis, visualization and determination of the dimensions of the lesion. We analyzed 21 patients with maxillofacial lesions of neoplastic and proliferative origins. Two oral and maxillofacial radiologists analyzed the images. The usefulness of interactive 3D images reconstructed by means of computer graphics, especially using a vascular setting protocol for qualitative and quantitative analyses for the diagnosis, determination of the extent of lesions, treatment planning and follow-up, was demonstrated. The technique is an important adjunct to the evaluation of lesions in relation to axial CT slices and 3D-CT bone images.
Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments
Lane, Peter C. R.; Gobet, Fernand
2013-03-01
Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the 'speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
International Nuclear Information System (INIS)
Ploeger, Lennert S.; Betgen, Anja; Gilhuijs, Kenneth G.A.; Herk, Marcel van
2003-01-01
Background and purpose: Body contours can potentially be used for patient set-up verification in external-beam radiotherapy and might enable more accurate set-up of patients prior to irradiation. The aim of this study is to test the feasibility of patient set-up verification using a body contour scanner. Material and methods: Body contour scans of 33 lung cancer and 21 head-and-neck cancer patients were acquired on a simulator. We assume that this dataset is representative for the patient set-up on an accelerator. Shortly before acquisition of the body contour scan, a pair of orthogonal simulator images was taken as a reference. Both the body contour scan and the simulator images were matched in 3D to the planning computed tomography scan. Movement of skin with respect to bone was quantified based on an analysis of variance method. Results: Set-up errors determined with body-contours agreed reasonably well with those determined with simulator images. For the lung cancer patients, the average set-up errors (mm)±1 standard deviation (SD) for the left-right, cranio-caudal and anterior-posterior directions were 1.2±2.9, -0.8±5.0 and -2.3±3.1 using body contours, compared to -0.8±3.2, -1.0±4.1 and -1.2±2.4 using simulator images. For the head-and-neck cancer patients, the set-up errors were 0.5±1.8, 0.5±2.7 and -2.2±1.8 using body contours compared to -0.4±1.2, 0.1±2.1, -0.1±1.8 using simulator images. The SD of the set-up errors obtained from analysis of the body contours were not significantly different from those obtained from analysis of the simulator images. Movement of the skin with respect to bone (1 SD) was estimated at 2.3 mm for lung cancer patients and 1.7 mm for head-and-neck cancer patients. Conclusion: Measurement of patient set-up using a body-contouring device is possible. The accuracy, however, is limited by the movement of the skin with respect to the bone. In situations where the error in the patient set-up is relatively large, it is
Accurate Computation of Periodic Regions' Centers in the General M-Set with Integer Index Number
Directory of Open Access Journals (Sweden)
Wang Xingyuan
2010-01-01
This paper presents two methods for accurately computing the centers of periodic regions. One method applies to general M-sets with integer index number, the other to general M-sets with negative integer index number. Both methods improve the precision of the computation by transforming the polynomial equations which determine the periodic regions' centers. We primarily discuss the general M-sets with negative integer index, and analyze the relationship between the number of periodic regions' centers on the principal symmetric axis and in the principal symmetric interior. We can obtain the centers' coordinates with at least 48 significant digits after the decimal point in both real and imaginary parts by applying Newton's method to the transformed polynomial equation which determines the periodic regions' centers. In this paper, we list some centers' coordinates of the general M-sets' k-periodic regions (k = 3, 4, 5, 6) for the index numbers α = −25, −24, …, −1, all of which have high numerical accuracy.
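For the standard Mandelbrot set (index 2), the period-k centers are the roots c of p_k(c) = 0, where p_1(c) = c and p_{k+1}(c) = p_k(c)^2 + c. A sketch of the Newton refinement of such a center follows; this is the standard technique in double precision, whereas the paper works with generalized M-sets, transformed polynomials, and arbitrary-precision arithmetic (e.g., mpmath would be needed to reach 48 digits).

    def center_newton(c0, k, iters=60):
        # Newton's method on p_k(c), with p_1 = c and p_{k+1} = p_k**2 + c;
        # roots of p_k are centers of period-k components of the Mandelbrot set.
        c = complex(c0)
        for _ in range(iters):
            p, dp = c, 1.0  # p_1 = c, and d(p_1)/dc = 1
            for _ in range(k - 1):
                p, dp = p * p + c, 2 * p * dp + 1  # chain rule on p^2 + c
            if dp == 0:
                break
            c -= p / dp
        return c

    # Refine the period-3 center near the upper bulb of the Mandelbrot set.
    print(center_newton(-0.1 + 0.75j, k=3))  # ~ -0.122561 + 0.744862j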
Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra
2006-01-01
Ubiquitous computing requires ubiquitous access to information and knowledge. With the release of openEHR Version 1.0 there is a common model available to solve some of the problems related to accessing information and knowledge, by improving semantic interoperability between clinical systems. Considerable work has been undertaken by various bodies to standardise Clinical Data Sets. Notwithstanding their value, several problems remain unsolved with Clinical Data Sets without a common model underpinning them. This paper outlines these problems, such as incompatible basic data types and overlapping, incompatible definitions of clinical content. A solution based on openEHR archetypes is motivated, and an approach to transform existing Clinical Data Sets into archetypes is presented. To avoid significant overlaps and unnecessary effort during archetype development, archetype development needs to be coordinated nationwide and beyond, and also across the various health professions, in a formalized process.
Transportable GPU (General Processor Units) chip set technology for standard computer architectures
Fosdick, R. E.; Denison, H. C.
1982-11-01
The USAFR-developed GPU Chip Set has been utilized by Tracor to implement both USAF and Navy Standard 16-Bit Airborne Computer Architectures. Both configurations are currently being delivered into DOD full-scale development programs. Leadless Hermetic Chip Carrier packaging has facilitated implementation of both architectures on single 4 1/2 x 5 substrates. The CMOS and CMOS/SOS implementations of the GPU Chip Set have allowed both CPU implementations to use less than 3 watts of power each. Recent efforts by Tracor for USAF have included the definition of a next-generation GPU Chip Set that will retain the application-proven architecture of the current chip set while offering the added cost advantages of transportability across ISO-CMOS and CMOS/SOS processes and across numerous semiconductor manufacturers using a newly-defined set of common design rules. The Enhanced GPU Chip Set will increase speed by an approximate factor of 3 while significantly reducing chip counts and costs of standard CPU implementations.
Zhang, Yi-Qing; Cui, Jing; Zhang, Shu-Min; Zhang, Qi; Li, Xiang
2016-02-01
Modelling temporal networks of human face-to-face contacts is vital both for understanding the spread of airborne pathogens and the word-of-mouth spreading of information. Although many efforts have been devoted to modelling these temporal networks, two important social features, public activity and individual reachability, have been ignored in existing models. Here we present a simple model that captures these two features along with other typical properties of empirical face-to-face contact networks. The model describes agents which are characterized by an attractiveness that slows down the motion of nearby people, have an event-triggered active probability, and perform an activity-dependent biased random walk in a square box with periodic boundaries. The model quantitatively reproduces two empirical temporal networks of human face-to-face contacts, as verified by their network properties and the epidemic spreading dynamics on them.
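As a concrete illustration of this class of models, the toy simulation below has agents with heterogeneous attractiveness walking in a periodic box and pausing with probability equal to the largest attractiveness among their neighbours; the activation rule and the unbiased step are simplified placeholders, not the authors' exact dynamics.

```python
# Toy sketch of an attractiveness-based contact model (not the paper's rules).
import numpy as np

rng = np.random.default_rng(0)
N, L, R, STEPS = 100, 50.0, 1.0, 1000   # agents, box size, contact radius, steps
pos = rng.uniform(0, L, size=(N, 2))
attract = rng.uniform(0, 1, size=N)     # attractiveness a_i of each agent
active = rng.random(N) < 0.5            # simplified stand-in for event-triggered activity

contacts = []
for t in range(STEPS):
    # pairwise displacement with periodic boundary conditions
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    dist = np.hypot(d[..., 0], d[..., 1])
    near = (dist < R) & ~np.eye(N, dtype=bool)
    for i in np.where(active)[0]:
        nbrs = np.where(near[i])[0]
        if nbrs.size:
            contacts += [(t, i, j) for j in nbrs if i < j]
            if rng.random() < attract[nbrs].max():
                continue                 # slowed down by an attractive neighbour
        pos[i] = (pos[i] + rng.normal(0, 1, 2)) % L
    active ^= rng.random(N) < 0.1        # toggle activity at a constant rate

print(len(contacts), "contact events recorded")
```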
Current-voltage curves for molecular junctions computed using all-electron basis sets
International Nuclear Information System (INIS)
Bauschlicher, Charles W.; Lawson, John W.
2006-01-01
We present current-voltage (I-V) curves computed using all-electron basis sets on the conducting molecule. The all-electron results are very similar to previous results obtained using effective core potentials (ECP). A hybrid integration scheme is used that keeps the all-electron calculations cost competitive with respect to the ECP calculations. By neglecting the coupling of states to the contacts below a fixed energy cutoff, the density matrix for the core electrons can be evaluated analytically. The full density matrix is formed by adding this core contribution to the valence part that is evaluated numerically. Expanding the definition of the core in the all-electron calculations significantly reduces the computational effort and, up to biases of about 2 V, the results are very similar to those obtained using more rigorous approaches. The convergence of the I-V curves and transmission coefficients with respect to basis set is discussed. The addition of diffuse functions is critical in approaching basis set completeness.
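In rough outline (our notation, an equilibrium-only simplification, and not the authors' exact formulas), such a hybrid scheme splits the density matrix into an analytic core part and a numerically integrated valence part:

```latex
\[
  \rho \;=\;
  \underbrace{\sum_{i \in \mathrm{core}} \lvert\psi_i\rangle\langle\psi_i\rvert}_{\text{analytic: states decoupled below } E_{\mathrm{cut}}}
  \;-\; \frac{1}{\pi} \int_{E_{\mathrm{cut}}}^{\infty} f(E)\, \operatorname{Im} G^{R}(E)\, \mathrm{d}E ,
\]
```

where \(G^{R}\) is the retarded Green's function of the contacted molecule and \(f\) the Fermi function; only the valence integral has to be evaluated numerically.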
Directory of Open Access Journals (Sweden)
Vincent BOUDET
2012-03-01
Full Text Available In this paper, we are interested in enhancing the lifetime of wireless sensor networks that collect data from all nodes to a “sink” node, for non-safety-critical applications. Connected Dominating Sets (CDSs) are used as a basis for routing messages to the sink. We present a simple distributed algorithm that computes several CDSs, aiming to distribute the consumption of energy over all the nodes of the network. The simulations show a significant improvement of the network lifetime.
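One simple way to see how several distinct CDSs can exist in the same network: in any connected graph, the internal (non-leaf) nodes of a spanning tree form a connected dominating set, so different random spanning trees yield different CDSs that can be used in rotation. The sketch below illustrates that idea centrally with networkx; it is not the paper's distributed algorithm.

```python
# Hedged sketch: several CDSs from random spanning trees, used in rotation.
import random
import networkx as nx

def random_spanning_tree_cds(G, seed=None):
    rng = random.Random(seed)
    # random-weight Kruskal yields a randomised spanning tree
    for u, v in G.edges():
        G[u][v]["w"] = rng.random()
    T = nx.minimum_spanning_tree(G, weight="w")
    return {n for n in T if T.degree(n) > 1}  # internal nodes form a CDS

G = nx.random_geometric_graph(60, 0.25, seed=1)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
cds_schedule = [random_spanning_tree_cds(G, seed=s) for s in range(5)]
for k, cds in enumerate(cds_schedule):
    print(f"round {k}: CDS size {len(cds)}")
```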
Rough set soft computing cancer classification and network: one stone, two birds.
Zhang, Yue
2010-07-15
Gene expression profiling provides tremendous information to help unravel the complexity of cancer. The selection of the most informative genes from huge noise for cancer classification has taken centre stage, along with predicting the function of the identified genes and constructing direct gene regulatory networks at different system levels with a tuneable parameter. A new study by Wang and Gotoh describes a novel Variable Precision Rough Sets-rooted robust soft computing method that successfully addresses these problems and yields some new insights. The significance of this progress and its perspectives will be discussed in this article.
Application of the level set method for multi-phase flow computation in fusion engineering
International Nuclear Information System (INIS)
Luo, X-Y.; Ni, M-J.; Ying, A.; Abdou, M.
2006-01-01
Numerical simulation of multi-phase flow is essential to evaluate the feasibility of a liquid protection scheme for the power plant chamber. The level set method is one of the best methods for computing and analyzing the motion of the interface in multi-phase flow. This paper presents a general formula for the second-order projection method combined with the level set method to simulate unsteady incompressible multi-phase flow with/without phase change, as encountered in fusion science and engineering. A third-order ENO scheme and a second-order semi-implicit Crank-Nicolson scheme are used to update the convective and diffusive terms. The numerical results show that this method can handle complex deformation of the interface; the effect of liquid-vapor phase change will be included in future work.
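To fix ideas, here is a minimal 1-D level-set advection step, assuming a constant velocity field; a first-order upwind difference stands in for the paper's third-order ENO / Crank-Nicolson discretisation.

```python
# Hedged sketch: the zero level set of phi marks the interface, advected by u.
import numpy as np

nx_, dt, u = 200, 0.002, 1.0
x = np.linspace(0.0, 1.0, nx_)
dx = x[1] - x[0]
phi = x - 0.3          # signed distance; interface initially at x = 0.3

for _ in range(100):   # total time T = 0.2, CFL = u*dt/dx < 1
    dphi = np.empty_like(phi)
    if u > 0:          # upwind difference in the flow direction
        dphi[1:] = (phi[1:] - phi[:-1]) / dx
        dphi[0] = dphi[1]
    else:
        dphi[:-1] = (phi[1:] - phi[:-1]) / dx
        dphi[-1] = dphi[-2]
    phi -= dt * u * dphi

interface = x[np.argmin(np.abs(phi))]
print(f"interface moved to x = {interface:.3f}")  # expect about 0.3 + u*T = 0.5
```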
The CAIN computer code for the generation of MABEL input data sets: a user's manual
International Nuclear Information System (INIS)
Tilley, D.R.
1983-03-01
CAIN is an interactive FORTRAN computer code designed to overcome the substantial effort involved in manually creating the thermal-hydraulics input data required by MABEL-2. CAIN achieves this by processing output from either of the whole-core codes, RELAP or TRAC, interpolating where necessary, and by scanning RELAP/TRAC output in order to generate additional information. This user's manual describes the actions required in order to create RELAP/TRAC data sets from magnetic tape, to create the other input data sets required by CAIN, and to operate the interactive command procedure for the execution of CAIN. In addition, the CAIN code is described in detail. This programme of work is part of the Nuclear Installations Inspectorate (NII)'s contribution to the United Kingdom Atomic Energy Authority's independent safety assessment of pressurized water reactors. (author)
The use of computer simulations in whole-class versus small-group settings
Smetana, Lara Kathleen
This study explored the use of computer simulations in a whole-class as compared to small-group setting. Specific consideration was given to the nature and impact of classroom conversations and interactions when computer simulations were incorporated into a high school chemistry course. This investigation fills a need for qualitative research that focuses on the social dimensions of actual classrooms. Participants included a novice chemistry teacher experienced in the use of educational technologies and two honors chemistry classes. The study was conducted in a rural school in the south-Atlantic United States at the end of the fall 2007 semester. The study took place during one instructional unit on atomic structure. Data collection allowed for triangulation of evidence from a variety of sources: approximately 24 hours of video- and audio-taped classroom observations, supplemented with the researcher's field notes and analytic journal; miscellaneous classroom artifacts such as class notes, worksheets, and assignments; open-ended pre- and post-assessments; student exit interviews; and teacher entrance, exit and informal interviews. Four web-based simulations were used, three of which were from the ExploreLearning collection. Assessments were analyzed using descriptive statistics, and classroom observations, artifacts and interviews were analyzed using Erickson's (1986) guidelines for analytic induction. Conversational analysis was guided by methods outlined by Erickson (1982). Findings indicated (a) the teacher effectively incorporated simulations in both settings, (b) students in both groups significantly improved their understanding of the chemistry concepts, (c) there was no statistically significant difference between the groups' achievement, (d) there was more frequent exploratory talk in the whole-class group, (e) there were more frequent and meaningful teacher-student interactions in the whole-class group, and (f) additional learning experiences not measured on the assessment
International Nuclear Information System (INIS)
Mizuno, Takashi; Takahashi, Masaaki; Yoshioka, Katsunori
2008-01-01
It has been noted that manual setting of regions of interest (ROI) on single-photon emission computed tomography (SPECT) slices lacked objectivity when quantitative values of regional cerebral blood flow (rCBF) were measured. We therefore jointly developed the software 'Brain ROI' with Daiichi Radioisotope Laboratories, Ltd. (present name: FUJIFILM RI Pharma Co., Ltd.). The software normalizes an individual brain to a standard brain template by using Statistical Parametric Mapping 2 (SPM 2) of the easy Z-score Imaging System ver. 3.0 (eZIS Ver. 3.0), and an ROI template is set on a specific slice. In this study, we evaluated the accuracy of this software, using an ROI template of useful size and shape that we constructed, on some clinical samples. The aim was an objective, automatic setting of the ROI, with the shape of the ROI template used without the influence of brain atrophy. The operator should nevertheless inspect the normalization of the individual brain and confirm its accuracy; when normalization fails, the ROI must be partially corrected, or everything set by manual operation. Since examples of failure were few, the software was considered useful provided this tendency is understood. (author)
A Computable Plug-In Estimator of Minimum Volume Sets for Novelty Detection
Park, Chiwoo
2010-10-01
A minimum volume set of a probability density is a region of minimum size among the regions covering a given probability mass of the density. Effective methods for finding the minimum volume sets are very useful for detecting failures or anomalies in commercial and security applications, a problem known as novelty detection. One theoretical approach to estimating the minimum volume set is to use a density level set where a kernel density estimator is plugged into the optimization problem that yields the appropriate level. Such a plug-in estimator is not of practical use because solving the corresponding minimization problem is usually intractable. A modified plug-in estimator was proposed by Hyndman in 1996 to overcome the computational difficulty of the theoretical approach but is not well studied in the literature. In this paper, we provide theoretical support to this estimator by showing its asymptotic consistency. We also show that this estimator is very competitive to other existing novelty detection methods through an extensive empirical study. ©2010 INFORMS.
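A minimal sketch of the modified plug-in idea discussed above, assuming scikit-learn is available: fit a kernel density estimate, take the level as the empirical alpha-quantile of the estimated density over the sample (so the super-level region covers roughly 1 − alpha of the mass), and flag test points below the level as novelties. The bandwidth and alpha here are arbitrary choices, not the paper's.

```python
# Hedged sketch of a quantile-based plug-in minimum-volume-set estimator.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(500, 2))            # nominal training data

kde = KernelDensity(bandwidth=0.4).fit(X)
log_dens = kde.score_samples(X)
level = np.quantile(log_dens, 0.05)            # alpha = 0.05 mass left outside

X_test = np.vstack([rng.normal(0, 1, (5, 2)),  # nominal points
                    np.full((3, 2), 4.0)])     # obvious outliers
novel = kde.score_samples(X_test) < level
print(novel)   # expect mostly False for the nominal rows, True for the outliers
```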
Directory of Open Access Journals (Sweden)
Huygens Flavia
2007-08-01
Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) and genes that exhibit presence/absence variation have provided informative marker sets for bacterial and viral genotyping. Identification of marker sets optimised for these purposes has been based on maximal generalized discriminatory power as measured by Simpson's Index of Diversity, or on the ability to identify specific variants. Here we describe the Not-N algorithm, which is designed to identify small sets of genetic markers diagnostic for user-specified subsets of known genetic variants. The algorithm does not treat the user-specified subset and the remaining genetic variants equally. Rather, Not-N analysis is designed to underpin assays that provide 0% false negatives, which is very important for, e.g., diagnostic procedures for clinically significant subgroups within microbial species. Results The Not-N algorithm has been incorporated into the "Minimum SNPs" computer program and used to derive genetic markers diagnostic for multilocus sequence typing-defined clonal complexes, hepatitis C virus (HCV) subtypes, and phylogenetic clades defined by comparative genome hybridization (CGH) data for Campylobacter jejuni, Yersinia enterocolitica and Clostridium difficile. Conclusion Not-N analysis is effective for identifying small sets of genetic markers diagnostic for microbial sub-groups. The best results to date have been obtained with CGH data from several bacterial species, and HCV sequence data.
Efficient frequent pattern mining algorithm based on node sets in cloud computing environment
Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.
2017-11-01
The ultimate goal of data mining is to discover hidden information that is useful for decision making, using the large databases collected by an organization. Mining frequent itemsets is one of the most important tasks for transactional databases. These databases hold data at very large scale, and mining them consumes physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is efficient only if it consumes little memory and time to mine the frequent itemsets from a given large database. With these points in mind, we propose a system that mines frequent itemsets in a way optimized for memory and time, using cloud computing to parallelize the process and providing the application as a service. The framework uses a proven, efficient algorithm, FIN, which works on node sets and a pre-order coding (POC) tree. To evaluate the performance of the system, we conducted experiments comparing the efficiency of the same algorithm applied standalone and in a cloud computing environment, on a real traffic-accident data set. The results show that memory consumption and execution time in the proposed system are much lower than those of the standalone system.
An efficient ERP-based brain-computer interface using random set presentation and face familiarity.
Directory of Open Access Journals (Sweden)
Seul-Ki Yeom
Full Text Available Event-related potential (ERP)-based P300 spellers are commonly used in the field of brain-computer interfaces as an alternative channel of communication for people with severe neuro-muscular diseases. This study introduces a novel P300-based brain-computer interface (BCI) stimulus paradigm using a random set presentation pattern and exploiting the effects of face familiarity. The effect of face familiarity is widely studied in the cognitive neurosciences and has recently been addressed for the purpose of BCI. In this study we compare P300-based BCI performance of a conventional row-column (RC)-based paradigm with our approach that combines a random set presentation paradigm with (non-)self-face stimuli. Our experimental results indicate stronger deflections of the ERPs in response to face stimuli, which are further enhanced when using the self-face images, thereby improving P300-based spelling performance. This led to a significant reduction in the number of stimulus sequences required for correct character classification. These findings demonstrate a promising new approach for improving the speed and thus fluency of BCI-enhanced communication with the widely used P300-based BCI setup.
Automatic Generation of Minimal Cut Sets
Directory of Open Access Journals (Sweden)
Sentot Kromodimoeljo
2015-06-01
Full Text Available A cut set is a collection of component failure modes that could lead to a system failure. Cut Set Analysis (CSA) is applied to critical systems to identify and rank system vulnerabilities at design time. Model checking tools have been used to automate the generation of minimal cut sets but are generally based on checking reachability of system failure states. This paper describes a new approach to CSA using a Linear Temporal Logic (LTL) model checker called BT Analyser that supports the generation of multiple counterexamples. The approach enables a broader class of system failures to be analysed, by generalising from failure state formulae to failure behaviours expressed in LTL. The traditional approach to CSA using model checking requires the model or system failure to be modified, usually by hand, to eliminate already-discovered cut sets, and the model checker to be rerun, at each step. By contrast, the new approach works incrementally and fully automatically, thereby removing the tedious and error-prone manual process and resulting in significantly reduced computation time. This in turn enables larger models to be checked. Two different strategies for using BT Analyser for CSA are presented. There is generally no single best strategy for model checking: the relative efficiency of strategies depends on the model and property being analysed. Comparative results are given for the A320 hydraulics case study in the Behavior Tree modelling language.
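The incremental "block what was already discovered" idea can be illustrated without any model checker: enumerate candidate component sets in order of size over a monotone failure predicate, skipping supersets of known cut sets. The brute-force sketch below only illustrates that loop, not BT Analyser or LTL model checking; the toy system is invented.

```python
# Hedged sketch: incremental minimal-cut-set enumeration for a monotone predicate.
from itertools import combinations

COMPONENTS = ["pumpA", "pumpB", "valveA", "valveB"]

def system_fails(down):
    # toy system: two redundant trains, each needing its own pump and valve
    train1 = "pumpA" in down or "valveA" in down
    train2 = "pumpB" in down or "valveB" in down
    return train1 and train2

minimal_cut_sets = []
for size in range(1, len(COMPONENTS) + 1):
    for cand in combinations(COMPONENTS, size):
        s = set(cand)
        if any(mcs <= s for mcs in minimal_cut_sets):
            continue                    # an already-found cut set covers it
        if system_fails(s):
            minimal_cut_sets.append(s)  # minimal, since smaller sets were tried first

print(minimal_cut_sets)
# [{'pumpA','pumpB'}, {'pumpA','valveB'}, {'valveA','pumpB'}, {'valveA','valveB'}]
```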
Expressing clinical data sets with openEHR archetypes: a solid basis for ubiquitous computing.
Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra
2007-12-01
The purpose of this paper is to analyse the feasibility and usefulness of expressing clinical data sets (CDSs) as openEHR archetypes. For this, we present an approach to transform CDS into archetypes, and outline typical problems with CDS and analyse whether some of these problems can be overcome by the use of archetypes. Literature review and analysis of a selection of existing Australian, German, other European and international CDSs; transfer of a CDS for Paediatric Oncology into openEHR archetypes; implementation of CDSs in application systems. To explore the feasibility of expressing CDS as archetypes, an approach to transform existing CDSs into archetypes is presented in this paper. In case of the Paediatric Oncology CDS (which consists of 260 data items) this led to the definition of 48 openEHR archetypes. To analyse the usefulness of expressing CDS as archetypes, we identified nine problems with CDS that currently remain unsolved without a common model underpinning the CDS. Typical problems include incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution to most of these problems based on openEHR archetypes is motivated. With regard to integrity constraints, further research is required. While openEHR cannot overcome all barriers to Ubiquitous Computing, it can provide the common basis for ubiquitous presence of meaningful and computer-processable knowledge and information, which we believe is a basic requirement for Ubiquitous Computing. Expressing CDSs as openEHR archetypes is feasible and advantageous as it fosters semantic interoperability, supports ubiquitous computing, and helps to develop archetypes that are arguably of better quality than the original CDS.
Fast Computation of Categorical Richness on Raster Data Sets and Related Problems
DEFF Research Database (Denmark)
de Berg, Mark; Tsirogiannis, Constantinos; Wilkinson, Bryan
2015-01-01
In many scientific fields, it is common to encounter raster data sets consisting of categorical data, such as soil type or land usage of a terrain. A problem that arises in the presence of such data is the following: given a raster G of n cells storing categorical data, compute for every cell c the number of distinct categories appearing within a window around c. We present an algorithm for square windows that runs in O(n) time and one for circular windows of radius r that runs in O((1+K/r)n) time, where K is the number of different categories appearing in G. The algorithms are not only very efficient in theory, but also in practice: our experiments show that our algorithms can handle raster data sets of hundreds of millions of cells. The categorical richness problem is related to colored range counting, where the goal is to preprocess a colored point set such that we can efficiently count the number of colors appearing inside a query range. We also present a data structure for colored range counting in R^2 for the case…
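To make the problem statement concrete, the sketch below computes categorical richness for square windows directly; this naive version costs O(n · window area) rather than the paper's O(n), so it only illustrates the quantity being computed.

```python
# Hedged sketch: per-cell count of distinct categories in a (2w+1)x(2w+1) window.
import numpy as np

def richness(grid, w):
    n_rows, n_cols = grid.shape
    out = np.zeros_like(grid)
    for i in range(n_rows):
        for j in range(n_cols):
            window = grid[max(0, i - w):i + w + 1, max(0, j - w):j + w + 1]
            out[i, j] = len(np.unique(window))
    return out

rng = np.random.default_rng(0)
G = rng.integers(0, 5, size=(8, 8))   # 8x8 raster with categories 0..4
print(richness(G, w=1))               # distinct categories per 3x3 window
```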
Nguyen, Hung T.; Kreinovich, Vladik
2014-01-01
To help computers make better decisions, it is desirable to describe all our knowledge in computer-understandable terms. This is easy for knowledge described in terms of numerical values: we simply store the corresponding numbers in the computer. This is also easy for knowledge about precise (well-defined) properties which are either true or false for each object: we simply store the corresponding “true” and “false” values in the computer. The challenge is how to store information about imprecise properties. In this paper, we overview different ways to fully store the expert information about imprecise properties. We show that in the simplest case, when the only source of imprecision is disagreement between different experts, a natural way to store all the expert information is to use random sets; we also show how fuzzy sets naturally appear in such random-set representation. We then show how the random-set representation can be extended to the general (“fuzzy”) case when, in addition to disagreements, experts are also unsure whether some objects satisfy certain properties or not. PMID:25386045
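A tiny sketch of the random-set reading of fuzzy membership described above: each expert supplies a crisp set of acceptable values, and the membership degree of a value is the fraction of experts whose set contains it (the example sets are invented).

```python
# Hedged sketch: fuzzy membership as the coverage fraction of expert sets.
experts = [
    {18, 19, 20, 21, 22},      # expert 1: temperatures regarded as "warm"
    {20, 21, 22, 23},          # expert 2
    {19, 20, 21},              # expert 3
]

def membership(x, expert_sets):
    return sum(x in s for s in expert_sets) / len(expert_sets)

for temp in (18, 20, 23):
    print(temp, membership(temp, experts))   # e.g. 20 -> 1.0, 23 -> 0.33...
```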
EVOLVE : a Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation II
Coello, Carlos; Tantar, Alexandru-Adrian; Tantar, Emilia; Bouvry, Pascal; Moral, Pierre; Legrand, Pierrick; EVOLVE 2012
2013-01-01
This book comprises a selection of papers from EVOLVE 2012, held in Mexico City, Mexico. The aim of EVOLVE is to build a bridge between probability, set-oriented numerics and evolutionary computing, so as to identify new common and challenging research aspects. The conference is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. EVOLVE is intended to unify theory-inspired methods and cutting-edge techniques ensuring performance guarantee factors. By gathering researchers with different backgrounds, a unified view and vocabulary can emerge in which the theoretical advancements may echo in different domains. Summarizing, EVOLVE focuses on challenging aspects arising at the passage from theory to new paradigms, and aims to provide a unified view while raising questions related to reliability, performance guarantees and modeling. The papers of EVOLVE 2012 make a contribution to this goal.
Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih
2013-03-20
Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power either causes nodes to interfere with, or fail to link, neighboring nodes. The optimization of transmission power depends on the expected node degree and the node distribution. In this study, an optimization approach to an energy-efficient and fully reachable wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is developed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the trade-off between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real deployments.
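The baseline trade-off behind such an adjustment model can be sketched with a standard Poisson-process assumption: at node density rho, the expected node degree at transmission range r is rho·pi·r^2, so the smallest range achieving a minimum expected degree k is r = sqrt(k/(pi·rho)). The paper's model is more elaborate; the snippet below only illustrates this baseline relation.

```python
# Hedged sketch: minimum transmission range for an expected node degree,
# assuming uniformly (Poisson) scattered nodes of spatial density rho.
import math

def min_range_for_degree(k_min, density):
    return math.sqrt(k_min / (math.pi * density))

rho = 100 / (50.0 * 50.0)          # e.g. 100 nodes in a 50 m x 50 m field
for k in (3, 5, 8):
    r = min_range_for_degree(k, rho)
    print(f"expected degree {k}: range = {r:.2f} m")
```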
Reachability Does Not Explain the Middle Preference: A Comment on Bar-Hillel (2015)
Directory of Open Access Journals (Sweden)
Paul Rodway
2016-03-01
Full Text Available Choosing an object from an array of similar objects is a task that people complete frequently throughout their lives (e.g., choosing a can of soup from many cans of soup). Research has demonstrated that items in the middle of an array or scene are looked at more often and are more likely to be chosen. This middle preference is surprisingly robust and widespread, having been found in a wide range of perceptual-motor tasks. In a recent review of the literature, Bar-Hillel (2015) proposes, among other things, that the middle preference is largely explained by the middle item being easier to reach, either physically or mentally. We specifically evaluate Bar-Hillel's reachability explanation for choice in non-interactive situations in light of evidence showing an effect of item valence on such choices. This leads us to conclude that the center-stage heuristic account is a more plausible explanation of the middle preference.
Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer
2014-11-15
Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored, even though it may play an important role in obtaining optimal power. We compared a standard statistical test, a score test, with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene-gene interactions are sought, state-of-the-art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test (up to 23 more associations), whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene-gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13 500. Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. Contact: heckerma@microsoft.com. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
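For orientation, a common form of the set-based variance-component model that both tests address, in standard mixed-model notation (the paper's exact parametrisation may differ):

```latex
\[
  y = X\beta + g + \varepsilon, \qquad
  g \sim \mathcal{N}(0, \sigma_g^2 K), \qquad
  \varepsilon \sim \mathcal{N}(0, \sigma_e^2 I),
\]
\[
  H_0 :\; \sigma_g^2 = 0, \qquad
  \mathrm{LR} = 2\,(\hat{\ell}_1 - \hat{\ell}_0),
\]
```

where \(K\) is the similarity (kernel) matrix built from the variant set, and \(\hat{\ell}_1, \hat{\ell}_0\) are the maximised log-likelihoods under the alternative and the null.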
A computational model for three-dimensional jointed media with a single joint set
International Nuclear Information System (INIS)
Koteras, J.R.
1994-02-01
This report describes a three-dimensional model for jointed rock or other media with a single set of joints. The joint set consists of evenly spaced joint planes. The normal joint response is nonlinear elastic and is based on a rational polynomial. Joint shear stress is treated as linear elastic in shear stress versus slip displacement, until a critical stress level governed by a Mohr-Coulomb friction criterion is attained. The three-dimensional model represents an extension of a two-dimensional, multi-joint model that has been in use for several years. Although most of the concepts in the two-dimensional model translate in a straightforward manner to three dimensions, the concept of slip on the joint planes becomes more complex in three dimensions. While slip in two dimensions can be treated as a scalar quantity, in three dimensions it must be treated as a vector in the joint plane. For the three-dimensional model proposed here, the slip direction is assumed to be the direction of maximum principal strain in the joint plane. Five test problems are presented to verify the correctness of the computational implementation of the model.
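For reference, the Mohr-Coulomb limit referred to above in standard notation (the report's own symbols are not reproduced here):

```latex
\[
  \lvert\tau\rvert \;\le\; \tau_{\max} \;=\; c + \sigma_n \tan\varphi ,
\]
```

where \(c\) is the joint cohesion, \(\sigma_n\) the normal stress on the joint plane and \(\varphi\) the friction angle; slip initiates once the shear stress reaches \(\tau_{\max}\), and in the three-dimensional model its direction is taken as the direction of maximum principal strain in the joint plane.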
A Proposal to Speed up the Computation of the Centroid of an Interval Type-2 Fuzzy Set
Directory of Open Access Journals (Sweden)
Carlos E. Celemin
2013-01-01
Full Text Available This paper presents two new algorithms that speed up the centroid computation of an interval type-2 fuzzy set. The algorithms include precomputation of the main operations and initialization based on the concept of uncertainty bounds. Simulations over different kinds of footprints of uncertainty reveal that the new algorithms achieve computation-time reductions of 40 to 70% with respect to the Enhanced Karnik-Mendel algorithm. The results suggest that the initialization used in the new algorithms effectively reduces the number of iterations needed to compute the extreme points of the interval centroid, while the precomputation reduces the computational cost of each iteration.
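For context, here is a compact sketch of the classic Karnik-Mendel switch-point iteration that such enhanced algorithms accelerate; the discretisation and the example footprint of uncertainty are invented.

```python
# Hedged sketch: Karnik-Mendel iteration for the centroid endpoints of an
# interval type-2 fuzzy set, given lower/upper membership bounds on a grid.
import numpy as np

def km_endpoint(x, lo, hi, right=False):
    """x: sorted support points; lo/hi: lower/upper membership bounds."""
    w = (lo + hi) / 2.0
    c = np.dot(x, w) / np.sum(w)
    while True:
        k = np.searchsorted(x, c)                  # switch point: x[:k] <= c
        if right:
            w = np.concatenate([lo[:k], hi[k:]])   # weights that maximise c
        else:
            w = np.concatenate([hi[:k], lo[k:]])   # weights that minimise c
        c_new = np.dot(x, w) / np.sum(w)
        if np.isclose(c_new, c):
            return c_new
        c = c_new

x = np.linspace(0.0, 10.0, 101)
hi = np.exp(-0.5 * ((x - 5.0) / 2.0) ** 2)   # upper membership function
lo = 0.6 * hi                                # lower membership function
print(km_endpoint(x, lo, hi), km_endpoint(x, lo, hi, right=True))
```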
Toward accurate tooth segmentation from computed tomography images using a hybrid level set model
International Nuclear Information System (INIS)
Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei
2015-01-01
Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm
Vangsnes, Vigdis; Gram Okland, Nils Tore; Krumsvik, Rune
2012-01-01
This article focuses on the didactical implications when commercial educational computer games are used in Norwegian kindergartens by analysing the dramaturgy and the didactics of one particular game and the game in use in a pedagogical context. Our justification for analysing the game by using dramaturgic theory is that we consider the game to be…
International Nuclear Information System (INIS)
Sidell, J.
1976-08-01
EXTRA is a program written for the Winfrith KDF9 enabling the user to solve sets of first-order initial-value differential equations. In this report general numerical integration methods are discussed, with emphasis on their application to the solution of stiff sets of equations. A method of particular applicability to stiff sets of equations is described. This method is incorporated in the program EXTRA, and full instructions for its use are given. A comparison with other methods of computation is included. (author)
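In the same spirit as the methods EXTRA targets, the sketch below integrates a classic stiff system (Robertson's kinetics) with an implicit BDF method via SciPy; this is a modern stand-in, not the report's scheme.

```python
# Hedged sketch: a stiff set of first-order equations solved with an implicit method.
from scipy.integrate import solve_ivp

def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1e4 * y2 * y3,
            0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
            3e7 * y2 ** 2]

y0 = [1.0, 0.0, 0.0]
stiff = solve_ivp(robertson, (0, 1e5), y0, method="BDF", rtol=1e-6, atol=1e-10)
print("BDF steps:", stiff.t.size)   # implicit method: a modest number of steps
# An explicit solver (e.g. method="RK45") needs vastly more steps here, which
# is exactly the behaviour that motivates special methods for stiff sets.
```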
Directory of Open Access Journals (Sweden)
Eduardo Mireles-Cabodevila
2012-01-01
Full Text Available Background. There are modes of mechanical ventilation that can select ventilator settings with computer-controlled algorithms (targeting schemes). Two examples are adaptive support ventilation (ASV) and mid-frequency ventilation (MFV). We studied how different clinician-chosen ventilator settings are from these computer algorithms under different scenarios. Methods. A survey of critical care clinicians provided reference ventilator settings for a 70 kg paralyzed patient in five clinical/physiological scenarios. The survey-derived values for minute ventilation and minute alveolar ventilation were used as goals for ASV and MFV, respectively. A lung simulator programmed with each scenario’s respiratory system characteristics was ventilated using the clinician, ASV, and MFV settings. Results. Tidal volumes ranged from 6.1 to 8.3 mL/kg for the clinician, 6.7 to 11.9 mL/kg for ASV, and 3.5 to 9.9 mL/kg for MFV. Inspiratory pressures were lower for ASV and MFV. Clinician-selected tidal volumes were similar to the ASV settings for all scenarios except for asthma, in which the tidal volumes were larger for ASV and MFV. MFV delivered the same alveolar minute ventilation with higher end-expiratory and lower end-inspiratory volumes. Conclusions. There are differences and similarities among initial ventilator settings selected by humans and computers for various clinical scenarios. The ventilation outcomes are the result of the lung physiological characteristics and their interaction with the targeting scheme.
Gong, Jing; Liu, Ji-Yu; Sun, Xi-Wen; Zheng, Bin; Nie, Sheng-Dong
2018-02-01
This study aims to develop a computer-aided diagnosis (CADx) scheme for classification between malignant and benign lung nodules, and also to assess whether CADx performance changes in detecting nodules associated with early and advanced stage lung cancer. The study involves 243 biopsy-confirmed pulmonary nodules. Among them, 76 are benign, 81 are stage I and 86 are stage III malignant nodules. The cases are separated into three data sets involving: (1) all nodules, (2) benign and stage I malignant nodules, and (3) benign and stage III malignant nodules. A CADx scheme is applied to segment lung nodules depicted on computed tomography images, and 66 3D image features are initially computed. Then, three machine learning models, namely a support vector machine, a naïve Bayes classifier and linear discriminant analysis, are separately trained and tested on the three data sets using a leave-one-case-out cross-validation method embedded with a Relief-F feature selection algorithm. When separately using the three data sets to train and test the three classifiers, the average areas under the receiver operating characteristic curve (AUC) are 0.94, 0.90 and 0.99, respectively. When using the classifiers trained on the data set with all nodules, average AUC values are 0.88 and 0.99 for detecting early and advanced stage nodules, respectively. AUC values computed from the three classifiers trained on the same data set are consistent, without statistically significant difference (p > 0.05). This study demonstrates (1) the feasibility of applying a CADx scheme to accurately distinguish between benign and malignant lung nodules, and (2) a positive trend between CADx performance and cancer progression stage. Thus, in order to increase CADx performance in detecting subtle and early cancer, training data sets should include more diverse early stage cancer cases.
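A schematic of the training/testing protocol described above, on synthetic data: leave-one-out cross-validation around a pipeline of feature selection and an SVM, with selection refit inside each fold to avoid leakage. Mutual information stands in for the Relief-F selector, which is not part of scikit-learn, and all data here are invented.

```python
# Hedged sketch: leave-one-case-out CV around feature selection + SVM.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=80, n_features=66, n_informative=8,
                           random_state=0)    # stand-in for 66 nodule features

model = make_pipeline(StandardScaler(),
                      SelectKBest(mutual_info_classif, k=10),
                      SVC(kernel="rbf", C=1.0))
acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-case-out accuracy: {acc:.2f}")
```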
Computer simulation of the behaviour of Julia sets using switching processes
Energy Technology Data Exchange (ETDEWEB)
Negi, Ashish [Department of Computer Science and Engineering, G.B. Pant Engineering College, Pauri Garhwal 246001 (India)], E-mail: ashish_ne@yahoo.com; Rani, Mamta [Department of Computer Science, Galgotia College of Engineering and Technology, UP Technical University, Knowledge Park-II, Greater Noida, Gautam Buddha Nagar, UP (India)], E-mail: vedicmri@sancharnet.in; Mahanti, P.K. [Department of CSAS, University of New Brunswick, Saint Johhn, New Brunswick, E2L4L5 (Canada)], E-mail: pmahanti@unbsj.ca
2008-08-15
Inspired by the study of Julia sets using switched processes by Lakhtakia and generation of new fractals by composite functions by Shirriff, we study the effect of switched processes on superior Julia sets given by Rani and Kumar. Further, symmetry for such processes is also discussed in the paper.
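A small sketch of a switched Julia-set iteration, alternating two quadratic maps and recording which seeds stay bounded; the "superior" (averaged) iteration studied in the paper is not reproduced, and the two parameters are arbitrary.

```python
# Hedged sketch: a switching process alternating z -> z^2 + c1 and z -> z^2 + c2.
import numpy as np

c1, c2 = complex(-0.74, 0.11), complex(0.28, 0.008)
n, max_iter, escape = 400, 60, 2.0

ys, xs = np.mgrid[-1.5:1.5:n*1j, -1.5:1.5:n*1j]
z = xs + 1j * ys
alive = np.ones(z.shape, dtype=bool)   # seeds not yet escaped

for it in range(max_iter):
    c = c1 if it % 2 == 0 else c2      # the switching process
    z[alive] = z[alive] ** 2 + c
    alive &= np.abs(z) <= escape

print(f"{alive.mean():.1%} of seeds still bounded after {max_iter} steps")
```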
Fast parallel DNA-based algorithms for molecular computation: the set-partition problem.
Chang, Weng-Long
2007-12-01
This paper demonstrates that basic biological operations can be used to solve the set-partition problem. In order to achieve this, we propose three DNA-based algorithms, a signed parallel adder, a signed parallel subtractor and a signed parallel comparator, that formally verify our designed molecular solutions for solving the set-partition problem.
Computer programmes development for environment variables setting for use with C compiler
International Nuclear Information System (INIS)
Sriyotha, P.; Prasertchiewcharn, N.; Yamkate, P.
1994-01-01
Compilers generally need special environment variables that the operating system does not provide by default. Environment variables such as COMSPEC, PATH, TMP, LIB and INCLUDE can also be used to exchange data among programmes. These variables occupy memory, and in some cases an 'Out of Environment Space' error message occurs when the user sets a new environment variable. Rather than giving up when one variable has grown too large, or destroying all environment variables, the programmes developed here save the old environment setting, clear it, and set a new one; later, the new setting is cleared and the old one is restored from the saved setting.
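A minimal modern analogue of the save/clear/set/restore cycle described above, written in Python rather than the original DOS-era programmes; the directory paths are invented.

```python
# Hedged sketch: save, clear, set, and restore compiler environment variables.
import os

saved = dict(os.environ)                 # save the old environment setting

os.environ.clear()                       # clear it ...
os.environ.update({                      # ... and set a new one for the build
    "PATH": r"C:\CC\BIN",                # hypothetical compiler paths
    "INCLUDE": r"C:\CC\INCLUDE",
    "LIB": r"C:\CC\LIB",
    "TMP": r"C:\TMP",
})
print("compiler sees INCLUDE =", os.environ["INCLUDE"])

os.environ.clear()                       # clear the new setting ...
os.environ.update(saved)                 # ... and restore the saved one
```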
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-08-01
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs, by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. Contact: hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.
Matched molecular pair-based data sets for computer-aided medicinal chemistry
Bajorath, Jürgen
2014-01-01
Matched molecular pairs (MMPs) are widely used in medicinal chemistry to study changes in compound properties including biological activity, which are associated with well-defined structural modifications. Herein we describe up-to-date versions of three MMP-based data sets that have originated from in-house research projects. These data sets include activity cliffs, structure-activity relationship (SAR) transfer series, and second generation MMPs based upon retrosynthetic rules. The data sets have in common that they have been derived from compounds included in the ChEMBL database (release 17) for which high-confidence activity data are available. Thus, the activity data associated with MMP-based activity cliffs, SAR transfer series, and retrosynthetic MMPs cover the entire spectrum of current pharmaceutical targets. Our data sets are made freely available to the scientific community. PMID:24627802
International Nuclear Information System (INIS)
Hopkinson, A.
1969-05-01
The techniques normally used for linearisation of equations are not amenable to general treatment by digital computation. This report describes a computer program for linearising sets of equations by numerical evaluation of partial derivatives. The program is written so that the specification of the non-linear equations is the same as for the digital simulation program FIFI, and the linearised equations can be punched out in a form suitable for input to the frequency response program FRP2 and the poles-and-zeros program ZIP. Full instructions for the use of the program are given and a sample problem input and output are shown. (author)
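The core idea, linearisation by numerical evaluation of partial derivatives, fits in a few lines: build the Jacobian of the nonlinear right-hand side at an operating point by central differences, giving the A-matrix of the linearised system. The toy equations below are invented; this is not the report's code.

```python
# Hedged sketch: linearise dx/dt = f(x) about x_op via a finite-difference Jacobian.
import numpy as np

def jacobian(f, x0, h=1e-6):
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    A = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        A[:, j] = (np.asarray(f(x0 + e)) - np.asarray(f(x0 - e))) / (2 * h)
    return A

# toy nonlinear set of first-order equations
f = lambda x: np.array([-x[0] + x[0] * x[1], 2.0 - x[1] ** 2])
x_op = np.array([1.0, np.sqrt(2.0)])     # chosen operating point
print(jacobian(f, x_op))                 # A-matrix for frequency-response work
```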
Rediscovering the Economics of Keynes in an Agent-Based Computational Setting
DEFF Research Database (Denmark)
Bruun, Charlotte
2016-01-01
The aim of this paper is to use agent-based computational economics to explore the economic thinking of Keynes. Taking his starting point at the macroeconomic level, Keynes argued that economic systems are characterized by fundamental uncertainty — an uncertainty that makes rule-based behavior...
Bekooij, Marco; Bekooij, Marco Jan Gerrit; Wiggers, M.H.; van Meerbergen, Jef
2007-01-01
Soft real-time applications that process data streams can often be intuitively described as dataflow process networks. In this paper we present a novel analysis technique to compute conservative estimates of the required buffer capacities in such process networks. With the same analysis technique
Face-to-face versus computer-mediated communication in a primary school setting
Meijden, H.A.T. van der; Veenman, S.A.M.
2005-01-01
Computer-mediated communication is increasingly being used to support cooperative problem solving and decision making in schools. Despite the large body of literature on cooperative or collaborative learning, few studies have explicitly compared peer learning in face-to-face (FTF) versus
The effectiveness of remedial computer use for mathematics in a university setting (Botswana)
Plomp, T.; Pilon, J.; Pilon, Jacqueline; Janssen Reinen, I.A.M.
1991-01-01
This paper describes the evaluation of the effects of the Mathematics and Science Computer Assisted Remedial Teaching (MASCART) software on students from the Pre-Entry Science Course at the University of Botswana. A general significant improvement of basic algebra knowledge and skills could be
2013-01-08
..., 24 and 28-30 of U.S. Patent No. 7,155,598. The notice of institution named as respondent Apple Inc., a/k/a Apple Computer, Inc. of Cupertino, California (``Apple''). On November 19, 2012, VIA and Apple... on behalf of VIA Technologies, Inc. of New Taipei City, Taiwan; IP-First, LLC of Fremont, California...
Tharpe, Leonard.
1992-01-01
Approved for public release; distribution is unlimited. This thesis presents a simulation and analysis of the Reduced Instruction Set Computer (RISC) architecture and the effects of a lockup-free cache interface on RISC performance. RISC architectures achieve high performance by having a small but sufficient instruction set, with most instructions executing in one clock cycle. Current RISC performance ranges from 1.5 to 2.0 cycles per instruction (CPI). The goal of RISC is to attain a CPI of 1.0. The major hind...
International Nuclear Information System (INIS)
Gerdt, Vladimir P.; Severyanov, Vasily M.
2006-01-01
A C package is presented that allows a user, for an input quantum circuit, to generate a set of multivariate polynomials over the finite field Z_2 whose total number of solutions in Z_2 determines the output of the quantum computation defined by the circuit. The generated polynomial system can further be converted to the canonical Gröbner basis form, which provides a universal algorithmic tool for counting the number of common roots of the polynomials.
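The counting step can be illustrated in miniature: encode the polynomials over Z_2 as functions on bit tuples and count common roots by exhaustive enumeration. The two constraints below are invented stand-ins for what the package would derive from a circuit, and enumeration replaces the Gröbner-basis route.

```python
# Hedged sketch: count common roots of polynomials over Z_2 by enumeration.
from itertools import product

polys = [
    lambda x: (x[0] + x[1] + x[2]) % 2,        # parity constraint = 0
    lambda x: (x[0] * x[1] + x[2]) % 2,        # Toffoli-like relation = 0
]

def count_common_roots(polys, n_vars):
    return sum(all(p(x) == 0 for p in polys)
               for x in product((0, 1), repeat=n_vars))

print(count_common_roots(polys, 3))   # number of common zeros in Z_2^3
```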
Rediscovering the Economics of Keynes in an Agent-Based Computational Setting
DEFF Research Database (Denmark)
Bruun, Charlotte
The aim of this paper is to use agent-based computational economics to explore the economic thinking of Keynes. Taking his starting point at the macroeconomic level, Keynes argued that economic systems are characterized by fundamental uncertainty, an uncertainty that makes rule-based behaviour and reliance on monetary magnitudes more optimal to the economic agent than profit and utility optimization in the traditional sense. Unfortunately, more systematic studies of the properties of such a system were not possible at the time of Keynes. The system envisioned by Keynes holds many properties in common with what we today call complex dynamic systems, and we may now apply the method of agent-based computational economics to his ideas. The presented agent-based Keynesian model demonstrates, as argued by Keynes, that the economy can self-organize without relying on price movement
Crowe, K.; Cumming, T.; McCormack, J.; McLeod, S.; Baker, E.; Wren, Y.; Roulstone, S.; Masso, S.
2017-01-01
Early childhood educators are frequently called on to support preschool-aged children with speech sound disorders (SSD) and to engage these children in activities that target their speech production. This study explored factors that acted as facilitators and/or barriers to the provision of computer-based support for children with SSD in early childhood centres. Participants were 23 early childhood educators at 13 centres who participated in the Sound Start Study, a randomiz...
Computing half-plane and strip discrepancy of planar point sets
Berg, de M.
1996-01-01
We present efficient algorithms for two problems concerning the discrepancy of a set S of n points in the unit square in the plane. First, we describe an algorithm for maintaining the half-plane discrepancy of S under insertions and deletions of points. The algorithm runs in O(n log n) worst-case time
Ochterski, Joseph W.
2014-01-01
This article describes the results of using state-of-the-art, research-quality software as a learning tool in a general chemistry secondary school classroom setting. I present three activities designed to introduce fundamental chemical concepts regarding molecular shape and atomic orbitals to students with little background in chemistry, such as…
Expectations, Realizations, and Approval of Tablet Computers in an Educational Setting
Hassan, Mamdouh; Geys, Benny
2016-01-01
The introduction of new technologies in classrooms is often thought to offer great potential for advancing learning. In this article, we investigate the relationship between such expectations and the post-implementation evaluation of a new technology in an educational setting. Building on psychological research, we argue that (1) high expectations…
The processing cost of reference-set computation: guess patterns in acquisition
Reinhart, T.
1999-01-01
An idea which got much attention in linguistic theory in the nineties is that the well-formedness of sentences is not always determined by absolute conditions, but it may be determined by a selection of the optimal competitor out of a relevant reference-set. A restricted version of this was assumed
Energy Technology Data Exchange (ETDEWEB)
Duflot, Nicolas [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: nicolas.duflot@areva.com; Berenguer, Christophe [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: christophe.berenguer@utt.fr; Dieulle, Laurence [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: laurence.dieulle@utt.fr; Vasseur, Dominique [EPSNA Group (Nuclear PSA and Application), EDF Research and Development, 1, avenue du Gal de Gaulle, 92141 Clamart cedex (France)], E-mail: dominique.vasseur@edf.fr
2009-11-15
A truncation process aims to determine, among the set of minimal cut-sets (MCS) produced by a probabilistic safety assessment (PSA) model, which of them are significant. Several truncation processes have been proposed for the evaluation of the probability of core damage ensuring a fixed accuracy level. However, the evaluation of new risk indicators such as importance measures requires re-examining the truncation process in order to ensure that the produced estimates will be accurate enough. In this paper a new truncation process is developed that permits estimating, from a single set of MCS, the importance measure of any basic event with the desired accuracy level. The main contribution of this new method is an MCS-wise truncation criterion involving two thresholds: an absolute threshold in addition to a new relative threshold concerning the potential probability of the MCS of interest. The method has been tested on a complete level 1 PSA model of a 900 MWe NPP developed by 'Electricite de France' (EDF), and the results presented in this paper indicate that, to reach the same accuracy level, the proposed method produces a set of MCS whose size is significantly reduced.
Learning from Friends: Measuring Influence in a Dyadic Computer Instructional Setting
DeLay, Dawn; Hartl, Amy C.; Laursen, Brett; Denner, Jill; Werner, Linda; Campe, Shannon; Ortiz, Eloy
2014-01-01
Data collected from partners in a dyadic instructional setting are, by definition, not statistically independent. As a consequence, conventional parametric statistical analyses of change and influence carry considerable risk of bias. In this article, we illustrate a strategy to overcome this obstacle: the longitudinal actor-partner interdependence…
Computer-assisted mammography in clinical practice: Another set of problems to solve
International Nuclear Information System (INIS)
Gale, A.G.; Roebuck, E.J.; Worthington, B.S.
1986-01-01
To be adopted in radiological practice, computer-assisted diagnosis must address a domain of realistic complexity and have a high performance in terms of speed and reliability. Use of a microcomputer-based system of mammographic diagnoses employing discriminant function analysis resulted in significantly fewer false-positive diagnoses while producing a similar level of correct diagnoses of cancer as normal reporting. Although such a system is a valuable teaching aid, its clinical use is constrained by the problems of unambiguously codifying descriptors, data entry time, and the tendency of radiologists to override predicted diagnoses which conflict with their own
A Method of Forming the Optimal Set of Disjoint Path in Computer Networks
Directory of Open Access Journals (Sweden)
As'ad Mahmoud As'ad ALNASER
2017-04-01
Full Text Available This work provides a short analysis of multipath routing algorithms. A modified algorithm for forming the maximum set of disjoint paths, taking their metrics into account, is proposed. Paths are optimized by reconfiguring them with an adjacent deadlocked path. Reconfigurations are performed within subgraphs that include only the vertices of the main path and of an adjacent deadlocked path. This reduces the search space for an optimal path and the time complexity of its construction.
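A minimal sketch of the underlying primitive, using NetworkX's max-flow-based node_disjoint_paths as a stand-in; the metric-aware reconfiguration with deadlocked paths described above is not reproduced, and the graph is an invented example:

    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([
        ('s', 'a', 1), ('a', 't', 1),
        ('s', 'b', 2), ('b', 't', 2),
        ('s', 'c', 5), ('c', 'b', 1),   # detour that cannot form a third disjoint path
    ])

    # Maximum set of node-disjoint s-t paths.
    paths = list(nx.node_disjoint_paths(G, 's', 't'))
    # Rank the disjoint paths by total metric, as the algorithm above does
    # when selecting among candidates.
    paths.sort(key=lambda p: nx.path_weight(G, p, weight='weight'))
    for p in paths:
        print(p, nx.path_weight(G, p, weight='weight'))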
Laadhari, Aymen; Saramito, Pierre; Misbah, Chaouqi
2014-01-01
The numerical simulation of the deformation of vesicle membranes under simple shear external fluid flow is considered in this paper. A new saddle-point approach is proposed for the imposition of the fluid incompressibility and the membrane inextensibility constraints, through Lagrange multipliers defined in the fluid and on the membrane respectively. Using a level set formulation, the problem is approximated by mixed finite elements combined with an automatic adaptive ...
An algorithm for computing the hull of the solution set of interval linear equations
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří
2011-01-01
Vol. 435, No. 2 (2011), pp. 193-201 ISSN 0024-3795 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords: interval linear equations * solution set * interval hull * algorithm * absolute value inequality Subject RIV: BA - General Mathematics Impact factor: 0.974, year: 2011
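For very small systems, the interval hull can be obtained by brute force over vertex systems (extremal solution components are attained at vertex matrices, a classical result due to Rohn); the sketch below illustrates this idea only, not the paper's absolute-value-inequality algorithm, and the system is invented:

    # Brute-force hull of {x : Ax = b, A in [Ac +/- dA], b in [bc +/- db]}.
    # Exponential in the problem size; for tiny demos only.
    import numpy as np
    from itertools import product

    Ac = np.array([[4.0, -1.0], [1.0, 3.0]]); dA = 0.1 * np.ones((2, 2))
    bc = np.array([1.0, 2.0]);                db = 0.2 * np.ones(2)

    lo = np.full(2, np.inf); hi = np.full(2, -np.inf)
    for sA in product((-1.0, 1.0), repeat=4):       # all vertex matrices of [A]
        for sb in product((-1.0, 1.0), repeat=2):   # all vertex vectors of [b]
            A = Ac + np.reshape(sA, (2, 2)) * dA
            b = bc + np.array(sb) * db
            x = np.linalg.solve(A, b)
            lo = np.minimum(lo, x); hi = np.maximum(hi, x)
    print('interval hull:', list(zip(lo, hi)))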
Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.
2007-03-01
Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
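A heavily simplified sketch of one propagating-shell update, with Otsu thresholding standing in for the paper's histogram analysis and morphological dilation/erosion defining the shell; all names and parameters are illustrative, not the authors' implementation:

    import numpy as np
    from scipy import ndimage
    from skimage.filters import threshold_otsu

    def dt_shell_step(image, mask, shell_width=5):
        # Thick shell (propagating shell) around the current front.
        dil = ndimage.binary_dilation(mask, iterations=shell_width)
        ero = ndimage.binary_erosion(mask, iterations=shell_width)
        shell = dil & ~ero
        t = threshold_otsu(image[shell])         # dynamic threshold from the shell
        new_mask = dil & (image >= t)            # drive the front toward the boundary
        obj = new_mask[shell].mean()             # object fraction inside the shell
        ratio = obj / max(1e-9, 1.0 - obj)       # stop propagating when ratio -> 1
        return new_mask, ratio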
International Nuclear Information System (INIS)
Dang, Tan; Mandarano, Giovanni
2006-01-01
There have been many different opinions over the efficacy of routinely incorporating liver-window settings in abdominal computed tomography (CT) scans. As a result, different clinical centres have varying protocols for incorporating liver-windows for abdominal CT scans. This investigation aims to explore and determine whether various clinical centres throughout Victoria use liver-window settings selectively or routinely and their justification for doing so. An additional purpose is to assess the benefits and rationale of liver-window settings in supplementing routine soft-tissue-windows for abdominal CT examinations by reviewing evidence-based studies. Surveys were sent to CT supervisors at various clinical centres, including private and public institutions. This achieved an overall response rate of 74 per cent. Results indicate that the majority of clinical centres throughout Victoria routinely incorporate liver-window settings for all abdominal CT examinations. Forty-four per cent (11/25) of respondents stated that they utilise liver-window settings selectively for abdominal CT examinations. Most of these respondents (7/11 = 63 per cent) believed that soft-tissue-window settings alone are adequate to demonstrate hepatic lesions, particularly if intravenous contrast medium is used and the liver is captured in the arterial, venous and/or delayed phases. The benefits and rationale of incorporating liver-window settings for all abdominal computed tomography scans have been questioned by two well-noted studies in the United States. These evidence-based studies suggest that such additional settings do not offer further advantages in detecting hepatic disease, when compared to soft-tissue-windows alone. Review of the available literature provides additional evidence suggesting that the routine use of liver-window settings in conjunction with soft-tissue-windows offers no further advantages in the detection of hepatic diseases. This investigation found, however
Directory of Open Access Journals (Sweden)
Azad Ali
2016-05-01
Full Text Available The most common course delivery model is based on the teacher (knowledge provider) - student (knowledge receiver) relationship. The most visible symptom of this situation is an over-reliance on textbook tutorials. This traditional delivery model reduces teacher flexibility, causes lack of interest among students, and often makes classes boring. This is especially visible when teaching Computer Literacy courses. Instead, the authors of this paper suggest a new active model based on MS Office simulation. The proposed model is discussed within the framework of three activities: guided software simulation, instructor-led activities, and self-directed learning activities. The proposed model of active teaching based on software simulation proved more effective than the traditional one.
Monotonic Set-Extended Prefix Rewriting and Verification of Recursive Ping-Pong Protocols
DEFF Research Database (Denmark)
Delzanno, Giorgio; Esparza, Javier; Srba, Jiri
2006-01-01
of messages) some verification problems become decidable. In particular we give an algorithm to decide control state reachability, a problem related to security properties like secrecy and authenticity. The proof is via a reduction to a new prefix rewriting model called Monotonic Set-extended Prefix rewriting...
International Nuclear Information System (INIS)
Torres, Mirta B.; Dominguez, Dany S.
2013-01-01
Accelerator-driven systems (ADS), nuclear fission devices coupled to particle accelerators, are being widely studied. These devices have several applications, including nuclear waste transmutation and hydrogen production, both applications with strong social and environmental impact. The essence of this work was to model an ADS geometry composed of small TRISO fuel particles loaded with a mixture of uranium MOX and thorium, with a uranium spallation target, using probabilistic computational modelling methods, in particular the MCNPX 2.6e program, to evaluate the physical characteristics of the device and its transmutation capability. As a result of the characterization of the spallation target, it can be concluded that the production of neutrons per incident proton increases with increasing dimensions of the spallation target (thickness and radius) until the maximum production of neutrons per incident proton, the so-called saturation region, is reached. The results obtained in modelling the pebble-bed ADS device with respect to the isotopic variation of the plutonium isotopes and minor actinides considered in the analysis revealed that the accumulated mass of the plutonium isotopes and minor actinides increases for the subcritical configuration considered. In the particular case of the isotope 239Pu, a reduction in mass is observed from a burn time of 99 days. Increasing the power of the core, considering tungsten and lead spallation targets, is among the key future developments of this work
Parameter set for computer-assisted texture analysis of fetal brain.
Gentillon, Hugues; Stefańczyk, Ludomir; Strzelecki, Michał; Respondek-Liberska, Maria
2016-11-25
Magnetic resonance data were collected from a diverse population of gravid women to objectively compare the quality of 1.5-tesla (1.5 T) versus 3-T magnetic resonance imaging of the developing human brain. MaZda and B11 computational-visual cognition tools were used to process 2D images. We proposed a wavelet-based parameter and two novel histogram-based parameters for Fisher texture analysis in three-dimensional space. Wavenhl, focus index, and dispersion index revealed better quality for 3 T. Though both 1.5 and 3 T images were 16-bit DICOM encoded, nearly 16 and 12 usable bits were measured in 3 and 1.5 T images, respectively. The four-bit padding observed in 1.5 T K-space encoding mimics noise by adding illusionistic details, which are not really part of the image. In contrast, zero-bit padding in 3 T provides space for storing more details and increases the likelihood of noise, but also of edges, which in turn are crucial for differentiation of closely related anatomical structures. Both encoding modes are possible with both units, but higher 3 T resolution is the main difference. It contributes to higher perceived and available dynamic range. Apart from a surprisingly larger Fisher coefficient, no significant difference was observed when testing was conducted with down-converted 8-bit BMP images.
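A hedged way to measure "usable bits" of the kind reported above is to inspect which intensity bits actually vary across an image; the sketch below fabricates a 16-bit image with four padded low-order bits to mimic the described 1.5 T behaviour (all values illustrative):

    import numpy as np

    def usable_bits(img):
        levels = np.unique(img.astype(np.int64))
        # Bits that vary across the observed gray levels; padded bits stay constant.
        varying = np.bitwise_or.reduce(levels) & ~np.bitwise_and.reduce(levels)
        return np.log2(len(levels)), bin(varying)

    # 12 bits of signal shifted into the high-order bits of 16-bit storage.
    img = np.random.default_rng(1).integers(0, 4096, (64, 64)) << 4
    print(usable_bits(img))   # roughly 12 varying bits despite 16-bit encoding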
Computer program to fit a hyperellipse to a set of phase-space points in as many as six dimensions
International Nuclear Information System (INIS)
Wadlinger, E.A.
1980-03-01
A computer program that will fit a hyperellipse to a set of phase-space points in as many as 6 dimensions was written and tested. The weight assigned to the phase-space points can be varied as a function of their distance from the centroid of the distribution. Varying the weight enables determination of whether there is a difference in ellipse orientation between inner and outer particles. This program should be useful in studying the effects of longitudinal and transverse phase-space couplings
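One plausible reading of such a fit (a sketch, not the original program) is a weighted covariance ellipsoid in 6-D phase space, with a distance-dependent weight so that inner and outer particles can be emphasized differently, as the report describes; the weighting form and parameters are assumptions:

    import numpy as np

    def fit_hyperellipse(pts, power=0.0):
        c0 = pts.mean(axis=0)
        r = np.linalg.norm(pts - c0, axis=1)
        w = 1.0 / (1.0 + r)**power              # power=0: uniform weights
        c = np.average(pts, axis=0, weights=w)
        cov = np.cov((pts - c).T, aweights=w)
        # Ellipsoid: (x - c)^T cov^{-1} (x - c) = const
        return c, cov

    pts = np.random.default_rng(2).normal(size=(1000, 6))
    for p in (0.0, 2.0):                        # compare inner vs outer emphasis
        c, cov = fit_hyperellipse(pts, power=p)
        print(p, np.round(np.linalg.eigvalsh(cov)[:2], 3))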
International Nuclear Information System (INIS)
Montan, D.N.
1987-02-01
This report is intended to describe, document and provide instructions for the use of new versions of a set of computer programs commonly referred to as the PLUS family. These programs were originally designed to numerically evaluate simple analytical solutions of the diffusion equation. The new versions include linear thermo-elastic effects from thermal fields calculated by the diffusion equation. After the older versions of the PLUS family were documented a year ago, it was realized that the techniques employed in the programs were well suited to the addition of linear thermo-elastic phenomena. This has been implemented and this report describes the additions. 3 refs., 14 figs
Computed Tomographic Window Setting for Bronchial Measurement to Guide Double-Lumen Tube Size.
Seo, Jeong-Hwa; Bae, Jinyoung; Paik, Hyesun; Koo, Chang-Hoon; Bahk, Jae-Hyon
2018-04-01
The bronchial diameter measured on computed tomography (CT) can be used to guide double-lumen tube (DLT) sizes objectively. The bronchus is known to be measured most accurately in the so-called bronchial CT window. The authors investigated whether using the bronchial window results in the selection of more appropriately sized DLTs than using the other windows. CT image analysis and prospective randomized study. Tertiary hospital. Adults receiving left-sided DLTs. The authors simulated selection of DLT sizes based on the left bronchial diameters measured in the lung (width 1,500 Hounsfield units [HU], level -700 HU), bronchial (width 1,000 HU, level -450 HU), and mediastinal (width 400 HU, level 25 HU) CT windows. Furthermore, patients were randomly assigned to undergo imaging with either the bronchial or mediastinal window to guide DLT sizes. Using the underwater seal technique, the authors assessed whether the DLT was appropriately sized, undersized, or oversized for the patient. On 130 CT images, the bronchial diameter (9.9 ± 1.2 mm v 10.5 ± 1.3 mm v 11.7 ± 1.3 mm) and the selected DLT size differed among the lung, bronchial, and mediastinal windows, respectively. In the prospective study, oversized tubes were chosen less frequently in the bronchial window than in the mediastinal window (6/110 v 23/111; risk ratio 0.38; 95% CI 0.19-0.79; p = 0.003). No tubes were undersized after measurements in these two windows. The bronchial measurement in the bronchial window guided more appropriately sized DLTs compared with the lung or mediastinal windows. Copyright © 2017 Elsevier Inc. All rights reserved.
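For reference, a CT window (width W, level L) simply maps the HU range [L - W/2, L + W/2] onto the displayed gray scale; a minimal sketch using the three window settings from the study:

    import numpy as np

    def apply_window(hu, width, level):
        lo, hi = level - width / 2.0, level + width / 2.0
        return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)   # 0..1 displayed gray

    hu = np.array([-1000, -700, -450, 0, 40])
    for name, (w, l) in {'lung': (1500, -700),
                         'bronchial': (1000, -450),
                         'mediastinal': (400, 25)}.items():
        print(name, np.round(apply_window(hu, w, l), 2))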
Cohall, Alwyn T; Dini, Sheila; Senathirajah, Yalini; Nye, Andrea; Neu, Natalie; Powell, Donald; Powell, Borris; Hyden, Christel
2008-01-01
Significant advances in the treatment of human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) place a premium on early detection and linkage to care. Recognizing the need to efficiently yet comprehensively provide HIV counseling, we assessed the feasibility of using audio computer-assisted self-inventory (A-CASI) in a community-based HIV counseling and testing facility. A convenience sample of 50 adults presenting for HIV testing was recruited to complete an 85-item computerized HIV Assessment of Risk Inventory (HARI) containing domains of demographics, sexual behaviors, alcohol and substance use, emotional well-being, past experiences with HIV testing, and attitudes about taking HARI. Client acceptance rate was limited by the completion time outlined during the intake process. However, the majority of respondents who completed HARI felt that it took only a short to moderate time to complete and was easy to understand. A majority also reported a preference for using a computerized format in the future. Further, HARI identified a number of risk-taking behaviors, including unprotected anal sex and substance use prior to past sexual encounters. Additionally, more than half of the sample reported moderate to severe depressive symptoms. Those respondents who had time to complete the survey accepted the A-CASI interview, and it was successful at identifying a substantial level of risk-taking behaviors. A-CASI has the potential to guide HIV counselors in providing risk-reduction counseling and referral activities. However, results suggested the need to shorten the instrument, and further studies are needed to determine applicability in other HIV testing sites.
Varandas, António J. C.
2018-04-01
Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
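As a concrete instance of the scheme families reviewed, the widely used two-point X^-3 extrapolation of correlation energies can be written in a few lines; the sample energies below are invented for illustration:

    # Two-point extrapolation assuming E_X = E_CBS + A / X^3 for cardinal
    # numbers X (e.g. 3 for cc-pVTZ, 4 for cc-pVQZ).
    def cbs_two_point(e_x, x, e_y, y):
        return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

    e_tz, e_qz = -0.35021, -0.36133   # hypothetical TZ/QZ correlation energies (Eh)
    print(cbs_two_point(e_qz, 4, e_tz, 3))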
International Nuclear Information System (INIS)
Kishimoto, Junichi; Sakou, Toshio; Ohta, Yasutoshi
2013-01-01
The aim of this study was to estimate the tube current on a cardiac computed tomography (CT) from a plain chest CT using CT-automatic exposure control (CT-AEC), to obtain consistent image noise, and to optimize the scan tube current by individualizing the tube current. Sixty-five patients (Group A) underwent cardiac CT at fixed tube current. The mAs value for plain chest CT using CT-AEC (AEC value) and cardiac CT image noise were measured. The tube current needed to obtain the intended level of image noise in the cardiac CT was determined from their correlation. Another 65 patients (Group B) underwent cardiac CT with tube currents individually determined from the AEC value. Image noise was compared among Group A and B. Image noise of cardiac CT in Group B was 24.4±3.1 Hounsfield unit (HU) and was more uniform than in Group A (21.2±6.1 HU). The error with the desired image noise of 25 HU was lower in Group B (2.4%) than in Group A (15.2%). Individualized tube current selection based on AEC value thus provided consistent image noise and a scan tube current optimized for cardiac CT. (author)
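A minimal sketch of the individualization step, assuming the common noise ~ 1/sqrt(mAs) scaling and a linear AEC-to-noise calibration of the kind the study measures; all numbers are illustrative, not the study's data:

    import numpy as np

    # Group-A-style calibration data (invented values).
    aec_mas = np.array([55.0, 80.0, 120.0, 150.0])   # AEC values, plain chest CT
    noise = np.array([18.9, 22.6, 27.4, 30.8])       # cardiac noise at a fixed 300 mAs

    a, b = np.polyfit(aec_mas, noise, 1)             # noise ~ a * AEC + b

    def tube_current(aec, target=25.0, fixed_mas=300.0):
        predicted = a * aec + b                      # expected noise at fixed_mas
        return fixed_mas * (predicted / target)**2   # invert noise ~ 1/sqrt(mAs)

    print(round(tube_current(100.0)))                # individualized mAs for 25 HU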
P. MacBride
The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...
Predicting Host Level Reachability via Static Analysis of Routing Protocol Configuration
2007-09-01
Directory of Open Access Journals (Sweden)
K. Ide
2002-01-01
Full Text Available In this paper we develop analytical and numerical methods for finding special hyperbolic trajectories that govern the geometry of Lagrangian structures in time-dependent vector fields. The vector fields (or velocity fields) may have arbitrary time dependence and be realized only as data sets over finite time intervals, where space and time are discretized. While the notion of a hyperbolic trajectory is central to dynamical systems theory, much of the theoretical development for Lagrangian transport proceeds under the assumption that such a special hyperbolic trajectory exists. This brings in new mathematical issues that must be addressed in order for Lagrangian transport theory to be applicable in practice, i.e. how to determine whether or not such a trajectory exists and, if it does exist, how to identify it in a sequence of instantaneous velocity fields. We address these issues by developing the notion of a distinguished hyperbolic trajectory (DHT). We develop existence criteria for certain classes of DHTs in general time-dependent velocity fields, based on the time evolution of Eulerian structures that are observed in individual instantaneous fields over the entire time interval of the data set. We demonstrate the concept of DHTs in inhomogeneous (or "forced") time-dependent linear systems and develop a theory and analytical formula for computing DHTs. Throughout this work the notion of linearization is very important. This is not surprising since hyperbolicity is a "linearized" notion. To extend the analytical formula to more general nonlinear time-dependent velocity fields, we develop a series of coordinate transforms including a type of linearization that is not typically used in dynamical systems theory. We refer to it as Eulerian linearization, which is related to the frame independence of DHTs, as opposed to the Lagrangian linearization, which is typical in dynamical systems theory and is used in the computation of Lyapunov exponents. We
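For a forced linear system with hyperbolic matrix A, the DHT coincides with the unique bounded solution, which has a standard closed form (a sketch consistent with, but not necessarily identical to, the paper's formula; P_s and P_u denote the spectral projectors onto the stable and unstable subspaces of A, with P_s + P_u = I):

    \dot{x} = A\,x + f(t), \qquad
    x_{\mathrm{DHT}}(t)
      = \int_{-\infty}^{t} e^{A(t-s)} P_s\, f(s)\, ds
      - \int_{t}^{\infty} e^{A(t-s)} P_u\, f(s)\, ds .

The stable part integrates the forcing forward from the infinite past and the unstable part backward from the infinite future, which is what keeps the trajectory bounded for all time.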
Directory of Open Access Journals (Sweden)
Hassan Khassehkhan
2016-09-01
Full Text Available We study a previously introduced mathematical model of amensalistic control of the foodborne pathogen Listeria monocytogenes by the generally regarded as safe lactic acid bacteria Lactococcus lactis in a chemostat setting under nutrient rich growth conditions. The control agent produces lactic acids and thus affects pH in the environment such that it becomes detrimental to the pathogen while it is much more tolerant to these self-inflicted environmental changes itself. The mathematical model consists of five nonlinear ordinary differential equations for both bacterial species, the concentration of lactic acids, the pH and malate. The model is algebraically too involved to allow a comprehensive, rigorous qualitative analysis. Therefore, we conduct a computational study. Our results imply that depending on the growth characteristics of the medium in which the bacteria are cultured, the pathogen can survive in an intermediate flow regime but will be eradicated for slower flow rates and washed out for higher flow rates.
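A structural sketch of such a chemostat computation (three of the five state variables, with invented kinetics; it mimics only the dilution-plus-growth/acid-production structure, not the actual model terms or parameters):

    import numpy as np
    from scipy.integrate import solve_ivp

    D, mu_L, mu_P, k_acid, K = 0.1, 0.5, 0.45, 0.3, 2.0   # illustrative parameters

    def rhs(t, y):
        Ll, Lm, acid = y                        # L. lactis, L. monocytogenes, lactic acid
        stress = 1.0 / (1.0 + (acid / K)**2)    # acid/pH stress, harsher on the pathogen
        return [mu_L * Ll * (1.0 - Ll) - D * Ll,
                mu_P * stress**2 * Lm * (1.0 - Lm) - D * Lm,
                k_acid * Ll - D * acid]

    sol = solve_ivp(rhs, (0.0, 200.0), [0.05, 0.05, 0.0])
    print(np.round(sol.y[:, -1], 4))            # pathogen declines as acid accumulates

Raising D in this toy version washes both species out, while lowering it lets acid accumulate and eradicate the pathogen, mirroring the flow-regime dependence reported above.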
International Nuclear Information System (INIS)
Ayoola, E.O.
2004-05-01
We prove that a multifunction associated with the set of solutions of Lipschitzian quantum stochastic differential inclusion (QSDI) admits a selection continuous from some subsets of complex numbers to the space of the matrix elements of adapted weakly absolutely continuous quantum stochastic processes. In particular, we show that the solution set map as well as the reachable set of the QSDI admit some continuous representations. (author)
Directory of Open Access Journals (Sweden)
Luís Ronan Marquez Ferreira de Souza
2007-03-01
Full Text Available CONTEXT AND OBJECTIVE: Recent studies have shown noncontrast computed tomography (NCT to be more effective than ultrasound (US for imaging acute ureterolithiasis. However, to our knowledge, there are few studies directly comparing these techniques in an emergency teaching hospital setting. The objectives of this study were to compare the diagnostic accuracy of US and NCT performed by senior radiology residents for diagnosing acute ureterolithiasis; and to assess interobserver agreement on tomography interpretations by residents and experienced abdominal radiologists. DESIGN AND SETTING: Prospective study of 52 consecutive patients, who underwent both US and NCT within an interval of eight hours, at Hospital São Paulo. METHODS: US scans were performed by senior residents and read by experienced radiologists. NCT scan images were read by senior residents, and subsequently by three abdominal radiologists. The interobserver variability was assessed using the kappa statistic. RESULTS: Ureteral calculi were found in 40 out of 52 patients (77%. US presented sensitivity of 22% and specificity of 100%. When collecting system dilatation was associated, US demonstrated 73% sensitivity, 82% specificity. The interobserver agreement in NCT analysis was very high with regard to identification of calculi, collecting system dilatation and stranding of perinephric fat. CONCLUSIONS: US has limited value for identifying ureteral calculi in comparison with NCT, even when collecting system dilatation is present. Residents and abdominal radiologists demonstrated excellent agreement rates for ureteral calculi, identification of collecting system dilatation and stranding of perinephric fat on NCT.
Bieberle, M; Hampel, U
2015-06-13
Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Sahi, Kamal; Jackson, Stuart; Wiebe, Edward; Armstrong, Gavin; Winters, Sean; Moore, Ronald; Low, Gavin
2014-02-01
To assess if "liver window" settings improve the conspicuity of small renal cell carcinomas (RCC). Patients were analysed from our institution's pathology-confirmed RCC database that included the following: (1) stage T1a RCCs, (2) an unenhanced computed tomography (CT) abdomen performed ≤ 6 months before histologic diagnosis, and (3) age ≥ 17 years. Patients with multiple tumours, prior nephrectomy, von Hippel-Lindau disease, and polycystic kidney disease were excluded. The unenhanced CT was analysed, and the tumour locations were confirmed by using corresponding contrast-enhanced CT or magnetic resonance imaging studies. Representative single-slice axial, coronal, and sagittal unenhanced CT images were acquired in "soft tissue windows" (width, 400 Hounsfield unit (HU); level, 40 HU) and liver windows (width, 150 HU; level, 88 HU). In addition, single-slice axial, coronal, and sagittal unenhanced CT images of nontumourous renal tissue (obtained from the same cases) were acquired in soft tissue windows and liver windows. These data sets were randomized, unpaired, and were presented independently to 3 blinded radiologists for analysis. The presence or absence of suspicious findings for tumour was scored on a 5-point confidence scale. Eighty-three of 415 patients met the study criteria. Receiver operating characteristics (ROC) analysis, t test analysis, and kappa analysis were used. ROC analysis showed statistically superior diagnostic performance for liver windows compared with soft tissue windows (area under the curve of 0.923 vs 0.879; P = .0002). Kappa statistics showed "good" vs "moderate" agreement between readers for liver windows compared with soft tissue windows. Use of liver windows settings improves the detection of small RCCs on the unenhanced CT. Copyright © 2014 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
Sakao, Yukinori; Kuroda, Hiroaki; Mun, Mingyon; Uehara, Hirofumi; Motoi, Noriko; Ishikawa, Yuichi; Nakagawa, Ken; Okumura, Sakae
2014-01-01
Background We aimed to clarify that the size of the lung adenocarcinoma evaluated using mediastinal window on computed tomography is an important and useful modality for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinoma. Methods We evaluated 176 patients with small lung adenocarcinomas (diameter, 1–3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin section conditions (1.25 mm thick on high-resolution computed tomography) with tumour dimensions evaluated under two settings: lung window and mediastinal window. We also determined the patient age, gender, preoperative nodal status, tumour size, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and pathological status (lymphatic vessel, vascular vessel or pleural invasion). Recurrence-free survival was used for prognosis. Results Lung window, mediastinal window, tumour disappearance ratio and preoperative nodal status were significant predictive factors for recurrence-free survival in univariate analyses. Areas under the receiver operator curves for recurrence were 0.76, 0.73 and 0.65 for mediastinal window, tumour disappearance ratio and lung window, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant predictive factors for lymph node metastasis in univariate analyses; areas under the receiver operator curves were 0.61, 0.76, 0.72 and 0.66, for lung window, mediastinal window, tumour disappearance ratio and preoperative serum carcinoembryonic antigen levels, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant factors for lymphatic vessel, vascular vessel or pleural invasion in univariate analyses; areas under the receiver operator curves were 0.60, 0.81, 0
Matthias Kasemann
Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...
Lohmann, Johannes; Schroeder, Philipp A; Nuerk, Hans-Christoph; Plewnia, Christian; Butz, Martin V
2018-01-01
Spatial, physical, and semantic magnitude dimensions can influence action decisions in human cognitive processing and interact with each other. For example, in the spatial-numerical associations of response code (SNARC) effect, semantic numerical magnitude facilitates left-hand or right-hand responding dependent on the small or large magnitude of number symbols. SNARC-like interactions of numerical magnitudes with the radial spatial dimension (depth) were postulated from early on. Usually, the SNARC effect in any direction is investigated using fronto-parallel computer monitors for presentation of stimuli. In such 2D setups, however, the metaphorical and literal interpretation of the radial depth axis with seemingly close/far stimuli or responses are not distinct. Hence, it is difficult to draw clear conclusions with respect to the contribution of different spatial mappings to the SNARC effect. In order to disentangle the different mappings in a natural way, we studied parametrical interactions between semantic numerical magnitude, horizontal directional responses, and perceptual distance by means of stereoscopic depth in an immersive virtual reality (VR). Two VR experiments show horizontal SNARC effects across all spatial displacements in traditional latency measures and kinematic response parameters. No indications of a SNARC effect along the depth axis, as it would be predicted by a direct mapping account, were observed, but the results show a non-linear relationship between horizontal SNARC slopes and physical distance. Steepest SNARC slopes were observed for digits presented close to the hands. We conclude that spatial-numerical processing is susceptible to effector-based processes but relatively resilient to task-irrelevant variations of radial-spatial magnitudes.
Longoni, Gianluca
In the nuclear science and engineering field, radiation transport calculations play a key-role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (S N) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing time require the use of supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angular dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles with a capability for local angular refinement have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angular dependent problems, such as CT-Scan devices, that are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain
Song, Lei; Gao, Jungang; Wang, Sheng; Hu, Huasi; Guo, Youmin
2017-01-01
Estimation of the pleural effusion's volume is an important clinical issue. The existing methods cannot assess it accurately when there is a large volume of liquid in the pleural cavity and/or the patient has some other disease (e.g. pneumonia). In order to help solve this issue, the objective of this study is to develop and test a novel algorithm using B-spline and local clustering level set methods jointly, namely BLL. The BLL algorithm was applied to a dataset involving 27 pleural effusions detected on chest CT examination of 18 adult patients with the presence of free pleural effusion. Study results showed that average volumes of pleural effusion computed using the BLL algorithm and assessed manually by the physicians were 586±339 ml and 604±352 ml, respectively. For the same patient, the volume of the pleural effusion, segmented semi-automatically, was 101.8%±4.6% of that segmented manually. Dice similarity was found to be 0.917±0.031. The study demonstrated feasibility of applying the new BLL algorithm to accurately measure the volume of pleural effusion.
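The Dice similarity used for this validation is straightforward to compute; a minimal sketch with a toy pair of masks:

    import numpy as np

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    auto = np.zeros((8, 8), bool); auto[2:6, 2:6] = True      # automatic mask
    manual = np.zeros((8, 8), bool); manual[3:7, 2:6] = True  # manual mask
    print(round(dice(auto, manual), 3))                       # 0.75 for this toy pair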
International Nuclear Information System (INIS)
Wittenberg, Rianne; Peters, Joost F.; Sonnemans, Jeroen J.; Prokop, Mathias; Schaefer-Prokop, Cornelia M.
2010-01-01
The purpose of the study was to assess the stand-alone performance of computer-assisted detection (CAD) for evaluation of pulmonary CT angiograms (CTPA) performed in an on-call setting. In this institutional review board-approved study, we retrospectively included 292 consecutive CTPA performed during night shifts and weekends over a period of 16 months. Original reports were compared with a dedicated CAD system for pulmonary emboli (PE). A reference standard for the presence of PE was established using independent evaluation by two readers and consultation of a third experienced radiologist in discordant cases. Original reports had described 225 negative studies and 67 positive studies for PE. CAD found PE in seven patients originally reported as negative but identified by independent evaluation: emboli were located in segmental (n = 2) and subsegmental arteries (n = 5). The negative predictive value (NPV) of the CAD algorithm was 92% (44/48). On average there were 4.7 false positives (FP) per examination (median 2, range 0-42). In 72% of studies ≤5 FP were found, 13% of studies had ≥10 FP. CAD identified small emboli originally missed under clinical conditions and found 93% of the isolated subsegmental emboli. On average there were 4.7 FP per examination. (orig.)
Nagao, Chioko; Izako, Nozomi; Soga, Shinji; Khan, Samia Haseeb; Kawabata, Shigeki; Shirai, Hiroki; Mizuguchi, Kenji
2012-10-01
Proteins interact with different partners to perform different functions and it is important to elucidate the determinants of partner specificity in protein complex formation. Although methods for detecting specificity determining positions have been developed previously, direct experimental evidence for these amino acid residues is scarce, and the lack of information has prevented further computational studies. In this article, we constructed a dataset that is likely to exhibit specificity in protein complex formation, based on available crystal structures and several intuitive ideas about interaction profiles and functional subclasses. We then defined a "structure-based specificity determining position (sbSDP)" as a set of equivalent residues in a protein family showing a large variation in their interaction energy with different partners. We investigated sequence and structural features of sbSDPs and demonstrated that their amino acid propensities significantly differed from those of other interacting residues and that the importance of many of these residues for determining specificity had been verified experimentally. Copyright © 2012 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Lim, Sin Liang; Koo, Voon Chet; Daya Sagar, B.S.
2009-01-01
Multiscale convexity analysis of certain fractal binary objects-like 8-segment Koch quadric, Koch triadic, and random Koch quadric and triadic islands-is performed via (i) morphologic openings with respect to recursively changing the size of a template, and (ii) construction of convex hulls through half-plane closings. Based on scale vs convexity measure relationship, transition levels between the morphologic regimes are determined as crossover scales. These crossover scales are taken as the basis to segment binary fractal objects into various morphologically prominent zones. Each segmented zone is characterized through normalized morphologic complexity measures. Despite the fact that there is no notably significant relationship between the zone-wise complexity measures and fractal dimensions computed by conventional box counting method, fractal objects-whether they are generated deterministically or by introducing randomness-possess morphologically significant sub-zones with varied degrees of spatial complexities. Classification of realistic fractal sets and/or fields according to sub-zones possessing varied degrees of spatial complexities provides insight to explore links with the physical processes involved in the formation of fractal-like phenomena.
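A hedged sketch of the multiscale measurement: open the binary set with disks of growing radius and track a convexity measure (here, area of the opened set over the area of its convex hull, one plausible choice); breaks in the resulting curve play the role of the crossover scales described above. The noisy blob stands in for a fractal island:

    import numpy as np
    from skimage.morphology import binary_opening, disk, convex_hull_image

    def convexity_profile(obj, radii):
        out = []
        for r in radii:
            opened = binary_opening(obj, disk(r))   # opening at scale r
            if not opened.any():
                break
            out.append(opened.sum() / convex_hull_image(opened).sum())
        return out

    rng = np.random.default_rng(3)
    obj = rng.random((128, 128)) > 0.4              # stand-in binary object
    print(np.round(convexity_profile(obj, range(1, 6)), 3))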
Finley, Gail T.
1988-01-01
This report covers the study of the relational database implementation in the NASCAD computer program system. The existing system is used primarily for computer aided design. Attention is also directed to a hidden-surface algorithm for final drawing output.
Siegelmann-Danieli, Nava; Farkash, Ariel; Katzir, Itzhak; Vesterman Landes, Janet; Rotem Rabinovich, Hadas; Lomnicky, Yossef; Carmeli, Boaz; Parush-Shear-Yashuv, Naama
2016-01-01
Randomized clinical trials constitute the gold-standard for evaluating new anti-cancer therapies; however, real-life data are key in complementing clinically useful information. We developed a computational tool for real-life data analysis and applied it to the metastatic colorectal cancer (mCRC) setting. This tool addressed the impact of oncology/non-oncology parameters on treatment patterns and clinical outcomes. The developed tool enables extraction of any computerized information including comorbidities and use of drugs (oncological/non-oncological) per individual HMO member. The study in which we evaluated this tool was a retrospective cohort study that included Maccabi Healthcare Services members with mCRC receiving bevacizumab with fluoropyrimidines (FP), FP plus oxaliplatin (FP-O), or FP plus irinotecan (FP-I) in the first-line between 9/2006 and 12/2013. The analysis included 753 patients of whom 15.4% underwent subsequent metastasectomy (the Surgery group). For the entire cohort, median overall survival (OS) was 20.5 months; in the Surgery group, median duration of bevacizumab-containing therapy (DOT) pre-surgery was 6.1 months; median OS was not reached. In the Non-surgery group, median OS and DOT were 18.7 and 11.4 months, respectively; no significant OS differences were noted between FP-O and FP-I, whereas FP use was associated with shorter OS (12.3 months; p < 0.002); notably, these patients were older. Patients who received both FP-O- and FP-I-based regimens achieved numerically longer OS vs. those who received only one of these regimens (22.1 [19.9-24.0] vs. 18.9 [15.5-21.9] months). Among patients assessed for wild-type KRAS and treated with a subsequent anti-EGFR agent, OS was 25.4 months and 18.7 months for 124 treated vs. 37 non-treated patients (non-significant). Cox analysis (controlling for age and gender) identified several non-oncology parameters associated with poorer clinical outcomes including concurrent use of diuretics and proton-pump inhibitors. Our tool provided insights that confirmed/complemented information gained from randomized clinical trials. Prospective tool implementation is warranted.
Wirth, K; Zielinski, P; Trinter, T; Stahl, R; Mück, F; Reiser, M; Wirth, S
2016-08-01
In hospitals, the radiological services provided to non-privately insured in-house patients are mostly distributed to the requesting disciplines through internal cost allocation (ICA). In many institutions, computed tomography (CT) is the modality with the largest amount of allocation credits. The aim of this work is to compare the ICA with the respective DRG (Diagnosis Related Groups) shares for diagnostic CT services in a university hospital setting. The data from four CT scanners in a large university hospital were processed for the 2012 fiscal year. For each of the 50 DRG groups with the most case-mix points, all diagnostic CT services were documented, including their respective amount of GOÄ allocation credits and the invoiced ICA value. As the German Institute for Reimbursement of Hospitals (InEK) database groups the radiation disciplines (radiology, nuclear medicine and radiation therapy) together and also lacks any modality differentiation, the determination of the diagnostic CT component was based on the existing institutional distribution of ICA allocations. Within the included 24,854 cases, 63,062,060 GOÄ-based performance credits were counted. The ICA remunerated these diagnostic CT services with € 819,029 (single credit value of 1.30 Eurocent), whereas accounting by DRG shares would have resulted in € 1,127,591 (single credit value of 1.79 Eurocent). The GOÄ single credit value is 5.62 Eurocent. The diagnostic CT service was thus provided comparatively inexpensively. In addition to a better financial result, changing the current ICA to DRG shares might also mean a chance for real revenues. However, the attractiveness considerably depends on how the DRG shares are distributed among the different radiation disciplines of one institution.
International Nuclear Information System (INIS)
Hsu, Christina M. L.; Palmeri, Mark L.; Segars, W. Paul; Veress, Alexander I.; Dobbins, James T. III
2013-01-01
Purpose: The authors previously reported on a three-dimensional computer-generated breast phantom, based on empirical human image data, including a realistic finite-element based compression model that was capable of simulating multimodality imaging data. The computerized breast phantoms are a hybrid of two phantom generation techniques, combining empirical breast CT (bCT) data with flexible computer graphics techniques. However, to date, these phantoms have been based on single human subjects. In this paper, the authors report on a new method to generate multiple phantoms, simulating additional subjects from the limited set of original dedicated breast CT data. The authors developed an image morphing technique to construct new phantoms by gradually transitioning between two human subject datasets, with the potential to generate hundreds of additional pseudoindependent phantoms from the limited bCT cases. The authors conducted a preliminary subjective assessment with a limited number of observers (n = 4) to illustrate how realistic the simulated images generated with the pseudoindependent phantoms appeared. Methods: Several mesh-based geometric transformations were developed to generate distorted breast datasets from the original human subject data. Segmented bCT data from two different human subjects were used as the “base” and “target” for morphing. Several combinations of transformations were applied to morph between the “base” and “target” datasets such as changing the breast shape, rotating the glandular data, and changing the distribution of the glandular tissue. Following the morphing, regions of skin and fat were assigned to the morphed dataset in order to appropriately assign mechanical properties during the compression simulation. The resulting morphed breast was compressed using a finite element algorithm and simulated mammograms were generated using techniques described previously. Sixty-two simulated mammograms, generated from morphing
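A toy stand-in for the morphing step (the authors use mesh-based geometric transforms; interpolating signed distance transforms of two binary shapes is a common, simpler alternative shown here only for illustration):

    import numpy as np
    from scipy.ndimage import distance_transform_edt as edt

    def sdf(mask):
        # Signed distance: positive inside the shape, negative outside.
        return edt(mask) - edt(~mask)

    def morph(base, target, alpha):
        # Intermediate shape from a blend of the two signed distance fields.
        return (1 - alpha) * sdf(base) + alpha * sdf(target) > 0

    base = np.zeros((64, 64), bool); base[16:48, 10:30] = True     # "base" shape
    target = np.zeros((64, 64), bool); target[20:44, 30:58] = True # "target" shape
    for a in (0.0, 0.5, 1.0):
        print(a, morph(base, target, a).sum())   # area of each intermediate shape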
B. Koren (Barry); M.R. Lewis; E.H. van Brummelen (Harald); B. van Leer
2001-01-01
A finite-volume method is presented for the computation of compressible flows of two immiscible fluids at very different densities. The novel ingredient in the method is a two-fluid linearized Godunov scheme, allowing for flux computations in case of different fluids (e.g., water and
International Nuclear Information System (INIS)
Pathak, Pankaj; Kumar, Rajesh; Birbiya, Narendra; Mishra, Praveen Kumar; Singh, Manisha; Mishra, Pankaj Kumar
2017-01-01
To confirm the accuracy of the location of the fiducial markings in relation to the actual isocentre of the irradiated volume due to intra-fractional and set-up changes in cervical cancer, with the help of cone-beam computed tomography (CBCT)
Nikolopoulou, Kleopatra; Gialamas, Vasilis
2009-01-01
This paper discusses the development of an instrument to investigate pre-service early childhood teachers' views and intentions about integrating and using computers in early childhood settings. For the purpose of this study, a questionnaire was compiled and administered to 258 pre-service early childhood teachers (PECTs) in Greece. A…
International Nuclear Information System (INIS)
Li, X.L.
1993-01-01
Computation of three-dimensional (3-D) Rayleigh-Taylor instability in compressible fluids is performed on a MIMD computer. A second-order TVD scheme is applied with a fully parallelized algorithm to the 3-D Euler equations. The computational program is implemented for a 3-D study of bubble evolution in the Rayleigh-Taylor instability with varying bubble aspect ratio and for large-scale simulation of a 3-D random fluid interface. The numerical solution is compared with the experimental results by Taylor
Efficient One-click Browsing of Large Trajectory Sets
DEFF Research Database (Denmark)
Krogh, Benjamin Bjerre; Andersen, Ove; Lewis-Kelham, Edwin
2014-01-01
This paper presents a novel query type called sheaf, where users can browse trajectory data sets using a single mouse click. Sheaves are very versatile and can be used for location-based advertising, travel-time analysis, intersection analysis, and reachability analysis (isochrones). A novel in-memory trajectory index compresses the data by a factor of 12.4 and enables execution of sheaf queries in 40 ms. This is up to 2 orders of magnitude faster than existing work. We demonstrate the simplicity, versatility, and efficiency of sheaf queries using a real-world trajectory set consisting of 2.7 million...
Sakao, Yukinori; Kuroda, Hiroaki; Mun, Mingyon; Uehara, Hirofumi; Motoi, Noriko; Ishikawa, Yuichi; Nakagawa, Ken; Okumura, Sakae
2014-01-01
BACKGROUND: We aimed to clarify that the size of the lung adenocarcinoma evaluated using the mediastinal window on computed tomography is an important and useful predictor of invasiveness, lymph node metastasis and prognosis in small adenocarcinoma. METHODS: We evaluated 176 patients with small lung adenocarcinomas (diameter, 1-3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin-section conditions (1.25 mm thick on high-resolution ...
M. Kasemann
Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...
I. Fisk
2011-01-01
Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...
P. McBride
The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...
M. Kasemann
Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...
Unglert, Johannes; Hoekstra, Sipke; Jauregui Becker, Juan Manuel
2017-01-01
This paper describes research conducted in the context of an industrial case dealing with the design of reconfigurable cellular manufacturing systems. Reconfiguring such systems represents a complex task due to the interdependences between the constituent subsystems. A novel computational tool was
Collis, Betty; Margaryan, A.
2004-01-01
Business needs in many corporations call for learning outcomes that involve problem solutions, and creating and sharing new knowledge within workplace situations that may involve collaboration among members of a team. We argue that work-based activities (WBA) and computer-supported collaborative
I. Fisk
2013-01-01
Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...
Kostoulas, Theodoros; Chanel, Guillaume; Muszynski, Michal; Lombardo, Patrizia; Pun, Thierry
2017-01-01
Over recent years, affective computing has been strengthening its ties with the humanities, exploring and building an understanding of people’s responses to specific artistic multimedia stimuli. “Aesthetic experience” is acknowledged to be the subjective part of some artistic exposure, namely, the inner affective state of a person exposed to some artistic object. In this work, we describe ongoing research activities for studying the aesthetic experience of people when exposed to movie artistic...
Kurth, Ann E; Severynen, Anneleen; Spielberg, Freya
2013-08-01
HIV testing in emergency departments (EDs) remains underutilized. The authors evaluated a computer tool to facilitate rapid HIV testing in an urban ED. Nonacute adult ED patients were randomly assigned to a computer tool (CARE) and rapid HIV testing before a standard visit (n = 258) or to a standard visit (n = 259) with chart access. The authors assessed intervention acceptability and compared noted HIV risks. Participants were 56% non-White and 58% male; median age was 37 years. In the CARE arm, nearly all (251/258) of the patients completed the session and received HIV results; four declined to consent to the test. HIV risks were reported by 54% of users; one participant was confirmed HIV-positive, and two were confirmed false-positive (seroprevalence 0.4%, 95% CI [0.01, 2.2]). Half (55%) of the patients preferred computerized rather than face-to-face counseling for future HIV testing. In the standard arm, one HIV test and two referrals for testing occurred. Computer-facilitated HIV testing appears acceptable to ED patients. Future research should assess cost-effectiveness compared with staff-delivered approaches.
I. Fisk
2010-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...
M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley
Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...
Iriki, Atsushi
2016-03-01
“Language-READY brain” in the title of this article [1] seems to be the expression that the author prefers to use to illustrate his theoretical framework. The usage of the term “READY” appears to be of extremely deep connotation, for three reasons. Firstly, of course it needs a “principle” - the depth and the width of the computational theory depicted here are as expected from the author's reputation. However, “readiness” implies that it is much more than just “a theory”. That is, such a principle is not static; rather, it has dynamic properties, which are ready to gradually proceed to flourish once brains are put in adequate conditions to make time progressions - namely, evolution and development. So the second major connotation is that this article brought in the perspectives of comparative primatology as a tool to relativise the language-realizing human brains among other animal species, primates in particular, in the context of the evolutionary time scale. The tertiary connotation lies in the context of the developmental time scale. The author claims that it is the interaction of the newborn with its caretakers, namely its mother and other family or social members in its ecological conditions, that brings the brain mechanism subserving the language faculty to really mature to its final completion. Taken together, this article proposes computational theories and mechanisms of Evo-Devo-Eco interactions for language acquisition in the human brain.
Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter
2014-01-13
Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this particular property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
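A minimal sketch of the MFE-based P-value described above: given the interpolated mean and standard deviation of the MFE distribution for a candidate's nucleotide composition, the P-value is the normal left-tail probability of the observed MFE. The function name and example numbers below are illustrative, not taken from the paper.

```python
from scipy.stats import norm

def mfe_p_value(mfe, mu, sigma):
    """P-value of an observed minimal free energy (MFE).

    mu and sigma are the mean and standard deviation of the MFE
    distribution for randomized sequences of the same nucleotide
    composition; in the paper these are interpolated from
    pre-calculated distributions rather than recomputed per sequence.
    A low P-value means the folding is unusually stable, as expected
    for a pre-miRNA.
    """
    return norm.cdf(mfe, loc=mu, scale=sigma)

# Example: candidate hairpin with an MFE of -35 kcal/mol, where random
# sequences of the same composition fold at -25 +/- 4 kcal/mol.
print(mfe_p_value(-35.0, mu=-25.0, sigma=4.0))  # ~0.006
```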
P. McBride
It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...
M. Kasemann
Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions, and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...
I. Fisk
2011-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...
I. Fisk
2012-01-01
Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...
M. Kasemann
CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tape. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful tests prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...
I. Fisk
2010-01-01
Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...
International Nuclear Information System (INIS)
Mansuy, Guy
1973-01-01
This report presents simulation software which belongs to a set of software tools aimed at the design, analysis, test and tracing of electronic and logical assemblies. This software simulates operation in time, and considers the propagation of signals through the network elements, taking the delay created by each of them into account. The author presents some generalities (modules, description, library, simulation of a network as a function of time), proposes a general and then a detailed description of the software: data interpretation, processing of dynamic data and network simulation, display of results on a graphical workstation
Directory of Open Access Journals (Sweden)
Ali Dashti
This paper presents an implementation of brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large high-dimensional data clouds. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multiple levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs and bring a hitherto impossible k-NNG generation for a dataset of twenty million images with 15 k dimensionality into the realm of practical possibility.
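For illustration, a brute-force exact k-NNG can be written in a few lines of NumPy; the paper's contribution is executing exactly this all-pairs computation at GPU and cluster scale, which the CPU sketch below does not attempt.

```python
import numpy as np

def knn_graph(points, k):
    """Brute-force exact k-nearest-neighbor graph.

    points: (n, d) array. Returns an (n, k) array of neighbor indices
    per point, by exhaustive all-pairs Euclidean distance.
    """
    # Squared distances via |x - y|^2 = |x|^2 + |y|^2 - 2 x.y,
    # computed for all pairs at once.
    sq = np.sum(points ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T
    np.fill_diagonal(d2, np.inf)          # exclude self-neighbors
    return np.argsort(d2, axis=1)[:, :k]  # k closest per point

rng = np.random.default_rng(42)
pts = rng.random((1000, 16))
print(knn_graph(pts, k=5).shape)  # (1000, 5)
```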
International Nuclear Information System (INIS)
Feller, D.; Peterson, K.A.
1998-01-01
The Gaussian-2 (G2) collection of atoms and molecules has been studied with Hartree-Fock and correlated levels of theory, ranging from second-order perturbation theory to coupled cluster theory with noniterative inclusion of triple excitations. By exploiting the systematic convergence properties of the correlation consistent family of basis sets, complete basis set limits were estimated for a large number of the G2 energetic properties. Deviations with respect to experimentally derived energy differences corresponding to rigid molecules were obtained for 15 basis set/method combinations, as well as the estimated complete basis set limit. The latter values are necessary for establishing the intrinsic error for each method. In order to perform this analysis, the information generated in the present study was combined with the results of many previous benchmark studies in an electronic database, where it is available for use by other software tools. Such tools can assist users of electronic structure codes in making appropriate basis set and method choices that will increase the likelihood of achieving their accuracy goals without wasteful expenditures of computer resources. copyright 1998 American Institute of Physics
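The abstract does not fix a particular extrapolation formula, but a widely used two-point scheme for correlation energies with correlation consistent basis sets assumes E(n) = E_CBS + A/n^3 in the cardinal number n, which gives a closed-form CBS estimate; the sketch below uses hypothetical energies.

```python
def cbs_two_point(e_small, e_large, n_small, n_large):
    """Two-point complete-basis-set extrapolation of correlation energy.

    Uses the common E(n) = E_CBS + A / n**3 model for correlation
    consistent basis sets (n = cardinal number: 2 for cc-pVDZ,
    3 for cc-pVTZ, ...). This is one standard scheme; the study
    itself reports CBS limits without the abstract fixing a formula.
    """
    w_small, w_large = n_small ** 3, n_large ** 3
    return (w_large * e_large - w_small * e_small) / (w_large - w_small)

# Example with hypothetical MP2 correlation energies (hartree):
print(cbs_two_point(-0.300, -0.350, n_small=3, n_large=4))  # ~ -0.386
```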
Directory of Open Access Journals (Sweden)
Mehrdad Shahmohammadi Beni
2017-06-01
Cold plasmas have been proposed for the treatment of leukemia. In the present work, conceptual designs of mixing chambers that increased the contact between the two fluids (plasma and blood) through the addition of obstacles within rectangular-block-shaped chambers were proposed, and the dynamic mixing between the plasma and blood was studied using the level set method coupled with heat transfer. Enhancement of mixing between blood and plasma in the presence of obstacles was demonstrated. Continuous tracking of fluid mixing with determination of temperature distributions was enabled by the present model, which would be a useful tool for future development of cold plasma devices for the treatment of blood-related diseases such as leukemia.
Fisher, Jeffrey D; Amico, K Rivet; Fisher, William A; Cornman, Deborah H; Shuper, Paul A; Trayling, Cynthia; Redding, Caroline; Barta, William; Lemieux, Anthony F; Altice, Frederick L; Dieckhaus, Kevin; Friedland, Gerald
2011-11-01
We evaluated the efficacy of LifeWindows, a theory-based, computer-administered antiretroviral (ARV) therapy adherence support intervention, delivered to HIV + patients at routine clinical care visits. 594 HIV + adults receiving HIV care at five clinics were randomized to intervention or control arms. Intervention vs. control impact in the intent-to-treat sample (including participants whose ARVs had been entirely discontinued, who infrequently attended care, or infrequently used LifeWindows) did not reach significance. Intervention impact in the On Protocol sample (328 intervention and control arm participants whose ARVs were not discontinued, who attended care and were exposed to LifeWindows regularly) was significant. On Protocol intervention vs. control participants achieved significantly higher levels of perfect 3-day ACTG-assessed adherence over time, with sensitivity analyses maintaining this effect down to 70% adherence. This study supports the utility of LifeWindows and illustrates that patients on ARVs who persist in care at clinical care sites can benefit from adherence promotion software.
DEFF Research Database (Denmark)
Jensen, Margit Bak
2009-01-01
Half of the calves could ingest the milk in a minimum of 2 daily milk portions, whereas the other half could ingest the milk in 4 or more daily portions. Data were collected during 3 successive 14-d periods, the first period starting the day after introduction to the feeder at a minimum of 12 d of age. High-fed calves ingested their milk in 4.0 and 4.9 meals for a minimum of 2 and 4 portions, respectively, whereas low-fed calves ingested their milk in 2.4 and 4.4 meals for a minimum of 2 and 4 portions, respectively. Calves on a high milk allowance had fewer milk meals over time, whereas calves on a low milk allowance had the same number of milk meals throughout. Thus, the development from small and frequent milk meals to fewer and larger meals reported by studies of natural suckling was also found among high-fed calves on a computer-controlled milk feeder. Irrespective of the minimum number of milk portions, the low-fed calves had more unrewarded visits...
2010-01-01
Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...
Contributions from I. Fisk
2012-01-01
Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...
M. Kasemann
Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...
I. Fisk
2013-01-01
Computing activity has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites. Figure 1: MC production and processing was more in demand, with a peak of over 750 Million GEN-SIM events in a single month. Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks at close to 1.2 PB. Figure 3: The volume of data moved between CMS sites in the last six months The tape utilisation was a focus for the operation teams, with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...
I. Fisk
2012-01-01
Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...
International Nuclear Information System (INIS)
Ezhil, Muthuveni; Choi, Bum; Starkschall, George; Bucci, M. Kara; Vedam, Sastry; Balter, Peter
2008-01-01
Purpose: To compare three different methods of propagating the gross tumor volume (GTV) through the respiratory phases that constitute a four-dimensional computed tomography image data set. Methods and Materials: Four-dimensional computed tomography data sets of 20 patients who had undergone definitive hypofractionated radiotherapy to the lung were acquired. The GTV regions of interest (ROIs) were manually delineated on each phase of the four-dimensional computed tomography data set. The ROI from the end-expiration phase was propagated to the remaining nine phases of respiration using the following three techniques: (1) rigid-image registration using in-house software, (2) rigid-image registration using research software from a commercial radiotherapy planning system vendor, and (3) rigid-image registration followed by deformable adaptation originally intended for organ-at-risk delineation using the same software. The internal GTVs generated from the various propagation methods were compared with the manual internal GTV using the normalized Dice similarity coefficient (DSC) index. Results: The normalized DSC index of 1.01 ± 0.06 (SD) for rigid propagation using the in-house software program was identical to the normalized DSC index of 1.01 ± 0.06 for rigid propagation achieved with the vendor's research software. Adaptive propagation yielded poorer results, with a normalized DSC index of 0.89 ± 0.10 (paired t test, p < 0.001). Conclusion: Propagation of the GTV ROIs through the respiratory phases using rigid-body registration is an acceptable method within a 1-mm margin of uncertainty. The adaptive organ-at-risk propagation method was not applicable to propagating GTV ROIs, resulting in an unacceptable reduction of the volume and distortion of the ROIs.
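The normalization applied to the DSC index in this study is not detailed in the abstract, but the underlying Dice similarity coefficient between an automatically propagated and a manually drawn mask is straightforward to compute; below is a minimal sketch on toy binary masks.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    overlap = np.logical_and(a, b).sum()
    return 2.0 * overlap / (a.sum() + b.sum())

# Toy example: two overlapping "GTV" masks on a small grid.
auto = np.zeros((50, 50), bool); auto[10:30, 10:30] = True
manual = np.zeros((50, 50), bool); manual[12:32, 12:32] = True
print(round(dice(auto, manual), 3))  # 0.81
```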
Kamburoğlu, Kıvanç; Sönmez, Gül; Berktaş, Zeynep Serap; Kurt, Hakan; Özen, Doğukan
2017-01-01
Purpose The aim of this study was to assess the ex vivo diagnostic ability of 9 different cone-beam computed tomography (CBCT) settings in the detection of recurrent caries under amalgam restorations in primary teeth. Materials and Methods Fifty-two primary teeth were used. Twenty-six teeth had dentine caries and 26 teeth did not have dentine caries. Black class II cavities were prepared and restored with amalgam. In the 26 carious teeth, recurrent caries were left under restorations. The oth...
I. Fisk
2011-01-01
Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS and components installation are now deployed at CERN, adding to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...
Energy Technology Data Exchange (ETDEWEB)
Bamberg, Fabian; Abbara, Suhny; Schlett, Christopher L.; Cury, Ricardo C.; Truong, Quynh A.; Rogers, Ian S. [Cardiac MR PET CT Program, Department of Radiology and Division of Cardiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Nagurney, John T. [Department of Emergency Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Brady, Thomas J. [Cardiac MR PET CT Program, Department of Radiology and Division of Cardiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Hoffmann, Udo [Cardiac MR PET CT Program, Department of Radiology and Division of Cardiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)], E-mail: uhoffmann@partners.org
2010-04-15
Objective: We aimed to determine predictors of image quality in consecutive patients who underwent coronary computed tomography (CT) for the evaluation of acute chest pain. Method and materials: We prospectively enrolled patients who presented with chest pain to the emergency department. All subjects underwent contrast-enhanced 64-slice coronary multi-detector CT. Two experienced readers determined overall image quality on a per-patient basis and the prevalence and characteristics of non-evaluable coronary segments on a per-segment basis. Results: Among 378 subjects (143 women, age: 52.9 ± 11.8 years), 345 (91%) had acceptable overall image quality, while 33 (9%) had poor image quality or were unreadable. In adjusted analysis, patients with diabetes, hypertension and a higher heart rate during the scan were more likely to have exams graded as poor or unreadable (odds ratio [OR]: 2.94, p = 0.02; OR: 2.62, p = 0.03; OR: 1.43, p = 0.02; respectively). Of 6253 coronary segments, 257 (4%) were non-evaluable, most due to severe calcification in combination with motion (35%). The presence of non-evaluable coronary segments was associated with age (OR: 1.08 annually, 95%-confidence interval [CI]: 1.05-1.12, p < 0.001), baseline heart rate (OR: 1.35 per 10 beats/min, 95%-CI: 1.11-1.67, p = 0.003), diabetes, hypertension, and history of coronary artery disease (OR: 4.43, 95%-CI: 1.93-10.17, p < 0.001; OR: 2.27, 95%-CI: 1.01-4.73, p = 0.03; OR: 5.12, 95%-CI: 2.0-13.06, p < 0.001; respectively). Conclusion: Coronary CT permits acceptable image quality in more than 90% of patients with chest pain. Patients with multiple risk factors are more likely to have impaired image quality or non-evaluable coronary segments. These patients may require careful patient preparation and optimization of CT scanning protocols.
Energy Technology Data Exchange (ETDEWEB)
Zhang, JY [Cancer Hospital of Shantou University Medical College, Shantou, Guangdong (China); Hong, DL [The First Affiliated Hospital of Shantou University Medical College, Shantou, Guangdong (China)
2016-06-15
Purpose: The purpose of this study is to investigate the patient set-up error and interfraction target coverage in cervical cancer using image-guided adaptive radiotherapy (IGART) with cone-beam computed tomography (CBCT). Methods: Twenty cervical cancer patients undergoing intensity modulated radiotherapy (IMRT) were randomly selected. All patients were matched to the isocenter using lasers with the skin markers. Three-dimensional CBCT projections were acquired by the Varian TrueBeam treatment system. Set-up errors were evaluated by radiation oncologists after CBCT correction. The clinical target volume (CTV) was delineated on each CBCT, and the planning target volume (PTV) coverage of each CBCT-CTV was analyzed. Results: A total of 152 CBCT scans were acquired from twenty cervical cancer patients; the mean set-up errors in the longitudinal, vertical, and lateral directions were 3.57, 2.74 and 2.5 mm, respectively, without CBCT corrections. After corrections, these decreased to 1.83, 1.44 and 0.97 mm. For the target coverage, CBCT-CTV coverage without CBCT correction was 94% (143/152), and 98% (149/152) with correction. Conclusion: Use of CBCT verification to measure patient set-up errors could be applied to improve treatment accuracy. In addition, the set-up error corrections significantly improve the CTV coverage for cervical cancer patients.
Energy Technology Data Exchange (ETDEWEB)
Kamburoglu, Kivanc; Sonmez, Gul; Kurt, Hakan; Berktas, Zeynep Serap [Dept. of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara (Turkey); Ozen, Dogukan [Dept. of Biostatistics, Faculty of Veterinary Medicine, Ankara University, Ankara (Turkey)
2017-06-15
The aim of this study was to assess the ex vivo diagnostic ability of 9 different cone-beam computed tomography (CBCT) settings in the detection of recurrent caries under amalgam restorations in primary teeth. Fifty-two primary teeth were used. Twenty-six teeth had dentine caries and 26 teeth did not have dentine caries. Black class II cavities were prepared and restored with amalgam. In the 26 carious teeth, recurrent caries were left under restorations. The other 26 intact teeth that did not have caries served as controls. Teeth were imaged using a 100×90-mm field of view and a 0.2-mm voxel size with 9 different CBCT settings. Four observers assessed the images using a 5-point scale. Kappa values were calculated to assess observer agreement. CBCT settings were compared with the gold standard using a receiver operating characteristic analysis. The area under the curve (AUC) values for each setting were compared using the chi-square test, with a significance level of α=.05. Intraobserver kappa values ranged from 0.366 to 0.664 for observer 1, from 0.311 to 0.447 for observer 2, from 0.597 to 1.000 for observer 3, and from 0.869 to 1 for observer 4. Furthermore, interobserver kappa values among the observers ranged from 0.133 to 0.814 for the first reading and from 0.197 to 0.805 for the second reading. The highest AUC values were found for setting 5 (0.5916) and setting 3 (0.5886), and were not found to be statistically significant (P>.05). Variations in tube voltage and tube current did not affect the detection of recurrent caries under amalgam restorations in primary teeth.
Kamburoğlu, Kıvanç; Sönmez, Gül; Berktaş, Zeynep Serap; Kurt, Hakan; Özen, Doğukan
2017-06-01
The aim of this study was to assess the ex vivo diagnostic ability of 9 different cone-beam computed tomography (CBCT) settings in the detection of recurrent caries under amalgam restorations in primary teeth. Fifty-two primary teeth were used. Twenty-six teeth had dentine caries and 26 teeth did not have dentine caries. Black class II cavities were prepared and restored with amalgam. In the 26 carious teeth, recurrent caries were left under restorations. The other 26 intact teeth that did not have caries served as controls. Teeth were imaged using a 100×90-mm field of view and a 0.2-mm voxel size with 9 different CBCT settings. Four observers assessed the images using a 5-point scale. Kappa values were calculated to assess observer agreement. CBCT settings were compared with the gold standard using a receiver operating characteristic analysis. The area under the curve (AUC) values for each setting were compared using the chi-square test, with a significance level of α=.05. Intraobserver kappa values ranged from 0.366 to 0.664 for observer 1, from 0.311 to 0.447 for observer 2, from 0.597 to 1.000 for observer 3, and from 0.869 to 1 for observer 4. Furthermore, interobserver kappa values among the observers ranged from 0.133 to 0.814 for the first reading and from 0.197 to 0.805 for the second reading. The highest AUC values were found for setting 5 (0.5916) and setting 3 (0.5886), and were not found to be statistically significant (P > .05). Variations in tube voltage and tube current did not affect the detection of recurrent caries under amalgam restorations in primary teeth.
International Nuclear Information System (INIS)
Kamburoglu, Kivanc; Sonmez, Gul; Kurt, Hakan; Berktas, Zeynep Serap; Ozen, Dogukan
2017-01-01
The aim of this study was to assess the ex vivo diagnostic ability of 9 different cone-beam computed tomography (CBCT) settings in the detection of recurrent caries under amalgam restorations in primary teeth. Fifty-two primary teeth were used. Twenty-six teeth had dentine caries and 26 teeth did not have dentine caries. Black class II cavities were prepared and restored with amalgam. In the 26 carious teeth, recurrent caries were left under restorations. The other 26 intact teeth that did not have caries served as controls. Teeth were imaged using a 100×90-mm field of view and a 0.2-mm voxel size with 9 different CBCT settings. Four observers assessed the images using a 5-point scale. Kappa values were calculated to assess observer agreement. CBCT settings were compared with the gold standard using a receiver operating characteristic analysis. The area under the curve (AUC) values for each setting were compared using the chi-square test, with a significance level of α=.05. Intraobserver kappa values ranged from 0.366 to 0.664 for observer 1, from 0.311 to 0.447 for observer 2, from 0.597 to 1.000 for observer 3, and from 0.869 to 1 for observer 4. Furthermore, interobserver kappa values among the observers ranged from 0.133 to 0.814 for the first reading and from 0.197 to 0.805 for the second reading. The highest AUC values were found for setting 5 (0.5916) and setting 3 (0.5886), and were not found to be statistically significant (P>.05). Variations in tube voltage and tube current did not affect the detection of recurrent caries under amalgam restorations in primary teeth
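As a sketch of the analysis pipeline used in the study above — observer agreement by kappa statistics and diagnostic performance by ROC analysis against the gold standard — the snippet below uses scikit-learn on made-up 5-point ratings; all values are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Gold standard: 1 = recurrent caries present, 0 = sound tooth.
truth = [1, 1, 1, 1, 0, 0, 0, 0]

# 5-point confidence ratings from two observers for one CBCT setting
# (hypothetical values; higher = more confident caries is present).
obs1 = [5, 4, 3, 2, 2, 1, 1, 3]
obs2 = [5, 4, 4, 2, 1, 1, 2, 2]

print(cohen_kappa_score(obs1, obs2))  # interobserver agreement
print(roc_auc_score(truth, obs1))     # AUC for observer 1's ratings
```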
M. Kasemann
CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the availability of more sites, such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently, the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...
Tangari-Meira, Ricardo; Vancetto, José Ricardo; Dovigo, Lívia Nordi; Tosoni, Guilherme Monteiro
2017-10-01
This study assessed the influence of tube current settings (milliamperes [mA]) on the diagnostic detection of root fractures (RFs) using cone-beam computed tomographic (CBCT) imaging. Sixty-eight human anterior and posterior teeth were submitted to root canal preparation, and 34 root canals were filled. The teeth were divided into 2 groups: the control group and the fractured group. RFs were induced using a universal mechanical testing machine; afterward, the teeth were placed in a phantom. Images were acquired using a Scanora 3DX unit (Soredex, Tuusula, Finland) with 5 different mA settings: 4.0, 5.0, 6.3, 8.0, and 10.0. Two examiners (E1 and E2) classified the images according to a 5-point confidence scale. Intra- and interexaminer reproducibility was assessed using the kappa statistic; diagnostic performance was assessed using the area under the receiver operating characteristic curve (AUROC). Intra- and interexaminer reproducibility showed substantial (κE1 = 0.791 and κE2 = 0.695) and moderate (κE1 × E2 = 0.545) agreement, respectively. AUROC was significantly higher (P ≤ .0389) at 8.0 and 10.0 mA and showed no statistical difference between the 2 tube current settings. Tube current has a significant influence on the diagnostic detection of RFs in CBCT images. Despite the acceptable diagnosis of RFs using 4.0 and 5.0 mA, those settings had lower discrimination abilities when compared with settings of 8.0 and 10.0 mA. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Simulation and Verification of Synchronous Set Relations in Rewriting Logic
Rocha, Camilo; Munoz, Cesar A.
2011-01-01
This paper presents a mathematical foundation and a rewriting logic infrastructure for the execution and property verification of synchronous set relations. The mathematical foundation is given in the language of abstract set relations. The infrastructure consists of an order-sorted rewrite theory in Maude, a rewriting logic system, that enables the synchronous execution of a set relation provided by the user. By using the infrastructure, existing algorithm verification techniques already available in Maude for traditional asynchronous rewriting, such as reachability analysis and model checking, are automatically available to synchronous set rewriting. The use of the infrastructure is illustrated with an executable operational semantics of a simple synchronous language and the verification of temporal properties of a synchronous system.
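The infrastructure itself is an order-sorted rewrite theory in Maude, but the notion of a synchronous step — every element of the current set is rewritten simultaneously, rather than one redex at a time as in ordinary asynchronous rewriting — can be caricatured functionally as below; `sync_step` and the toy relation are illustrative only.

```python
def sync_step(state, relation):
    """One synchronous step of a set relation.

    state: a frozenset of elements; relation: a function mapping an
    element to the set of its successors. All elements move at once,
    in contrast to asynchronous rewriting, where a single redex is
    rewritten per step. This is only a functional caricature of the
    Maude infrastructure described in the paper.
    """
    return frozenset().union(*(relation(x) for x in state)) if state else state

# Example: a toy synchronous system where every counter ticks together.
tick = lambda n: {n + 1}
s = frozenset({0, 3, 7})
for _ in range(2):
    s = sync_step(s, tick)
print(sorted(s))  # [2, 5, 9]
```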
Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus
2017-02-01
Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies
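A minimal sketch of fixed RSA as described above: compute each representation's representational dissimilarity matrix (here, correlation distance between response patterns) and compare the two geometries by rank correlation. Mixed RSA would additionally fit one weight per model feature on a training set, which this sketch omits; all data below are random stand-ins.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix (condensed form).

    patterns: (n_images, n_features) response patterns; dissimilarity
    here is the correlation distance between pattern pairs.
    """
    return pdist(patterns, metric="correlation")

# Fixed RSA: compare a model's RDM with a brain region's RDM directly.
rng = np.random.default_rng(1)
brain = rng.standard_normal((20, 100))           # e.g., 20 images x 100 voxels
model = brain + rng.standard_normal((20, 100))   # noisy stand-in model features
rho, _ = spearmanr(rdm(brain), rdm(model))
print(round(rho, 2))  # rank correlation of the two representational geometries
```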
Energy Technology Data Exchange (ETDEWEB)
Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)
2010-05-15
Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less
International Nuclear Information System (INIS)
Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi
2010-01-01
Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
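A rough single-slice analogue of this pipeline can be put together with scikit-image's morphological geodesic active contours, assuming a recent version of that library as a stand-in for the authors' implementation; the anisotropic diffusion, parenchyma-enhancing grayscale transform, and fast-marching initialization steps are omitted, and all parameters below are illustrative.

```python
import numpy as np
from skimage import segmentation, util

# Stand-in for a CT slice; a real pipeline would load portal-venous-phase data.
image = util.img_as_float(np.random.rand(128, 128))

# Edge-enhancing preprocessing: small values near strong gradients,
# playing the role of the speed function in the paper's scheme.
gimage = segmentation.inverse_gaussian_gradient(image, alpha=100, sigma=2)

# Coarse initial level set around the organ (the paper uses fast marching).
init = np.zeros(image.shape, dtype=np.int8)
init[32:96, 32:96] = 1

mask = segmentation.morphological_geodesic_active_contour(
    gimage, num_iter=50, init_level_set=init, smoothing=2, balloon=-1)

# Volume from a stack of such masks: voxel count times voxel volume.
voxel_volume_cc = 6.86e-4  # e.g., 0.7 x 0.7 x 1.4 mm voxels, hypothetical
print(mask.sum() * voxel_volume_cc)
```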
Siuly; Li, Yan; Paul Wen, Peng
2014-03-01
Motor imagery (MI) task classification provides an important basis for designing brain-computer interface (BCI) systems. If the MI tasks are reliably distinguished through identifying typical patterns in electroencephalography (EEG) data, motor-disabled people could communicate with a device by composing sequences of these mental states. In our earlier study, we developed a cross-correlation based logistic regression (CC-LR) algorithm for the classification of MI tasks for BCI applications, but its performance was not satisfactory. This study develops a modified version of the CC-LR algorithm exploring a suitable feature set that can improve the performance. The modified CC-LR algorithm uses the C3 electrode channel (in the international 10-20 system) as a reference channel for the cross-correlation (CC) technique and applies three diverse feature sets separately, as the input to the logistic regression (LR) classifier. The present algorithm investigates which feature set best characterizes the distribution of MI task-based EEG data. This study also provides an insight into how to select a reference channel for the CC technique with EEG signals, considering the anatomical structure of the human brain. The proposed algorithm is compared with eight of the most recently reported well-known methods, including the BCI III Winner algorithm. The findings of this study indicate that the modified CC-LR algorithm has the potential to improve the identification performance of MI tasks in BCI systems. The results demonstrate that the proposed technique provides a classification improvement over the existing methods tested.
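As a rough illustration of the CC-LR idea, the sketch below cross-correlates each channel with a reference channel and feeds summary statistics of the cross-correlogram to a logistic regression classifier; the six statistics chosen here are assumptions, not the authors' exact feature sets.

```python
# Hedged CC-LR-style sketch: cross-correlogram features + logistic regression.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.linear_model import LogisticRegression

def cc_features(trial, ref_idx):
    """trial: (channels, samples) EEG array; ref_idx: reference channel (e.g. C3)."""
    ref = trial[ref_idx]
    feats = []
    for ch in trial:
        cc = np.correlate(ch, ref, mode="full")  # cross-correlogram
        # Illustrative summary statistics of the cross-correlogram:
        feats += [cc.mean(), cc.std(), skew(cc), kurtosis(cc), cc.max(), cc.min()]
    return np.array(feats)

def fit_cc_lr(trials, labels, ref_idx=0):
    X = np.vstack([cc_features(t, ref_idx) for t in trials])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```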
Heredia-López, Francisco J; Álvarez-Cervera, Fernando J; Collí-Alfaro, José G; Bata-García, José L; Arankowsky-Sandoval, Gloria; Góngora-Alfaro, José L
2016-12-01
Continuous spontaneous alternation behavior (SAB) in a Y-maze is used for evaluating working memory in rodents. Here, the design of an automated Y-maze equipped with three infrared optocouplers per arm, and commanded by a reduced instruction set computer (RISC) microcontroller is described. The software was devised for recording only true entries and exits to the arms. Experimental settings are programmed via a keyboard with three buttons and a display. The sequence of arm entries and the time spent in each arm and the neutral zone (NZ) are saved as a text file in a non-volatile memory for later transfer to a USB flash memory. Data files are analyzed with a program developed under LabVIEW® environment, and the results are exported to an Excel® spreadsheet file. Variables measured are: latency to exit the starting arm, sequence and number of arm entries, number of alternations, alternation percentage, and cumulative times spent in each arm and NZ. The automated Y-maze accurately detected the SAB decrease produced in rats by the muscarinic antagonist trihexyphenidyl, and its reversal by caffeine, having 100 % concordance with the alternation percentages calculated by two trained observers who independently watched videos of the same experiments. Although the values of time spent in the arms and NZ measured by the automated system had small discrepancies with those calculated by the observers, Bland-Altman analysis showed 95 % concordance in three pairs of comparisons, while in one it was 90 %, indicating that this system is a reliable and inexpensive alternative for the study of continuous SAB in rodents.
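The alternation statistic the system reports can be computed directly from the recorded entry sequence; a small sketch under the standard SAB convention (an alternation is three consecutive entries into three distinct arms), which is assumed to match the system's definition:

```python
# Spontaneous alternation percentage from a sequence of arm entries.
def alternation_percentage(entries):
    """entries: list of arm labels in entry order, e.g. ['A', 'B', 'C', 'A', ...]."""
    if len(entries) < 3:
        return 0.0
    triplets = zip(entries, entries[1:], entries[2:])
    alternations = sum(len(set(t)) == 3 for t in triplets)  # three distinct arms
    return 100.0 * alternations / (len(entries) - 2)

print(alternation_percentage(list("ABCACBA")))  # 4 of 5 triplets alternate -> 80.0
```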
Revell, A D; Wang, D; Wood, R; Morrow, C; Tempelman, H; Hamers, R L; Alvarez-Uria, G; Streinu-Cercel, A; Ene, L; Wensing, A M J; DeWolf, F; Nelson, M; Montaner, J S; Lane, H C; Larder, B A
2013-06-01
Genotypic HIV drug-resistance testing is typically 60%-65% predictive of response to combination antiretroviral therapy (ART) and is valuable for guiding treatment changes. Genotyping is unavailable in many resource-limited settings (RLSs). We aimed to develop models that can predict response to ART without a genotype and evaluated their potential as a treatment support tool in RLSs. Random forest models were trained to predict the probability of response to ART (≤400 copies HIV RNA/mL) using the following data from 14 891 treatment change episodes (TCEs) after virological failure, from well-resourced countries: viral load and CD4 count prior to treatment change, treatment history, drugs in the new regimen, time to follow-up and follow-up viral load. Models were assessed by cross-validation during development, with an independent set of 800 cases from well-resourced countries, plus 231 cases from Southern Africa, 206 from India and 375 from Romania. The area under the receiver operating characteristic curve (AUC) was the main outcome measure. The models achieved an AUC of 0.74-0.81 during cross-validation and 0.76-0.77 with the 800 test TCEs. They achieved AUCs of 0.58-0.65 (Southern Africa), 0.63 (India) and 0.70 (Romania). Models were more accurate for data from the well-resourced countries than for cases from Southern Africa and India (P < 0.001), but not Romania. The models identified alternative, available drug regimens predicted to result in virological response for 94% of virological failures in Southern Africa, 99% of those in India and 93% of those in Romania. We developed computational models that predict virological response to ART without a genotype with comparable accuracy to genotyping with rule-based interpretation. These models have the potential to help optimize antiretroviral therapy for patients in RLSs where genotyping is not generally available.
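A hedged sketch of this modeling setup with scikit-learn; the feature encoding (baseline viral load, CD4 count, treatment history, new-regimen indicators) and hyperparameters are illustrative, not the study's actual pipeline.

```python
# Sketch: random forest predicting virological response, scored by AUC.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def train_response_model(X, y):
    """X: one row per treatment change episode; y: 1 if follow-up VL <= 400 copies/mL."""
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"cross-validated AUC: {auc.mean():.2f}")  # cf. the reported 0.74-0.81
    return model.fit(X, y)
```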
International Nuclear Information System (INIS)
Vasil'kova, A.D.; Grachev, S.N.; Salakatova, L.S.
1987-01-01
A special computational control complex (SCCC), comprising the "Elektronika-60M" microcomputer, a device for communication with an object (DCO) based on the typical microprocessing set (TMS) built from LIUS-2 KTS means, and an adapter developed to couple the "Elektronika-60M" bus with the IK1 TMS bus, is used for developing the software of a nondestructive control post. To increase programmer productivity, an instrumental SCCC is proposed, including an SM-4 minicomputer, TMS units as the DCO, an adapter developed to couple the SM-4 common bus with the IK1 TMS bus, and devices for simulating the operation of the facility. The SM-4 minicomputer makes it possible to develop programs in FORTRAN and to debug them efficiently. A subprogram library for communication with the TMS units has been compiled, and a technique has been developed for testing FORTRAN programs in the TMS programmable read-only memory as well as for starting these programs.
Energy Technology Data Exchange (ETDEWEB)
Matsui, Yusuke, E-mail: wckyh140@yahoo.co.jp; Hiraki, Takao, E-mail: takaoh@tc4.so-net.ne.jp; Gobara, Hideo, E-mail: gobara@cc.okayama-u.ac.jp; Iguchi, Toshihiro, E-mail: i10476@yahoo.co.jp; Fujiwara, Hiroyasu, E-mail: hirofujiwar@gmail.com; Kawabata, Takahiro, E-mail: tkhr-kwbt@yahoo.co.jp [Okayama University Medical School, Department of Radiology (Japan); Yamauchi, Takatsugu, E-mail: me9248@hp.okayama-u.ac.jp; Yamaguchi, Takuya, E-mail: me8738@hp.okayama-u.ac.jp [Okayama University Hospital, Central Division of Radiology (Japan); Kanazawa, Susumu, E-mail: susumu@cc.okayama-u.ac.jp [Okayama University Medical School, Department of Radiology (Japan)
2016-06-15
Introduction: Computed tomography (CT) fluoroscopy-guided renal cryoablation and lung radiofrequency ablation (RFA) have received increasing attention as promising cancer therapies. Although radiation exposure of interventional radiologists during these procedures is an important concern, data on operator exposure are lacking. Materials and Methods: Radiation dose to interventional radiologists during CT fluoroscopy-guided renal cryoablation (n = 20) and lung RFA (n = 20) was measured prospectively in a clinical setting. Effective dose to the operator was calculated from the 1-cm dose equivalent measured on the neck outside the lead apron, and on the left chest inside the lead apron, using electronic dosimeters. Equivalent dose to the operator's finger skin was measured using thermoluminescent dosimeter rings. Results: The mean (median) effective dose to the operator per procedure was 6.05 (4.52) μSv during renal cryoablation and 0.74 (0.55) μSv during lung RFA. The mean (median) equivalent dose to the operator's finger skin per procedure was 2.1 (2.1) mSv during renal cryoablation, and 0.3 (0.3) mSv during lung RFA. Conclusion: Radiation doses to interventional radiologists during renal cryoablation and lung RFA were at an acceptable level, and in line with recommended dose limits for occupational radiation exposure.
Directory of Open Access Journals (Sweden)
Kevin Ten Haaf
2017-02-01
The National Lung Screening Trial (NLST) results indicate that computed tomography (CT) lung cancer screening for current and former smokers with three annual screens can be cost-effective in a trial setting. However, the cost-effectiveness in a population-based setting with >3 screening rounds is uncertain. Therefore, the objective of this study was to estimate the cost-effectiveness of lung cancer screening in a population-based setting in Ontario, Canada, and evaluate the effects of screening eligibility criteria. This study used microsimulation modeling informed by various data sources, including the Ontario Health Insurance Plan (OHIP), Ontario Cancer Registry, smoking behavior surveys, and the NLST. Persons born between 1940 and 1969 were examined from a third-party health care payer perspective across a lifetime horizon. Starting in 2015, 576 CT screening scenarios were examined, varying by age to start and end screening, smoking eligibility criteria, and screening interval. Among the examined outcome measures were lung cancer deaths averted, life-years gained, percentage ever screened, costs (in 2015 Canadian dollars), and overdiagnosis. The results of the base-case analysis indicated that annual screening was more cost-effective than biennial screening. Scenarios with eligibility criteria that required as few as 20 pack-years were dominated by scenarios that required higher numbers of accumulated pack-years. In general, scenarios that applied stringent smoking eligibility criteria (i.e., requiring higher levels of accumulated smoking exposure) were more cost-effective than scenarios with less stringent smoking eligibility criteria, with modest differences in life-years gained. Annual screening between ages 55-75 for persons who smoked ≥40 pack-years and who currently smoke or quit ≤10 y ago yielded an incremental cost-effectiveness ratio of $41,136 Canadian dollars ($33,825 in May 1, 2015, United States dollars) per life-year gained.
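Scenario rankings of this kind come down to the incremental cost-effectiveness ratio; a toy computation follows (the numbers are placeholders, not the study's).

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra life-year.
def icer(cost_new, effect_new, cost_ref, effect_ref):
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# e.g. a scenario costing $900M and gaining 25,000 life-years versus one
# costing $500M and gaining 15,000 life-years (illustrative values):
print(icer(900e6, 25_000, 500e6, 15_000))  # 40000.0 dollars per life-year gained
```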
Durmus, Tahir; Luhur, Reny; Daqqaq, Tareef; Schwenke, Carsten; Knobloch, Gesine; Huppertz, Alexander; Hamm, Bernd; Lembcke, Alexander
2016-05-01
To evaluate a software tool that claims to maintain a constant contrast-to-noise ratio (CNR) in high-pitch dual-source computed tomography coronary angiography (CTCA) by automatically selecting both X-ray tube voltage and current. A total of 302 patients (171 males; age 61±12 years; body weight 82±17 kg; body mass index 27.3±4.6 kg/m²) underwent CTCA with a topogram-based, automatic selection of both tube voltage and current using dedicated software with quality reference values of 100 kV and 250 mAs/rotation (i.e., standard values for an average adult weighing 75 kg) and an injected iodine load of 222 mg/kg. The average radiation dose was estimated to be 1.02±0.64 mSv. All data sets had adequate contrast enhancement. Average CNR in the aortic root, left ventricle, and left and right coronary artery was 15.7±4.5, 8.3±2.9, 16.1±4.3 and 15.3±3.9, respectively. Individual CNR values were independent of patients' body size and radiation dose. However, individual CNR values may vary considerably between subjects, as reflected by interquartile ranges of 12.6-18.6, 6.2-9.9, 12.8-18.9 and 12.5-17.9, respectively. Moreover, average CNR values were significantly lower in males than in females (15.1±4.1 vs. 16.6±11.7, 7.9±2.7 vs. 8.9±3.0, 15.5±3.9 vs. 16.9±4.6 and 14.7±3.6 vs. 16.0±4.1, respectively). A topogram-based automatic selection of X-ray tube settings in CTCA provides diagnostic image quality independent of patients' body size. Nevertheless, considerable variation of individual CNR values between patients, and significant differences of CNR values between males and females, occur, which questions the reliability of this approach.
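For reference, the per-region CNR reported above is conventionally the ROI contrast over the background noise SD; a minimal sketch assuming this standard definition matches the paper's:

```python
# Contrast-to-noise ratio from two regions of interest (CT numbers in HU).
import numpy as np

def cnr(roi_hu, background_hu):
    """roi_hu, background_hu: arrays of HU values sampled from two ROIs."""
    return (np.mean(roi_hu) - np.mean(background_hu)) / np.std(background_hu)
```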
Reachability Analysis of Probabilistic Systems
DEFF Research Database (Denmark)
D'Argenio, P. R.; Jeanett, B.; Jensen, Henrik Ejersbo
2001-01-01
... than the original model, and may safely refute or accept the required property. Otherwise, the abstraction is refined and the process repeated. As the numerical analysis involved in settling the validity of the property is more costly than the refinement process, the method profits from applying such numerical analysis on smaller state spaces. The method is significantly enhanced by a number of novel strategies: a strategy for reducing the size of the numerical problems to be analyzed by identification of so-called "essential states", and heuristic strategies for guiding the refinement process.
Shi, Zhenzhen; Chapes, Stephen K; Ben-Arieh, David; Wu, Chih-Hang
2016-01-01
We present an agent-based model (ABM) to simulate a hepatic inflammatory response (HIR) in a mouse infected by Salmonella that sometimes progressed to problematic proportions, known as "sepsis". Based on over 200 published studies, this ABM describes interactions among 21 cells or cytokines and incorporates 226 experimental data sets and/or data estimates from those reports to simulate a mouse HIR in silico. Our simulated results reproduced dynamic patterns of HIR reported in the literature. As shown in vivo, our model also demonstrated that sepsis was highly related to the initial Salmonella dose and the presence of components of the adaptive immune system. We determined that high mobility group box-1, C-reactive protein, the interleukin-10:tumor necrosis factor-α ratio, and the CD4+ T cell:CD8+ T cell ratio, all recognized as biomarkers during HIR, significantly correlated with outcomes of HIR. During therapy-directed in silico simulations, our results demonstrated that anti-agent intervention impacted the survival rates of septic individuals in a time-dependent manner. By specifying the infected species, source of infection, and site of infection, this ABM enabled us to reproduce the kinetics of several essential indicators during a HIR, observe distinct dynamic patterns that are manifested during HIR, and test proposed therapy-directed treatments. Although limitations still exist, this ABM is a step forward because it links underlying biological processes to computational simulation and was validated through a series of comparisons between the simulated results and experimental studies.
Zoubeir, Wassim Fouad
This research explored the effects of a constructivist approach using computer projected simulations (CPS) and interactive engagement (IE) methods on 12th grade school students. The treatment lasted 18 weeks during the 1999-2000 fall semester and sought to evaluate three variations in students': (1) conceptual understanding of Newtonian mechanics as measured by the Force Concept Inventory (FCI), (2) modification of their views about science as measured by the Views About Science Survey (VASS), and (3) achievement on traditional examinations, as measured by their end-of-semester grades. Analysis of Covariance (ANCOVA) was applied to determine the differences between the mean scores of the experimental group students and students of the control group, who were exposed to traditional teaching methods only. The FCI data analysis showed that, after 18 weeks, conceptual understanding of Newtonian mechanics had markedly improved only in the experimental group (F(1,99) = 44.739, p < .001), whereas there was no significant change in performance on the VASS instrument for both groups (F(1,99) = .033, p = .856), confirming previous and comparable findings for studies of short implementation period. The lack of statistically significant difference between the control and experimental groups in graded achievement, while controlling for students' previous achievement, was unexpected (F(1,99) = 1.178, p = .280). It is suggested that in this particular setting, the influence of a technical factor may have been overlooked: the monitored and systematic drill exercises using elaborate math formulae to prepare students for traditional math-loaded exams. Still, despite being intentionally deprived of such preparation throughout the study, students of the experimental group did not achieve less than their counterpart, and in addition, they had gained a satisfactory understanding of Newtonian mechanics. This result points unmistakably at a plausible positive correlation between a better grasp of basic concepts in physics in a challenging
International Nuclear Information System (INIS)
Yadav, Anju; Rani, Mamta
2015-01-01
Alternate Julia sets have been studied in Picard iterative procedures. The purpose of this paper is to study the quadratic and cubic maps using superior iterates to obtain Julia sets with different alternate structures. Analytically, graphically and computationally it has been shown that alternate superior Julia sets can be connected, disconnected and totally disconnected, and also fattier than the corresponding alternate Julia sets. A few examples have been studied by applying different types of alternate structures.
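A sketch of the underlying escape-time computation, assuming the superior iterate is the Mann iteration z_{n+1} = (1 - beta) z_n + beta f(z_n) with the quadratic and cubic maps applied alternately; the parameter values are illustrative.

```python
# Escape-time sketch of an alternate superior Julia set (assumptions above).
import numpy as np

def alternate_superior_julia(c1, c2, beta=0.5, size=400, max_iter=100, r=4.0):
    x = np.linspace(-2.0, 2.0, size)
    z = x[None, :] + 1j * x[:, None]          # complex grid
    count = np.zeros(z.shape, dtype=int)
    alive = np.ones(z.shape, dtype=bool)
    for n in range(max_iter):
        f = z**2 + c1 if n % 2 == 0 else z**3 + c2   # alternate the map
        z = np.where(alive, (1.0 - beta) * z + beta * f, z)  # Mann iterate
        alive &= np.abs(z) < r
        count += alive
    return count  # points that never escape approximate the filled Julia set
```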
Decomposition and Simplification of Multivariate Data using Pareto Sets.
Huettenberger, Lars; Heine, Christian; Garth, Christoph
2014-12-01
Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.
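A hedged sketch of the simplification loop on the reachability graph, assuming edges carry a precomputed comparison-measure cost (the 'cost' attribute below is an assumption); a full implementation would also re-queue the edges created by each collapse.

```python
# Edge-collapse simplification of a reachability graph, cheapest edge first.
import heapq
import networkx as nx

def simplify(g, threshold):
    """g: reachability graph whose edges have a 'cost' attribute (assumed)."""
    heap = [(d["cost"], u, v) for u, v, d in g.edges(data=True)]
    heapq.heapify(heap)
    while heap:
        cost, u, v = heapq.heappop(heap)
        if cost > threshold:
            break                              # remaining edges are too costly
        if g.has_edge(u, v):                   # skip edges lost to earlier collapses
            g = nx.contracted_nodes(g, u, v, self_loops=False)
    return g
```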
Directory of Open Access Journals (Sweden)
Ye Hu
2014-02-01
Matched molecular pairs (MMPs) are widely used in medicinal chemistry to study changes in compound properties including biological activity, which are associated with well-defined structural modifications. Herein we describe up-to-date versions of three MMP-based data sets that have originated from in-house research projects. These data sets include activity cliffs, structure-activity relationship (SAR) transfer series, and second generation MMPs based upon retrosynthetic rules. The data sets have in common that they have been derived from compounds included in the latest release of the ChEMBL database for which high-confidence activity data are available. Thus, the activity data associated with MMP-based activity cliffs, SAR transfer series, and retrosynthetic MMPs cover the entire spectrum of current pharmaceutical targets. Our data sets are made freely available to the scientific community.
Boudreau, Francois; Godin, Gaston; Poirier, Paul
2011-01-01
The promotion of regular physical activity for people with type 2 diabetes poses a challenge for public health authorities. The purpose of this study was to evaluate the efficiency of a computer-tailoring print-based intervention to promote the adoption of regular physical activity among people with type 2 diabetes. An experimental design was…
International Nuclear Information System (INIS)
Suyama, Kenya; Komuro, Yuichi; Takada, Tomoyuki; Kawasaki, Hiromitsu; Ouchi, Keisuke
1998-02-01
This report is a user's manual for the computer program MAIL3.1, which generates various types of cross section sets for neutron transport programs such as SIMCRI, ANISN-JR, KENO IV, KENO V, MULTI-KENO, MULTI-KENO-2 and MULTI-KENO-3.0. MAIL3.1 is a revised version of MAIL3.0, which was released in 1990. It has all the capabilities of MAIL3.0 plus two additional functions: 1. an AMPX-type cross section set generating function for KENO V; 2. an enhanced function for users of the 16-group Hansen-Roach library. (author)
Hickethier, Tilman; Iuga, Andra-Iza; Lennartz, Simon; Hauger, Myriam; Byrtus, Jonathan; Luetkens, Julian A; Haneder, Stefan; Maintz, David; Doerner, Jonas
We aimed to determine optimal window settings for conventional polyenergetic (PolyE) and virtual monoenergetic images (MonoE) derived from abdominal portal-venous-phase computed tomography (CT) examinations on a novel dual-layer spectral-detector CT (SDCT). From 50 patients, MonoE at 40 keV as well as PolyE SDCT data sets were reconstructed, and the best individual window width and level values were assessed manually and separately for the evaluation of abdominal arteries as well as for liver lesions. Via regression analysis, optimized individual values were mathematically calculated. Subjective image quality parameters and vessel and liver lesion diameters were measured to determine the influence of different W/L settings. Attenuation and contrast-to-noise values were significantly higher in MonoE compared with PolyE. Compared with standard settings, almost all adjusted W/L settings varied significantly and yielded higher subjective scores. No differences were found between manually adjusted and mathematically calculated W/L settings. PolyE and MonoE from abdominal portal-venous-phase SDCT examinations require appropriate W/L settings depending on reconstruction technique and assessment focus.
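The regression step can be sketched as follows: relate each reader's best window width and level to a patient-specific attenuation value and use the fitted lines as the optimized setting. Variable names and the choice of attenuation measure are assumptions.

```python
# Linear regression of best window width/level against per-patient attenuation.
import numpy as np

def optimized_wl(attenuation_hu, best_width, best_level):
    """Fit W and L linearly against attenuation; returns a predictor function."""
    w_slope, w_icept = np.polyfit(attenuation_hu, best_width, 1)
    l_slope, l_icept = np.polyfit(attenuation_hu, best_level, 1)
    def predict(hu):
        return w_slope * hu + w_icept, l_slope * hu + l_icept
    return predict  # predict(mean_hu) -> (optimized width, optimized level)
```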
International Nuclear Information System (INIS)
D'Angelo, Tommaso; "G. Martino" University Hospital, Messina; Bucher, Andreas M.; Lenga, Lukas; Arendt, Christophe T.; Peterke, Julia L.; Martin, Simon S.; Leithner, Doris; Vogl, Thomas J.; Wichmann, Julian L.; Caruso, Damiano; University Hospital, Latina; Mazziotti, Silvio; Blandino, Alfredo; Ascenti, Giorgio; University Hospital, Messina; Othman, Ahmed E.
2018-01-01
To define optimal window settings for displaying virtual monoenergetic images (VMI) of dual-energy CT pulmonary angiography (DE-CTPA). Forty-five patients who underwent clinically-indicated third-generation dual-source DE-CTPA were retrospectively evaluated. Standard linearly-blended (M0.6), 70-keV traditional VMI (M70), and 40-keV noise-optimised VMI (M40+) reconstructions were analysed. For M70 and M40+ datasets, the subjectively best window setting (width and level, B-W/L) was independently determined by two observers and subsequently related with pulmonary artery attenuation to calculate separate optimised values (O-W/L) using linear regression. Subjective evaluation of image quality (IQ) between W/L settings was assessed by two additional readers. Repeated-measures analysis of variance was performed to compare W/L settings and IQ indices between M0.6, M70, and M40+. B-W/L and O-W/L for M70 were 460/140 and 450/140, and were 1100/380 and 1070/380 for M40+, respectively, differing from standard DE-CTPA W/L settings (450/100). The highest subjective scores were observed for M40+ regarding vascular contrast, embolism demarcation, and overall IQ (all p<0.001). Application of O-W/L settings is beneficial to optimise subjective IQ of VMI reconstructions of DE-CTPA. A width slightly less than two times the pulmonary trunk attenuation and a level approximately equal to the overall pulmonary vessel attenuation are recommended. (orig.)
Harman, Nate
2016-01-01
We consider the following counting problem related to the card game SET: How many $k$-element SET-free sets are there in an $n$-dimensional SET deck? Through a series of algebraic reformulations and reinterpretations, we show the answer to this question satisfies two polynomiality conditions.
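For small parameters this count can be checked by brute force, using the standard encoding in which cards are vectors in F_3^n and three cards form a SET exactly when they sum to zero coordinatewise mod 3:

```python
# Brute-force count of k-element SET-free subsets of the n-dimensional SET deck.
from itertools import combinations, product

def count_set_free(n, k):
    deck = list(product(range(3), repeat=n))  # all 3^n cards as F_3^n vectors
    def is_set(a, b, c):
        # all-same or all-different in each coordinate <=> sum == 0 (mod 3)
        return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))
    return sum(1 for hand in combinations(deck, k)
               if not any(is_set(*t) for t in combinations(hand, 3)))

print(count_set_free(1, 3))  # the only 3-subset of {0,1,2} is a SET -> 0
```

This is exponential in n and only usable for tiny cases; the paper's point is precisely that the exact answer behaves polynomially in ways brute force does not reveal.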
Quigley, Mark Declan
The purpose of this research was to examine specific environmental, educational, and demographic factors and their influence on mathematics and science achievement. In particular, the researcher ascertained the interconnections of home computer access and social capital among Asian American students and their effect on mathematics and science achievement. Coleman's theory on social capital and parental influence was used as a basis for the analysis of data. Subjects for this study were the base-year students from the National Education Longitudinal Study of 1988 (NELS:88) and the subsequent follow-up survey data in 1990, 1992, and 1994. The approximate sample size for this study is 640 ethnic Asians from the NELS:88 database. The analysis was a longitudinal study based on the Student and Parent Base Year responses and the Second Follow-up survey of 1992, when the subjects were in 12th grade. Achievement test results from the NELS:88 data were used to measure achievement in mathematics and science. The NELS:88 test battery was developed to measure both individual status and a student's growth in a number of achievement areas. The subjects' responses were analyzed by principal components factor analysis, weights, effect sizes, hierarchical regression analysis, and PLSPath Analysis. The results of this study were that prior ability in mathematics and science is a major influence on the student's educational achievement. Findings from the study support the view that home computer access has a negative direct effect on mathematics and science achievement for both Asian American males and females. None of the social capital factors in the study had either a negative or positive direct effect on mathematics and science achievement, although some indirect effects were found. Suggestions were made toward increasing parental involvement in their children's academic endeavors. Computer access in the home should be considered related to television viewing and should be closely
International Nuclear Information System (INIS)
Schwob, N.; Schwob, W.; Loewenthal, E.
1999-01-01
Film dosimetry has the advantage over other dosimetry methods, of having a high spatial resolution and a fast two dimensional data acquisition. We have set up a system using a film digitizer with its associated software, dedicated to radiosurgery and we have developed data processing programs in Visual Basic for Excel. Data acquisition is not limited to water equivalent media: correction factors can be provided in the data processing procedure
International Nuclear Information System (INIS)
Peters, Sinead E.; Brennan, Patrick C.
2002-01-01
Manufacturers offer exposure indices as a safeguard against overexposure in computed radiography, but the basis for recommended values is unclear. This study establishes an optimum exposure index to be used as a guideline for a specific CR system to minimise radiation exposures for computed mobile chest radiography, and compares this with manufacturer guidelines and current practice. An anthropomorphic phantom was employed to establish the minimum tube current-time product (mAs) consistent with acceptable image quality for mobile chest radiography images. This was found to be 2 mAs. Subsequently, 10 patients were exposed with this optimised mAs value and 10 patients were exposed with the 3.2 mAs routinely used in the department of the study. Image quality was objectively assessed using anatomical criteria. Retrospective analyses of 717 exposure indices recorded over 2 months from mobile chest examinations were performed. The optimised mAs value provided a significant reduction of the average exposure index from 1840 to 1570 (p<0.0001). This new "optimum" exposure index is substantially lower than the manufacturer guideline of 2000 and significantly lower than exposure indices from the retrospective study (1890). Retrospective data showed a significant increase in exposure indices if the examination was performed out of hours. The data provided by this study emphasise the need for clinicians and personnel to consider establishing their own optimum exposure indices for digital investigations rather than simply accepting manufacturers' guidelines. Such an approach, along with regular monitoring of indices, may result in a substantial reduction in patient exposure. (orig.)
International Nuclear Information System (INIS)
Sumner, H.M.
1965-11-01
FIFI 3 is a FORTRAN Code embodying a technique for the analysis of process plant dynamics. As such, it is essentially a tool for the integration of sets of first order ordinary differential equations, either linear or non-linear; special provision is made for the inclusion of time-delayed variables in the mathematical model of the plant. The method of integration is new and is centred on a stable multistep predictor-corrector algorithm devised by the late Mr. F.G. Chapman, of the UKAEA, Winfrith. The theory on which the Code is based and detailed rules for using it are described in Parts I and II respectively. (author)
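The Chapman algorithm itself is not reproduced in the abstract; as a generic illustration of the multistep predictor-corrector family it belongs to, here is a sketch of a 2-step Adams-Bashforth predictor with a trapezoidal corrector (an assumption, not FIFI 3's actual scheme):

```python
# PECE predictor-corrector integration of dy/dt = f(t, y) with fixed step h.
import numpy as np

def integrate(f, y0, t0, t1, h):
    t, y = t0, np.asarray(y0, dtype=float)
    f_prev = f(t, y)
    y = y + h * f_prev                     # start-up: one explicit Euler step
    t += h
    out = [(t0, np.asarray(y0, float)), (t, y.copy())]
    f_curr = f(t, y)
    while t < t1 - 1e-12:
        y_pred = y + h * (1.5 * f_curr - 0.5 * f_prev)   # AB2 predictor
        f_pred = f(t + h, y_pred)                        # evaluate
        y = y + 0.5 * h * (f_curr + f_pred)              # trapezoidal corrector
        t += h
        f_prev, f_curr = f_curr, f(t, y)                 # final evaluation
        out.append((t, y.copy()))
    return out
```

Time-delayed variables, which FIFI 3 supports, would additionally require interpolating past values of y; that machinery is omitted here.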
Sekiguchi, Katsuo; Ushitani, Tomokazu; Sawa, Kosuke
2018-05-01
Landmark-based goal-searching tasks that were similar to those for pigeons (Ushitani & Jitsumori, 2011) were provided to human participants to investigate whether they could learn and use multiple sources of spatial information that redundantly indicate the position of a hidden target in both an open field (Experiment 1) and on a computer screen (Experiments 2 and 3). During the training in each experiment, participants learned to locate a target in 1 of 25 objects arranged in a 5 × 5 grid, using two differently colored, arrow-shaped (Experiments 1 and 2) or asymmetrically shaped (Experiment 3) landmarks placed adjacent to the goal and pointing to the goal location. The absolute location and directions of the landmarks varied across trials, but the constant configuration of the goal and the landmarks enabled participants to find the goal using both global configural information and local vector information (pointing to the goal by each individual landmark). On subsequent test trials, the direction was changed for one of the landmarks to conflict with the global configural information. Results of Experiment 1 indicated that participants used vector information from a single landmark but not configural information. Further examinations revealed that the use of global (metric) information was enhanced remarkably by goal searching with nonarrow-shaped landmarks on the computer monitor (Experiment 3) but much less so with arrow-shaped landmarks (Experiment 2). The General Discussion focuses on a comparison between humans in the current study and pigeons in the previous study.
Automatic sets and Delone sets
International Nuclear Information System (INIS)
Barbe, A; Haeseler, F von
2004-01-01
Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples.
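As a concrete instance, a two-dimensional Thue-Morse set can be generated directly from binary digit sums; the sketch below uses the usual construction (whether it matches the paper's exact 2D version is an assumption):

```python
# 2D Thue-Morse pattern: (i, j) is in the set when the total binary digit
# sum of i and j has even parity.
def digit_sum(n):
    return bin(n).count("1")

def thue_morse_2d(n):
    return [[(digit_sum(i) + digit_sum(j)) % 2 for j in range(n)]
            for i in range(n)]

for row in thue_morse_2d(8):
    print("".join("#" if b == 0 else "." for b in row))
```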
International Nuclear Information System (INIS)
Zobor, E.
1978-12-01
The approach chosen is based on hierarchical control systems theory; however, the fundamentals of other approaches, such as systems simplification and systems partitioning, are briefly summarized to introduce the problems associated with the control of large-scale systems. The concept of a hierarchical control system acting in a broad variety of operating conditions is developed, and some practical extensions to the hierarchical control system approach, e.g. subsystems measured and controlled at different rates, control of the partial state vector, coordination for autoregressive models, etc., are given. Throughout the work the WWR-SM research reactor of the Institute has been taken as a guiding example, and simple methods for the identification of the model parameters from a reactor start-up are discussed. Using the PROHYS digital simulation program elaborated in the course of the present research, detailed simulation studies were carried out to investigate the performance of a control system based on the concept and algorithms developed. In order to give real application evidence, a short description is finally given of the closed-loop computer control system installed - in the framework of a project supported by the Hungarian State Office for Technical Development - at the WWR-SM research reactor, where the results obtained in the present IAEA Research Contract were successfully applied and furnished the expected high performance.
Energy Technology Data Exchange (ETDEWEB)
Wadlinger, E.A.
1980-03-01
A computer program that will fit a hyperellipse to a set of phase-space points in as many as 6 dimensions was written and tested. The weight assigned to the phase-space points can be varied as a function of their distance from the centroid of the distribution. Varying the weight enables determination of whether there is a difference in ellipse orientation between inner and outer particles. This program should be useful in studying the effects of longitudinal and transverse phase-space couplings.
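A sketch of the core computation, assuming the hyperellipse is taken from the weighted second-moment (covariance) matrix of the points; the distance-dependent weight function shown is illustrative, not the program's actual weighting.

```python
# Weighted hyperellipse (covariance ellipsoid) fit in up to 6 dimensions.
import numpy as np

def fit_hyperellipse(points, weight=lambda d: 1.0 / (1.0 + d)):
    """points: (N, dim) phase-space coordinates. Returns center, axes info."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)   # distance from centroid
    w = np.array([weight(di) for di in d])
    w /= w.sum()
    center = np.average(pts, axis=0, weights=w)
    diff = pts - center
    cov = (w[:, None] * diff).T @ diff           # weighted covariance matrix
    # Eigenvectors give the ellipse orientation; eigenvalues give squared
    # semi-axis lengths up to an overall scale.
    evals, evecs = np.linalg.eigh(cov)
    return center, evals, evecs
```

Running the fit twice, once with weights emphasizing inner particles and once with weights emphasizing outer ones, and comparing the eigenvector sets reproduces the orientation comparison described in the abstract.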
International Nuclear Information System (INIS)
Marten, Katharina; Grillhoesl, Andreas; Seyfarth, Tobias; Rummeny, Ernst J.; Engelke, Christoph; Obenauer, Silvia
2005-01-01
The purpose of this study was to evaluate the performance of a computer-assisted diagnostic (CAD) tool using various reconstruction slice thicknesses (RST). Image data of 20 patients undergoing multislice CT for pulmonary metastasis were reconstructed at 4.0, 2.0 and 0.75 mm RST and assessed by two blinded radiologists (R1 and R2) and CAD. Data were compared against an independent reference standard. Nodule subgroups (diameter >10, 4-10, <4 mm) were assessed separately. Statistical methods were ROC analysis and the Mann-Whitney U test. CAD was outperformed by the readers at 4.0 mm (Az = 0.18, 0.62 and 0.69 for CAD, R1 and R2, respectively; P<0.05), comparable at 2.0 mm (Az = 0.57, 0.70 and 0.69 for CAD, R1 and R2, respectively), and superior using 0.75 mm RST (Az = 0.80, 0.70 and 0.70 and sensitivity = 0.74, 0.53 and 0.53 for CAD, R1 and R2, respectively; P<0.05). Reader performance was significantly enhanced by CAD (Az = 0.93 and 0.95 for R1 + CAD and R2 + CAD, respectively; P<0.05). The CAD advantage was greatest for nodules <10 mm (detection rates = 93.3, 89.9, 47.9 and 47.9% for R1 + CAD, R2 + CAD, R1 and R2, respectively). CAD using 0.75 mm RST outperformed the radiologists for nodules below 10 mm in diameter and could be used to replace a second radiologist. CAD is not recommended for 4.0 mm RST. (orig.)
International Nuclear Information System (INIS)
Ameaume, A.
1979-01-01
The study of inelastic interactions between symmetrical systems makes it possible to calculate the number of nucleons lost by evaporation; the identity of the initial particles, within a two-body hypothesis in the exit channel, implies a symmetry in the charge and mass distributions of the final nuclei. But these nuclei, in the first stage of the reaction, are emitted with some excitation energy, so the observer detects only the cold products resulting from their de-excitation by evaporation. The study of the 40Ca + 40Ca system at 400 MeV, with an E-ΔE telescope in the laboratory angular range from 10 to 80 degrees, allowed calculation of the number of lost charges ΔZ = 20 - Z versus the total excitation energy E_x of the intermediate system, where Z represents the mean value of the charge distributions at fixed E_x and center-of-mass angle θ. The setting-up of an iterative calculation of ΔZ = f(E_x) was necessary to access the E_x energies before evaporation. The results are consistent with an energy thermalization hypothesis and show a linear increase of ΔZ versus E_x, independently of θ. The charge-loss rate for one excited calcium nucleus varies from 1.6 for E_x = 50 MeV to 4.8 for E_x = 150 MeV.
Lawrence, Sarah T.; Willig, James H.; Crane, Heidi M.; Ye, Jiatao; Aban, Inmaculada; Lober, William; Nevin, Christa R.; Batey, D. Scott; Mugavero, Michael J.; McCullumsmith, Cheryl; Wright, Charles; Kitahata, Mari; Raper, James L.; Saag, Micheal S.; Schumacher, Joseph E.
2010-01-01
Summary: The implementation of routine computer-based screening for suicidal ideation and other psychosocial domains through standardized patient-reported outcome instruments in two high-volume urban HIV clinics is described. Factors associated with an increased risk of self-reported suicidal ideation were determined. Background: HIV/AIDS continues to be associated with an under-recognized risk for suicidal ideation, attempted as well as completed suicide. Suicidal ideation represents an important predictor for subsequent attempted and completed suicide. We sought to implement routine screening of suicidal ideation and associated conditions using computerized patient-reported outcome (PRO) assessments. Methods: Two geographically distinct academic HIV primary care clinics enrolled patients attending scheduled visits from 12/2005 to 2/2009. Touch-screen-based, computerized PRO assessments were implemented into routine clinical care. Substance abuse (ASSIST), alcohol consumption (AUDIT-C), depression (PHQ-9) and anxiety (PHQ-A) were assessed. The PHQ-9 assesses the frequency of suicidal ideation in the preceding two weeks. A response of "nearly every day" triggered an automated page to pre-determined clinic personnel who completed more detailed self-harm assessments. Results: Overall, 1,216 (UAB = 740; UW = 476) patients completed the initial PRO assessment during the study period. Patients were white (53%; n=646), predominantly male (79%; n=959), with a mean age of 44 (± 10). Among surveyed patients, 170 (14%) endorsed some level of suicidal ideation, while 33 (3%) admitted suicidal ideation nearly every day. In multivariable analysis, suicidal ideation risk was lower with advancing age (OR=0.74 per 10 years; 95%CI=0.58-0.96) and was increased with current substance abuse (OR=1.88; 95%CI=1.03-3.44) and more severe depression (OR=3.91 moderate; 95%CI=2.12-7.22; OR=25.55 severe; 95%CI=12.73-51.30). Discussion: Suicidal ideation was associated with current substance abuse and
International Nuclear Information System (INIS)
Imura, K; Fujibuchi, T; Hirata, H; Kaneko, K; Hamada, E
2016-01-01
Purpose: Patient set-up skills in the radiotherapy treatment room have a great influence on treatment effect for image-guided radiotherapy. In this study, we developed a training system for improving practical set-up skills, including rotational correction, in a virtual environment away from the pressure of the actual treatment room, using a three-dimensional computer graphics (3DCG) engine. Methods: The treatment room for external beam radiotherapy was reproduced in the virtual environment using the 3DCG engine (Unity). The viewpoints for performing patient set-up in the virtual treatment room were arranged on both sides of the virtual operable treatment couch, to mimic actual performance by two clinical staff. Position errors relative to the mechanical isocenter, based on the alignment between the skin marker and the laser on the virtual patient model, were displayed as numerical values in SI units together with direction arrows. Rotational errors, calculated about a point on the virtual body axis as the center of each rotation axis in the virtual environment, were corrected by adjusting the rotational position of a body phantom fitted with a gyroscope-equipped belt on a table in real space. These rotational errors were evaluated by implementing vector cross (outer) product operations and trigonometric functions in the set-up script. Results: The viewpoints in the virtual environment allowed the individual user to visually recognize the position discrepancy from the mechanical isocenter until the positional errors of several millimeters were eliminated. The rotational errors between the two points, calculated about the center point, could be efficiently corrected by utilizing the script to display the minimal correction technique mathematically. Conclusion: By utilizing the script to correct the rotational errors, as well as accurate positional recognition for the patient set-up technique, the training system developed for improving patient set-up skills enabled the individual user to
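The cross-product-based rotational-error evaluation can be sketched as follows; the axis conventions and example vectors are assumptions, not the system's actual coordinate frame.

```python
# Signed rotation angle between a measured body-axis vector and its target,
# about a chosen rotation axis, from cross and dot products.
import numpy as np

def signed_angle(u, v, axis):
    """Signed angle (degrees) from u to v about 'axis' (all 3-vectors)."""
    u, v, axis = (np.asarray(a, dtype=float) for a in (u, v, axis))
    axis = axis / np.linalg.norm(axis)
    up = u - u.dot(axis) * axis          # project into the plane normal to axis
    vp = v - v.dot(axis) * axis
    s = np.cross(up, vp).dot(axis)       # sine term from the cross product
    c = up.dot(vp)                       # cosine term from the dot product
    return np.degrees(np.arctan2(s, c))

# e.g. yaw error about the vertical axis (illustrative vectors):
print(signed_angle([1, 0, 0], [0.996, 0.087, 0], [0, 0, 1]))  # ~5 degrees
```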
International Nuclear Information System (INIS)
Rady, E.A.; Kozae, A.M.; Abd El-Monsef, M.M.E.
2004-01-01
Analyzing data under uncertainty is a main goal of many real-life problems. Statistical analysis of such data is an active area of research. The aim of this paper is to introduce a new method concerning the generalization and modification of the rough set theory introduced earlier by Pawlak [Int. J. Comput. Inform. Sci. 11 (1982) 341].
Bachmann, Laura H; Grimley, Diane M; Gao, Hongjiang; Aban, Inmaculada; Chen, Huey; Raper, James L; Saag, Michael S; Rhodes, Scott D; Hook, Edward W
2013-04-01
Innovative strategies are needed to assist providers with delivering secondary HIV prevention in the primary care setting. This longitudinal HIV clinic-based study conducted from 2004-2007 in a Birmingham, Alabama HIV primary care clinic tested a computer-assisted, provider-delivered intervention designed to increase condom use with oral, anal and vaginal sex, decrease numbers of sexual partners and increase HIV disclosure among HIV-positive men-who-have-sex-with-men (MSM). Significant declines were found for the number of unprotected insertive anal intercourse acts with HIV+ male partners during the intervention period (p = 0.0003) and with HIV-/UK male partners (p = 0.0007), as well as a 47% reduction in the number of male sexual partners within the preceding 6 months compared with baseline (p = 0.0008). These findings confirm and extend prior reports by demonstrating the effectiveness of computer-assisted, provider-delivered messaging to accomplish risk reduction in patients in the HIV primary care setting.
Incremental computation of set difference views
DEFF Research Database (Denmark)
Bækgaard, Lars; Mark, Leo
1997-01-01
... intervening database changes when a view is used. In distributed environments, such as data warehousing environments, incremental methods are necessary for the incremental computation of set differences in relational databases. These methods can advantageously be used as a supplement to existing...
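The flavor of such incremental maintenance for a difference view V = A - B can be sketched with plain sets; the delta rules below are the standard ones for set difference, not necessarily the paper's exact algorithm.

```python
# Incremental maintenance of the view V = A - B under inserts and deletes,
# avoiding recomputation of the full difference.
def maintain_difference(A, B, V, ins_A=frozenset(), del_A=frozenset(),
                        ins_B=frozenset(), del_B=frozenset()):
    """A, B, V: current states with V == A - B; returns updated (A, B, V)."""
    A2 = (A - del_A) | ins_A                      # new state of A
    B2 = (B - del_B) | ins_B                      # new state of B
    gained = (ins_A - B2) | ((del_B & A2) - B2)   # tuples entering the view
    lost = (del_A & V) | (ins_B & V)              # tuples leaving the view
    V2 = (V - lost) | gained
    assert V2 == A2 - B2                          # sanity check vs. recompute
    return A2, B2, V2
```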
International Nuclear Information System (INIS)
Quaia, Emilio; Krug, Lee M.; Pandit-Taskar, Neeta; Nagel, Andrew; Reuter, Victor E.; Humm, John; Divgi, Chaitanya
2008-01-01
Aim: To assess the value of data-set coregistration of gamma camera and computed tomography (CT) in the assessment of targeting of humanized monoclonal antibody 3S193 labeled with indium-111 (111In-hu3S193) to small cell lung cancer (SCLC). Methods and materials: Ten patients (6 male and 4 female; mean age ± S.D., 60 ± 4 years), from an overall population of 20 patients with SCLCs expressing Lewis Y antigen at immunohistochemical analysis, completed four weekly injections of 111In-hu3S193 and underwent gamma camera imaging. All had had, as part of their baseline evaluation, fluorine-18 fluoro-2-deoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT). Two readers in consensus retrospectively coregistered the gamma camera images with the CT component of the FDG PET/CT by automatic or manual alignment. The resulting image sets were visually examined, and SCLC lesion targeting at coregistered gamma camera and CT was correlated side-by-side with the 18F-FDG uptake. Results: A total of 31 SCLC lesions with a thoracic (n = 13) or extrathoracic location (n = 18) were all positive on FDG PET/CT. Coregistration of the gamma camera to the CT demonstrated targeting of antibody to all lesions >2 cm (n = 20) and to a few lesions ≤2 cm (n = 2), with no visualization of most lesions ≤2 cm (n = 9). No 111In-hu3S193 uptake in normal tissues was observed. Conclusion: Coregistration of antibody gamma camera imaging to FDG PET/CT is feasible and allows valuable assessment of 111In-hu3S193 antibody targeting to SCLC lesions >2 cm, while lesions ≤2 cm show limited targeting.
Weber, Rebecca
2012-01-01
What can we compute--even with unlimited resources? Is everything within reach? Or are computations necessarily drastically limited, not just in practice, but theoretically? These questions are at the heart of computability theory. The goal of this book is to give the reader a firm grounding in the fundamentals of computability theory and an overview of currently active areas of research, such as reverse mathematics and algorithmic randomness. Turing machines and partial recursive functions are explored in detail, and vital tools and concepts including coding, uniformity, and diagonalization are described explicitly. From there the material continues with universal machines, the halting problem, parametrization and the recursion theorem, and thence to computability for sets, enumerability, and Turing reduction and degrees. A few more advanced topics round out the book before the chapter on areas of research. The text is designed to be self-contained, with an entire chapter of preliminary material including re...
Ellerman, David
2014-03-01
In models of QM over finite fields (e.g., Schumacher's "modal quantum theory" MQT), one finite field stands out, Z2, since Z2 vectors represent sets. QM (finite-dimensional) mathematics can be transported to sets, resulting in quantum mechanics over sets, or QM/sets. This gives a full probability calculus (unlike MQT with only zero-one modalities) that leads to a fulsome theory of QM/sets, including "logical" models of the double-slit experiment, Bell's Theorem, QIT, and QC. In QC over Z2 (where gates are non-singular matrices, as in MQT), a simple quantum algorithm (one gate plus one function evaluation) solves the Parity SAT problem (finding the parity of the sum of all values of an n-ary Boolean function). Classically, the Parity SAT problem requires 2^n function evaluations, in contrast to the one function evaluation required in the quantum algorithm. This is quantum speedup, but with all the calculations over Z2, just like classical computing. This shows definitively that the source of quantum speedup is not in the greater power of computing over the complex numbers, and confirms the idea that the source is in superposition.
Implementation of Steiner point of fuzzy set.
Liang, Jiuzhen; Wang, Dejiang
2014-01-01
This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. This paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set. Two strategies for computing the Steiner point of a fuzzy set are proposed. One is called the linear combination of Steiner points, computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method, which tries to find the optimal α-cut set approaching the fuzzy set. Stability analysis of the Steiner point of a fuzzy set is also studied. Some experiments on image processing are given, in which the two methods are applied to implement the Steiner point of a fuzzy image, and both strategies show their own advantages in computing the Steiner point of a fuzzy set.
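A sketch of the linear-combination strategy for a planar fuzzy set: compute the Steiner point of each crisp α-cut from its support function via the identity s(K) = (1/π) ∫ over the unit circle of h_K(u) u, then average the cut points; the α-proportional weighting is an assumption.

```python
# Steiner point of each alpha-cut by numerical quadrature of the support
# function, then a weighted combination over the cuts.
import numpy as np

def steiner_point(points, n_dirs=720):
    """Steiner point of the convex hull of 2-D `points` (s = (1/pi) int h(u) u)."""
    pts = np.asarray(points, dtype=float)
    theta = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    u = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # unit directions
    h = (pts @ u.T).max(axis=0)                           # support function h(u)
    dtheta = 2.0 * np.pi / n_dirs
    return (h[:, None] * u).sum(axis=0) * dtheta / np.pi

def fuzzy_steiner(alpha_cuts):
    """alpha_cuts: list of (alpha, points) pairs; weights ~ alpha (assumption)."""
    alphas = np.array([a for a, _ in alpha_cuts], dtype=float)
    cut_points = np.array([steiner_point(p) for _, p in alpha_cuts])
    w = alphas / alphas.sum()
    return (w[:, None] * cut_points).sum(axis=0)
```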
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.
The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS ..., beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine the MCS of the best in terms of in-sample likelihood criteria.
Energy Technology Data Exchange (ETDEWEB)
Svozil, K. [Univ. of Technology, Vienna (Austria)
1995-11-01
Inasmuch as physical theories are formalizable, set theory provides a framework for theoretical physics. Four speculations about the relevance of set-theoretical modeling for physics are presented: the role of transcendental set theory (i) in chaos theory, (ii) for paradoxical decompositions of solid three-dimensional objects, (iii) in the theory of effective computability (Church-Turing thesis) related to the possible "solution of supertasks," and (iv) for weak solutions. Several approaches to set theory and their advantages and disadvantages for physical applications are discussed: Cantorian "naive" (i.e., nonaxiomatic) set theory, constructivism, and operationalism. In the author's opinion, an attitude of "suspended attention" (a term borrowed from psychoanalysis) seems most promising for progress. Physical and set-theoretical entities must be operationalized wherever possible. At the same time, physicists should be open to "bizarre" or "mindboggling" new formalisms, which need not be operationalizable or testable at the time of their creation, but which may successfully lead to novel fields of phenomenology and technology.
BONFIRE: benchmarking computers and computer networks
Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker
2011-01-01
The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...
Directory of Open Access Journals (Sweden)
Evgeniy K. Khenner
2016-01-01
Abstract: The aim of the research is to draw the attention of the educational community to the phenomenon of computational thinking, which has been actively discussed in the foreign scientific and educational literature over the last decade, to substantiate its importance and practical utility, and to argue for its recognition in Russian education. Methods: The research is based on the analysis of foreign studies of the phenomenon of computational thinking and the ways of its formation in the process of education, and on comparing the notion of "computational thinking" with related concepts used in the Russian scientific and pedagogical literature. Results: The concept of "computational thinking" is analyzed from the point of view of intuitive understanding and of scientific and applied aspects. It is shown how computational thinking has evolved with the development of computer hardware and software. The practice-oriented interpretation of computational thinking, which is dominant among educators, is described along with some ways of its formation. It is shown that computational thinking is a metasubject result of general education as well as its tool. From the point of view of the author, purposeful development of computational thinking should be one of the tasks of Russian education. Scientific novelty: The author gives a theoretical justification of the role of computational thinking schemes as metasubject results of learning. The dynamics of the development of this concept is described, a process connected with the evolution of computer and information technologies as well as the increase in the number of tasks whose effective solution requires computational thinking. The author substantiates the claim that including "computational thinking" in the set of pedagogical concepts used in the national education system fills an existing gap. Practical significance: A new metasubject result of education associated with
Prospects of experimentally reachable beyond Standard Model ...
Indian Academy of Sciences (India)
2016-01-06
Since then, the theory has been experimentally verified to a ... Despite the fact that SM has unravelled the gauge origin of fundamental forces and the structure of Universe while successfully confronting numerous ...
Criteria for reachability of quantum states
Energy Technology Data Exchange (ETDEWEB)
Schirmer, S.G.; Solomon, A.I. [Quantum Processes Group and Department of Applied Maths, Open University, Milton Keynes (United Kingdom)]. E-mails: S.G.Schirmer@open.ac.uk; A.I.Solomon@open.ac.uk; Leahy, J.V. [Department of Mathematics and Institute of Theoretical Science, University of Oregon, Eugene, OR (United States)]. E-mail: leahy@math.uoregon.edu
2002-10-11
We address the question of which quantum states can be inter-converted under the action of a time-dependent Hamiltonian. In particular, we consider the problem as applied to mixed states, and investigate the difference between pure- and mixed-state controllabilities introduced in previous work. We provide a complete characterization of the eigenvalue spectrum for which the state is controllable under the action of the symplectic group. We also address the problem of which states can be prepared if the dynamical Lie group is not sufficiently large to allow the system to be controllable. (author)
DEFF Research Database (Denmark)
Vatrapu, Ravi; Hussain, Abid; Buus Lassen, Niels
2015-01-01
This paper argues that the basic premise of Social Network Analysis (SNA) -- namely that social reality is constituted by dyadic relations and that social interactions are determined by structural properties of networks -- is neither necessary nor sufficient for Big Social Data analytics of Facebook or Twitter data. However, there exists no other holistic computational social science approach beyond the relational sociology and graph theory of SNA. To address this limitation, this paper presents an alternative holistic approach to Big Social Data analytics called Social Set Analysis (SSA)…
Functional Multiple-Set Canonical Correlation Analysis
Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.
2012-01-01
We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…
International Nuclear Information System (INIS)
Hacker, M.; Hack, N.; Tiling, R.; Jakobs, T.; Nikolaou, K.; Becker, C.; Ziegler, F. von; Knez, A.; Koenig, A.; Klauss, V.
2007-01-01
Aim: In patients with stable angina pectoris both morphological and functional information about the coronary artery tree should be present before revascularization therapy is performed. High accuracy was shown for spiral computed tomography (MDCT) angiography acquired with a 64-slice CT scanner compared to invasive coronary angiography (ICA) in detecting "obstructive" coronary artery disease (CAD). Gated myocardial SPECT (MPI) is an established method for the noninvasive assessment of functional significance of coronary stenoses. Aim of the study was to evaluate the combination of 64-slice CT angiography plus MPI in comparison to ICA plus MPI in the detection of hemodynamically relevant coronary artery stenoses in a clinical setting. Patients, methods: 30 patients (63 ± 10.8 years, 23 men) with stable angina (21 with suspected, 9 with known CAD) were investigated. MPI, 64-slice CT angiography and ICA were performed, reversible and fixed perfusion defects were allocated to determining lesions separately for MDCT angiography and ICA. The combination of MDCT angiography plus MPI was compared to the results of ICA plus MPI. Results: Sensitivity, specificity, negative and positive predictive value for the combination of MDCT angiography plus MPI was 85%, 97%, 98% and 79%, respectively, on a vessel-based and 93%, 87%, 93% and 88%, respectively, on a patient-based level. 19 coronary arteries with stenoses ≥50% in both ICA and MDCT angiography showed no ischemia in MPI. Conclusion: The combination of 64-slice CT angiography and gated myocardial SPECT enabled a comprehensive non-invasive view of the anatomical and functional status of the coronary artery tree. (orig.)
Rough set classification based on quantum logic
Hassan, Yasser F.
2017-11-01
By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as quantum logic theory. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation, which is presented here. Theoretical analyses demonstrate that the new model for quantum rough sets has a new type of decision rule with less redundancy, which can be used to give accurate classification using principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than a logical or set-theoretic one. Experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in terms of finding optimal classifications.
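As an illustration of the classical machinery the paper builds on, the following sketch computes Pawlak lower and upper approximations from an indiscernibility relation. It is a minimal classical rough set example, not the paper's quantum model; the objects and attributes are hypothetical.

```python
from collections import defaultdict

def rough_approximations(universe, attributes, target):
    """Classical Pawlak rough set approximations of `target`.

    universe   -- dict mapping object id -> tuple of attribute values
    attributes -- indices of the attributes used for indiscernibility
    target     -- set of object ids to approximate
    """
    # Partition the universe into indiscernibility classes: objects
    # agreeing on all chosen attributes land in the same block.
    blocks = defaultdict(set)
    for obj, values in universe.items():
        key = tuple(values[a] for a in attributes)
        blocks[key].add(obj)

    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:     # block entirely inside the target
            lower |= block
        if block & target:      # block overlaps the target
            upper |= block
    return lower, upper         # boundary region = upper - lower

objs = {1: ('r', 0), 2: ('r', 1), 3: ('g', 0), 4: ('g', 0)}
lo, up = rough_approximations(objs, attributes=[0], target={1, 3})
print(lo, up)   # set() {1, 2, 3, 4}: {1, 3} is "rough" w.r.t. colour
```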
Computed tomography for radiographers
International Nuclear Information System (INIS)
Brooker, M.
1986-01-01
Computed tomography is regarded by many as a complicated union of sophisticated x-ray equipment and computer technology. This book overcomes these complexities. The rigid technicalities of the machinery and the clinical aspects of computed tomography are discussed including the preparation of patients, both physically and mentally, for scanning. Furthermore, the author also explains how to set up and run a computed tomography department, including advice on how the room should be designed
Computational physics an introduction
Vesely, Franz J
1994-01-01
Author Franz J. Vesely offers students an introductory text on computational physics, providing them with the important basic numerical/computational techniques. His unique text sets itself apart from others by focusing on specific problems of computational physics. The author also provides a selection of modern fields of research. Students will benefit from the appendixes which offer a short description of some properties of computing and machines and outline the technique of 'Fast Fourier Transformation.'
Robb, P; Pawlowski, B
1990-05-01
The results of measuring the ray trace speed and compilation speed of thirty-nine computers in fifty-seven configurations, ranging from personal computers to super computers, are described. A correlation of ray trace speed has been made with the LINPACK benchmark which allows the ray trace speed to be estimated using LINPACK performance data. The results indicate that the latest generation of workstations, using CPUs based on RISC (Reduced Instruction Set Computer) technology, are as fast or faster than mainframe computers in compute-bound situations.
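A minimal sketch of the kind of estimation the paper describes: fit measured ray-trace speed against LINPACK figures, then predict the speed of a machine from its published LINPACK performance. The benchmark numbers below are hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical benchmark pairs: (LINPACK MFLOPS, measured ray-trace
# speed in surfaces/second) for a handful of machines.
linpack = np.array([0.5, 2.0, 6.0, 13.0, 25.0])
raytrace = np.array([40.0, 170.0, 480.0, 1100.0, 2050.0])

# Fit raytrace ~ a * linpack + b by ordinary least squares.
a, b = np.polyfit(linpack, raytrace, deg=1)

def estimate_raytrace(mflops):
    """Predict ray-trace speed from published LINPACK performance."""
    return a * mflops + b

print(f"estimated speed at 10 MFLOPS: {estimate_raytrace(10.0):.0f} surf/s")
```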
Instruction Set Architectures for Quantum Processing Units
Britt, Keith A.; Humble, Travis S.
2017-01-01
Progress in quantum computing hardware raises questions about how these devices can be controlled, programmed, and integrated with existing computational workflows. We briefly describe several prominent quantum computational models, their associated quantum processing units (QPUs), and the adoption of these devices as accelerators within high-performance computing systems. Emphasizing the interface to the QPU, we analyze instruction set architectures based on reduced and complex instruction s...
Probabilistic Open Set Recognition
Jain, Lalit Prithviraj
Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize the cause is due to weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary
Ranking Specific Sets of Objects.
Maly, Jan; Woltran, Stefan
2017-01-01
Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking - jointly satisfying properties such as dominance and independence - on the whole power set of objects. However, in many applications certain elements from the entire power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out due to hard constraints or because they do not satisfy some background theory. In this paper, we treat the computational problem of whether an order on a given subset of the power set of elements, satisfying different variants of dominance and independence, can be found, given a ranking on the elements. We show that this problem is tractable for partial rankings and NP-complete for total rankings.
Cloud Computing for radiologists.
Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit
2012-07-01
Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.
Quantum computing and spintronics
International Nuclear Information System (INIS)
Kantser, V.
2007-01-01
Attempts to build a computer which can operate according to the laws of quantum mechanics have led to the concepts of quantum computing algorithms and hardware. In this review we highlight recent developments which point the way to quantum computing on the basis of solid-state nanostructures, after some general considerations concerning quantum information science and after introducing a set of basic requirements for any quantum computer proposal. One of the major directions of research on the way to quantum computing is to exploit the spin (in addition to the orbital) degree of freedom of the electron, giving birth to the field of spintronics. We address a semiconductor approach based on spin-orbit coupling in semiconductor nanostructures. (authors)
UpSet: Visualization of Intersecting Sets
Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter
2016-01-01
Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
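The core quantities behind UpSet's matrix view are exclusive intersection sizes: how many elements belong to exactly a given combination of sets. A small sketch of how they can be computed with plain Python sets; this is illustrative, not UpSet's actual implementation.

```python
from itertools import combinations

sets = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {4, 5, 6, 7},
}

# For every non-empty combination of set names, count the elements that
# belong to exactly those sets -- the quantities plotted in UpSet's matrix.
exclusive = {}
for r in range(1, len(sets) + 1):
    for combo in combinations(sorted(sets), r):
        inside = set.intersection(*(sets[n] for n in combo))
        others = set().union(*(sets[n] for n in sets if n not in combo))
        exclusive[combo] = len(inside - others)

for combo, size in sorted(exclusive.items(), key=lambda kv: -kv[1]):
    print("&".join(combo).ljust(6), size)
```

The exclusive sizes partition the universe, so they sum to the total number of distinct elements; aggregates (e.g., "everything involving A") are then sums over rows of this table.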
On Intuitionistic Fuzzy Sets Theory
Atanassov, Krassimir T
2012-01-01
This book aims to be a comprehensive and accurate survey of state-of-the-art research on intuitionistic fuzzy sets theory and could be considered a continuation and extension of the author's previous book on Intuitionistic Fuzzy Sets, published by Springer in 1999 (Atanassov, Krassimir T., Intuitionistic Fuzzy Sets, Studies in Fuzziness and Soft Computing, ISBN 978-3-7908-1228-2, 1999). Since the aforementioned book appeared, the research activity of the author within the area of intuitionistic fuzzy sets has been expanding in many directions. The results of the author's most recent work covering the past 12 years, as well as the newest general ideas and open problems in this field, have therefore been collected in this new book.
Introduction to morphogenetic computing
Resconi, Germano; Xu, Guanglin
2017-01-01
This book offers a concise introduction to morphogenetic computing, showing that its use makes global and local relations, defects in crystal non-Euclidean geometry databases with source and sink, genetic algorithms, and neural networks more stable and efficient. It also presents applications to database, language, nanotechnology with defects, biological genetic structure, electrical circuit, and big data structure. In Turing machines, input and output states form a system – when the system is in one state, the input is transformed into output. This computation is always deterministic and without any possible contradiction or defects. In natural computation there are defects and contradictions that have to be solved to give a coherent and effective computation. The new computation generates the morphology of the system that assumes different forms in time. Genetic process is the prototype of the morphogenetic computing. At the Boolean logic truth value, we substitute a set of truth (active sets) values with...
DEFF Research Database (Denmark)
Thomas, Graham; Gade, Rikke; Moeslund, Thomas B.
2017-01-01
fixed to players or equipment is generally not possible. This provides a rich set of opportunities for the application of computer vision techniques to help the competitors, coaches and audience. This paper discusses a selection of current commercial applications that use computer vision for sports...
Learning with Ubiquitous Computing
Rosenheck, Louisa
2008-01-01
If ubiquitous computing becomes a reality and is widely adopted, it will inevitably have an impact on education. This article reviews the background of ubiquitous computing and current research projects done involving educational "ubicomp." Finally it explores how ubicomp may and may not change education in both formal and informal settings and…
Human Computer Music Performance
Dannenberg, Roger B.
2012-01-01
Human Computer Music Performance (HCMP) is the study of music performance by live human performers and real-time computer-based performers. One goal of HCMP is to create a highly autonomous artificial performer that can fill the role of a human, especially in a popular music setting. This will require advances in automated music listening and understanding, new representations for music, techniques for music synchronization, real-time human-computer communication, music generation, sound synt...
Discrete computational structures
Korfhage, Robert R
1974-01-01
Discrete Computational Structures describes discrete mathematical concepts that are important to computing, covering necessary mathematical fundamentals, computer representation of sets, graph theory, storage minimization, and bandwidth. The book also explains conceptual framework (Gorn trees, searching, subroutines) and directed graphs (flowcharts, critical paths, information network). The text discusses algebra, particularly as it applies to computing, and concentrates on semigroups, groups, lattices, and propositional calculus, including a new tabular method of Boolean function minimization. The text emphasize
Fuzzy sets, rough sets, multisets and clustering
Dahlbom, Anders; Narukawa, Yasuo
2017-01-01
This book is dedicated to Prof. Sadaaki Miyamoto and presents cutting-edge papers in some of the areas in which he contributed. Bringing together contributions by leading researchers in the field, it concretely addresses clustering, multisets, rough sets and fuzzy sets, as well as their applications in areas such as decision-making. The book is divided in four parts, the first of which focuses on clustering and classification. The second part puts the spotlight on multisets, bags, fuzzy bags and other fuzzy extensions, while the third deals with rough sets. Rounding out the coverage, the last part explores fuzzy sets and decision-making.
International Nuclear Information System (INIS)
Lloyd, S.
1992-01-01
Digital computers are machines that can be programmed to perform logical and arithmetical operations. Contemporary digital computers are "universal," in the sense that a program that runs on one computer can, if properly compiled, run on any other computer that has access to enough memory space and time. Any one universal computer can simulate the operation of any other; and the set of tasks that any such machine can perform is common to all universal machines. Since Bennett's discovery that computation can be carried out in a non-dissipative fashion, a number of Hamiltonian quantum-mechanical systems have been proposed whose time-evolutions over discrete intervals are equivalent to those of specific universal computers. The first quantum-mechanical treatment of computers was given by Benioff, who exhibited a Hamiltonian system with a basis whose members corresponded to the logical states of a Turing machine. In order to make the Hamiltonian local, in the sense that its structure depended only on the part of the computation being performed at that time, Benioff found it necessary to make the Hamiltonian time-dependent. Feynman discovered a way to make the computational Hamiltonian both local and time-independent by incorporating the direction of computation in the initial condition. In Feynman's quantum computer, the program is a carefully prepared wave packet that propagates through different computational states. Deutsch presented a quantum computer that exploits the possibility of existing in a superposition of computational states to perform tasks that a classical computer cannot, such as generating purely random numbers, and carrying out superpositions of computations as a method of parallel processing. In this paper, we show that such computers, by virtue of their common function, possess a common form for their quantum dynamics.
Computational thinking as an emerging competence domain
Yadav, A.; Good, J.; Voogt, J.; Fisser, P.; Mulder, M.
2016-01-01
Computational thinking is a problem-solving skill set, which includes problem decomposition, algorithmic thinking, abstraction, and automation. Even though computational thinking draws upon concepts fundamental to computer science (CS), it has broad application to all disciplines. It has been
Computational force, mass, and energy
International Nuclear Information System (INIS)
Numrich, R.W.
1997-01-01
This paper describes a correspondence between computational quantities commonly used to report computer performance measurements and mechanical quantities from classical Newtonian mechanics. It defines a set of three fundamental computational quantities that are sufficient to establish a system of computational measurement. From these quantities, it defines derived computational quantities that have analogous physical counterparts. These computational quantities obey three laws of motion in computational space. The solutions to the equations of motion, with appropriate boundary conditions, determine the computational mass of the computer. Computational forces, with magnitudes specific to each instruction and to each computer, overcome the inertia represented by this mass. The paper suggests normalizing the computational mass scale by picking the mass of a register on the CRAY-1 as the standard unit of mass
Programming in biomolecular computation
DEFF Research Database (Denmark)
Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue
2011-01-01
Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We identify a number of common features in programming that seem conspicuously absent from the literature on biomolecular computing; to partially redress this absence, we introduce a model of computation that is evidently programmable, by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways…
International Nuclear Information System (INIS)
Brockhoff, R.C.; Hendricks, J.S.
1994-09-01
The MCNP test set is used to test the MCNP code after installation on various computer platforms. For MCNP4 and MCNP4A this test set included 25 test problems designed to test as many features of the MCNP code as possible. A new and better test set has been devised to increase coverage of the code from 85% to 97% with 28 problems. The new test set is as fast as and shorter than the MCNP4A test set. The authors describe the methodology for devising the new test set, the features that were not covered in the MCNP4A test set, and the changes in the MCNP4A test set that have been made for MCNP4B and its developmental versions. Finally, new bugs uncovered by the new test set and a compilation of all known MCNP4A bugs are presented
CERN. Geneva
2011-01-01
The past decade has witnessed a momentous transformation in the way people interact with each other. Content is now co-produced, shared, classified, and rated by millions of people, while attention has become the ephemeral and valuable resource that everyone seeks to acquire. This talk will describe how social attention determines the production and consumption of content within both the scientific community and social media, how its dynamics can be used to predict the future and the role that social media plays in setting the public agenda. About the speaker Bernardo Huberman is a Senior HP Fellow and Director of the Social Computing Lab at Hewlett Packard Laboratories. He received his Ph.D. in Physics from the University of Pennsylvania, and is currently a Consulting Professor in the Department of Applied Physics at Stanford University. He originally worked in condensed matter physics, ranging from superionic conductors to two-dimensional superfluids, and made contributions to the theory of critical p...
Woods, Damien; Naughton, Thomas J.
2008-01-01
We consider optical computers that encode data using images and compute by transforming such images. We give an overview of a number of such optical computing architectures, including descriptions of the type of hardware commonly used in optical computing, as well as some of the computational efficiencies of optical devices. We go on to discuss optical computing from the point of view of computational complexity theory, with the aim of putting some old, and some very recent, re...
Fast Sparse Level Sets on Graphics Hardware
Jalba, Andrei C.; Laan, Wladimir J. van der; Roerdink, Jos B.T.M.
The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive
Computer Access and Flowcharting as Variables in Learning Computer Programming.
Ross, Steven M.; McCormick, Deborah
Manipulation of flowcharting was crossed with in-class computer access to examine flowcharting effects in the traditional lecture/laboratory setting and in a classroom setting where online time was replaced with manual simulation. Seventy-two high school students (24 male and 48 female) enrolled in a computer literacy course served as subjects.…
DEFF Research Database (Denmark)
Avery, John Scales; Rettrup, Sten; Avery, James Emil
automatically with computer techniques. The method has a wide range of applicability, and can be used to solve difficult eigenvalue problems in a number of fields. The book is of special interest to quantum theorists, computer scientists, computational chemists and applied mathematicians....
Directory of Open Access Journals (Sweden)
Antonio P. BERBER SARDINHA
1999-02-01
Full Text Available This study presents a methodology for the identification of coherent word sets. Eight sets were initially identified and further grouped into two main sets: a 'company' set and a 'non-company' set. These two sets shared very few collocates, and therefore they seemed to represent distinct topics. The positions of the words in the 'company' and 'non-company' sets across the text were computed. The results indicated that the 'non-company' sets referred to 'company' implicitly. Finally, the key words were compared to an automatic abridgment of the text, which revealed that nearly all key words were present in the abridgment. This was interpreted as suggesting that the key words may indeed represent the main contents of the text.
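A rough sketch of the positional computation the study describes: recording where each word set's members fall across a tokenized text, so the distribution of the two topics can be compared. The tokenizer, words, and sets below are hypothetical; this is not the study's code.

```python
def set_positions(tokens, word_sets):
    """Locate where each word set's members occur across a text.

    Returns, per set, the relative positions (0..1) of member tokens,
    so the 'waves' of each topic through the text can be compared.
    """
    n = len(tokens)
    hits = {name: [] for name in word_sets}
    for i, tok in enumerate(tokens):
        for name, words in word_sets.items():
            if tok.lower() in words:
                hits[name].append(i / max(n - 1, 1))
    return hits

text = "the company profits rose while staff numbers fell the company hired".split()
word_sets = {"company": {"company", "profits", "staff"},
             "non-company": {"rose", "fell", "hired"}}
print(set_positions(text, word_sets))
```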
International Nuclear Information System (INIS)
Worrell, R.B.
1985-05-01
The Set Equation Transformation System (SETS) is used to achieve the symbolic manipulation of Boolean equations. Symbolic manipulation involves changing equations from their original forms into more useful forms - particularly by applying Boolean identities. The SETS program is an interpreter which reads, interprets, and executes SETS user programs. The user writes a SETS user program specifying the processing to be achieved and submits it, along with the required data, for execution by SETS. Because of the general nature of SETS, i.e., the capability to manipulate Boolean equations regardless of their origin, the program has been used for many different kinds of analysis
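The kind of identity-driven reduction a SETS user program requests can be illustrated with sympy's Boolean tooling. A minimal sketch, not the SETS system itself; the equation is a made-up example.

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, simplify_logic

# Hypothetical fault-tree-style equation:
# TOP = (A & B) | (A & B & C) | (A & D)
A, B, C, D = symbols("A B C D")
top = Or(And(A, B), And(A, B, C), And(A, D))

# Applying Boolean identities (absorption: X | (X & Y) = X) yields a
# reduced disjunctive form, analogous to a SETS transformation step.
print(simplify_logic(top, form="dnf"))   # (A & B) | (A & D)
```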
Physicists set new record for network data transfer
2007-01-01
"An international team of physicists, computer scientists, and network engineers joined forces to set new records for sustained data transfer between storage systems durint the SuperComputing 2006 (SC06) Bandwidth Challenge (BWC). (3 pages)
Accuracy in Robot Generated Image Data Sets
DEFF Research Database (Denmark)
Aanæs, Henrik; Dahl, Anders Bjorholm
2015-01-01
In this paper we present a practical innovation concerning how to achieve high accuracy of camera positioning when using a 6-axis industrial robot to generate high-quality data sets for computer vision. This innovation is based on the realization that, to a very large extent, the robot's positioning error is deterministic and can as such be calibrated away. We have successfully used this innovation in our efforts for creating data sets for computer vision. Since the use of this innovation has a significant effect on the data set quality, we here present it in some detail, to better aid others...
International Nuclear Information System (INIS)
Bauer, H.; Black, I.; Heusler, A.; Hoeptner, G.; Krafft, F.; Lang, R.; Moellenkamp, R.; Mueller, W.; Mueller, W.F.; Schati, C.; Schmidt, A.; Schwind, D.; Weber, G.
1983-01-01
The computer group has been reorganized to take charge of the general-purpose computers DEC10 and VAX and the computer network (Dataswitch, DECnet, IBM - connections to GSI and IPP, preparation for Datex-P). (orig.)
Moncarz, Roger
2000-01-01
Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)
Cook, Perry R.
This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).
Directory of Open Access Journals (Sweden)
Bruno Barras
2010-01-01
Full Text Available This work is about formalizing models of various type theories of the Calculus of Constructions family. Here we focus on set theoretical models. The long-term goal is to build a formal set theoretical model of the Calculus of Inductive Constructions, so we can be sure that Coq is consistent with the language used by most mathematicians. One aspect of this work is to axiomatize several set theories: ZF, possibly with inaccessible cardinals, and HF, the theory of hereditarily finite sets. On top of these theories we have developed a piece of the usual set theoretical construction of functions, ordinals and fixpoint theory. We then proved sound several models of the Calculus of Constructions, its extension with an infinite hierarchy of universes, and its extension with the inductive type of natural numbers where recursion follows the type-based termination approach. The other aspect is to try to discharge (most of) these assumptions. The goal here is rather to compare the theoretical strengths of all these formalisms. As already noticed by Werner, the replacement axiom of ZF in its general form seems to require a type-theoretical axiom of choice (TTAC).
Morozov, Albert D; Dragunov, Timothy N; Malysheva, Olga V
1999-01-01
This book deals with the visualization and exploration of invariant sets (fractals, strange attractors, resonance structures, patterns etc.) for various kinds of nonlinear dynamical systems. The authors have created a special Windows 95 application called WInSet, which allows one to visualize the invariant sets. A WInSet installation disk is enclosed with the book.The book consists of two parts. Part I contains a description of WInSet and a list of the built-in invariant sets which can be plotted using the program. This part is intended for a wide audience with interests ranging from dynamical
Setting priorities for safeguards upgrades
International Nuclear Information System (INIS)
Al-Ayat, R.A.; Judd, B.R.; Patenaude, C.J.; Sicherman, A.
1987-01-01
This paper describes an analytic approach and a computer program for setting priorities among safeguards upgrades. The approach provides safeguards decision makers with a systematic method for allocating their limited upgrade resources. The priorities are set based on the upgrades' cost and their contribution to safeguards effectiveness. Safeguards effectiveness is measured by the probability of defeat for a spectrum of potential insider and outsider adversaries. The computer program, MI$ER, can be used alone or as a companion to ET and SAVI, programs designed to evaluate safeguards effectiveness against insider and outsider threats, respectively. Setting the priorities requires judgments about the relative importance (threat likelihoods and consequences) of insider and outsider threats. Although these judgments are inherently subjective, MI$ER can analyze the sensitivity of the upgrade priorities to these weights and determine whether or not they are critical to the priority ranking. MI$ER produces tabular and graphical results for comparing benefits and identifying the most cost-effective upgrades for a given expenditure. This framework provides decision makers with an explicit and consistent analysis to support their upgrade decisions and to allocate the safeguards resources in a cost-effective manner.
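One simple way to set priorities from cost and effectiveness data is a greedy benefit-per-dollar ranking under a budget, sketched below. This is an illustrative stand-in for the idea, not the actual MI$ER algorithm, and all upgrade names and numbers are hypothetical.

```python
# Hypothetical upgrade candidates: (name, cost in dollars, increase in
# probability of defeating the assumed adversary spectrum).
upgrades = [
    ("vault door interlock",  50_000, 0.10),
    ("two-person rule",       10_000, 0.06),
    ("portal metal detector", 30_000, 0.05),
    ("CCTV coverage",         20_000, 0.04),
]

# Rank by effectiveness gained per dollar, then spend a fixed budget
# greedily -- one crude way to prioritize safeguards upgrades.
budget = 70_000
chosen, spent = [], 0
for name, cost, gain in sorted(upgrades, key=lambda u: u[2] / u[1], reverse=True):
    if spent + cost <= budget:
        chosen.append(name)
        spent += cost
print(chosen, spent)
```

A real analysis would also weight insider versus outsider threats and test how sensitive the ranking is to those weights, as the abstract describes.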
Algorithms for detecting and analysing autocatalytic sets.
Hordijk, Wim; Smith, Joshua I; Steel, Mike
2015-01-01
Autocatalytic sets are considered to be fundamental to the origin of life. Prior theoretical and computational work on the existence and properties of these sets has relied on a fast algorithm for detecting self-sustaining autocatalytic sets in chemical reaction systems. Here, we introduce and apply a modified version and several extensions of the basic algorithm: (i) a modification aimed at reducing the number of calls to the computationally most expensive part of the algorithm, (ii) the application of a previously introduced extension of the basic algorithm to sample the smallest possible autocatalytic sets within a reaction network, and the application of a statistical test which provides a probable lower bound on the number of such smallest sets, (iii) the introduction and application of another extension of the basic algorithm to detect autocatalytic sets in a reaction system where molecules can also inhibit (as well as catalyse) reactions, (iv) a further, more abstract, extension of the theory behind searching for autocatalytic sets. (i) The modified algorithm outperforms the original one in the number of calls to the computationally most expensive procedure, which, in some cases, also leads to a significant improvement in overall running time, (ii) our statistical test provides strong support for the existence of very large numbers (even millions) of minimal autocatalytic sets in a well-studied polymer model, where these minimal sets share about half of their reactions on average, (iii) "uninhibited" autocatalytic sets can be found in reaction systems that allow inhibition, but their number and sizes depend on the level of inhibition relative to the level of catalysis. (i) Improvements in the overall running time when searching for autocatalytic sets can potentially be obtained by using a modified version of the algorithm, (ii) the existence of large numbers of minimal autocatalytic sets can have important consequences for the possible evolvability of
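The basic detection loop the paper builds on can be sketched compactly: compute the closure of the food set under the current reactions, prune reactions whose reactants are unreachable or which lack a reachable catalyst, and repeat to a fixed point. A simplified illustration of the RAF idea, not the authors' implementation.

```python
def closure(food, reactions):
    """Molecules producible from the food set using the given reactions
    (catalysis is ignored here; it is checked in the pruning step)."""
    avail = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= avail and not set(products) <= avail:
                avail |= set(products)
                changed = True
    return avail

def max_raf(food, reactions):
    """Iteratively prune unsupported or uncatalysed reactions."""
    current = list(reactions)
    while True:
        avail = closure(food, current)
        kept = [r for r in current
                if set(r[0]) <= avail                # reactants reachable
                and any(c in avail for c in r[2])]   # some catalyst present
        if len(kept) == len(current):
            return kept
        current = kept

# A reaction is (reactants, products, catalysts); 'zz' is never producible.
rs = [(("a", "b"), ("ab",), ("ba",)),
      (("b", "a"), ("ba",), ("ab",)),
      (("ab", "c"), ("abc",), ("zz",))]
print(max_raf({"a", "b", "c"}, rs))   # keeps the mutually catalytic pair
```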
U.S. Department of Health & Human Services — The VSAC provides downloadable access to all official versions of vocabulary value sets contained in the 2014 Clinical Quality Measures (CQMs). Each value set...
Barthel, D; Fischer, K I; Nolte, S; Otto, C; Meyrose, A-K; Reisinger, S; Dabs, M; Thyen, U; Klein, M; Muehlan, H; Ankermann, T; Walter, O; Rose, M; Ravens-Sieberer, U
2016-03-01
To describe the implementation process of a computer-adaptive test (CAT) for measuring health-related quality of life (HRQoL) of children and adolescents in two pediatric clinics in Germany. The study focuses on the feasibility and user experience with the Kids-CAT, particularly the patients' experience with the tool and the pediatricians' experience with the Kids-CAT Report. The Kids-CAT was completed by 312 children and adolescents with asthma, diabetes or rheumatoid arthritis. The test was applied during four clinical visits over a 1-year period. A feedback report with the test results was made available to the pediatricians. To assess both feasibility and acceptability, a multimethod research design was used. To assess the patients' experience with the tool, the children and adolescents completed a questionnaire. To assess the clinicians' experience, two focus groups were conducted with eight pediatricians. The children and adolescents indicated that the Kids-CAT was easy to complete. All pediatricians reported that the Kids-CAT was straightforward and easy to understand and integrate into clinical practice; they also expressed that routine implementation of the tool would be desirable and that the report was a valuable source of information, facilitating the assessment of self-reported HRQoL of their patients. The Kids-CAT was considered an efficient and valuable tool for assessing HRQoL in children and adolescents. The Kids-CAT Report promises to be a useful adjunct to standard clinical care with the potential to improve patient-physician communication, enabling pediatricians to evaluate and monitor their young patients' self-reported HRQoL.
Baker, Mark; Beltran, Jane; Buell, Jason; Conrey, Brian; Davis, Tom; Donaldson, Brianna; Detorre-Ozeki, Jeanne; Dibble, Leila; Freeman, Tom; Hammie, Robert; Montgomery, Julie; Pickford, Avery; Wong, Justine
2013-01-01
Sets in the game "Set" are lines in a certain four-dimensional space. Here we introduce planes into the game, leading to interesting mathematical questions, some of which we solve, and to a wonderful variation on the game "Set," in which every tableau of nine cards must contain at least one configuration for a player to pick up.
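The underlying arithmetic is easy to state: encoding each card as a point of (Z_3)^4, three cards form a Set exactly when each coordinate sums to 0 mod 3 (the values are all equal or all different per feature), which is the collinearity condition the paper starts from. A small sketch:

```python
from itertools import combinations, product

# Encode each card as a point in (Z_3)^4: one coordinate per feature
# (number, colour, shading, shape), each taking values 0, 1, 2.
def is_set(c1, c2, c3):
    """Three distinct cards form a Set iff every coordinate sums to 0
    modulo 3 -- equivalently, the three points are collinear in (Z_3)^4."""
    return all((a + b + c) % 3 == 0 for a, b, c in zip(c1, c2, c3))

deck = list(product(range(3), repeat=4))
tableau = deck[:9]   # nine cards: here, all cards with the first two features fixed
sets_found = [t for t in combinations(tableau, 3) if is_set(*t)]
print(len(sets_found))   # 12
```

The nine cards chosen above form a plane (a 2-flat of the geometry), and a plane over Z_3 contains exactly 12 lines, which is why 12 Sets are found; planes of this kind are precisely the objects the paper introduces into the game.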
Suppes, Patrick
1972-01-01
This clear and well-developed approach to axiomatic set theory is geared toward upper-level undergraduates and graduate students. It examines the basic paradoxes and history of set theory and advanced topics such as relations and functions, equipollence, finite sets and cardinal numbers, rational and real numbers, and other subjects. 1960 edition.
DEFF Research Database (Denmark)
Rodríguez, J. Tinguaro; Franco de los Ríos, Camilo; Gómez, Daniel
2015-01-01
In this paper we want to stress the relevance of paired fuzzy sets, as already proposed in previous works of the authors, as a family of fuzzy sets that offers a unifying view for different models based upon the opposition of two fuzzy sets, simply allowing the existence of different types...
Ulmann, Bernd
2013-01-01
This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.
DEFF Research Database (Denmark)
Vallgårda, Anna K. A.; Redström, Johan
2007-01-01
Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer's point of view, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture.
Efficient processing of containment queries on nested sets
Ibrahim, A.; Fletcher, G.H.L.
2013-01-01
We study the problem of computing containment queries on sets which can have both atomic and set-valued objects as elements, i.e., nested sets. Containment is a fundamental query pattern with many basic applications. Our study of nested set containment is motivated by the ubiquity of nested data in
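One plausible semantics for such queries can be sketched with frozensets: atomic elements match by equality, set-valued elements by recursive containment. This only illustrates the query pattern; the paper itself is about processing such queries efficiently, and its exact matching semantics may differ.

```python
def to_nested(obj):
    """Build a hashable nested set: containers become frozensets, atoms stay."""
    if isinstance(obj, (list, set, frozenset, tuple)):
        return frozenset(to_nested(x) for x in obj)
    return obj

def contains(query, data):
    """Containment on nested sets: every element of `query` must be
    matched in `data` -- set-valued elements by recursive containment,
    atomic elements by equality."""
    for q in query:
        if isinstance(q, frozenset):
            if not any(isinstance(d, frozenset) and contains(q, d) for d in data):
                return False
        elif q not in data:
            return False
    return True

doc = to_nested(["a", ["b", "c"], ["d", ["e"]]])
q1 = to_nested([["b"]])   # matches inside {"b", "c"}
q2 = to_nested(["x"])
print(contains(q1, doc), contains(q2, doc))   # True False
```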
Nonlinear canonical correlation analysis with k sets of variables
van der Burg, Eeke; de Leeuw, Jan
1987-01-01
The multivariate technique OVERALS is introduced as a non-linear generalization of canonical correlation analysis (CCA). First, two-set CCA is introduced. Two-set CCA is a technique that computes linear combinations of sets of variables that correlate in an optimal way. Two-set CCA is then
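For the linear two-set case, the first canonical pair can be computed directly via an SVD of the whitened cross-covariance. A minimal numpy sketch of plain linear CCA (not OVERALS' non-linear generalization), with synthetic data sharing one latent signal:

```python
import numpy as np

def inv_sqrt(S):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def cca_first_pair(X, Y):
    """First canonical correlation between data matrices X (n x p) and
    Y (n x q): weights a, b maximizing corr(X a, Y b), via SVD of the
    whitened cross-covariance. Minimal sketch, no regularization."""
    X, Y = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Sxx, Syy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(K)
    a, b = inv_sqrt(Sxx) @ U[:, 0], inv_sqrt(Syy) @ Vt[0]
    return s[0], a, b

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 1))                  # shared latent signal
X = np.hstack([Z + 0.5 * rng.normal(size=(200, 1)) for _ in range(3)])
Y = np.hstack([Z + 0.5 * rng.normal(size=(200, 1)) for _ in range(2)])
rho, a, b = cca_first_pair(X, Y)
print(round(rho, 2))   # close to 1: the two sets share the latent Z
```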
Spatial Computing and Spatial Practices
DEFF Research Database (Denmark)
Brodersen, Anders; Büsher, Monika; Christensen, Michael
2007-01-01
The gathering momentum behind the research agendas of pervasive, ubiquitous and ambient computing, set in motion by Mark Weiser (1991), offer dramatic opportunities for information systems design. They raise the possibility of "putting computation where it belongs" by exploding computing power out...... the "disappearing computer" we have, therefore, carried over from previous research an interdisciplinary perspective, and a focus on the sociality of action (Suchman 1987)....
Enderton, Herbert B
1977-01-01
This is an introductory undergraduate textbook in set theory. In mathematics these days, essentially everything is a set. Some knowledge of set theory is a necessary part of the background everyone needs for further study of mathematics. It is also possible to study set theory for its own interest--it is a subject with intriguing results about simple objects. This book starts with material that nobody can do without. There is no end to what can be learned of set theory, but here is a beginning.
Scarani, Valerio
1998-01-01
The aim of this thesis was to explain what quantum computing is. The information for the thesis was gathered from books, scientific publications, and news articles. The analysis of the information revealed that quantum computing can be broken down to three areas: theories behind quantum computing explaining the structure of a quantum computer, known quantum algorithms, and the actual physical realizations of a quantum computer. The thesis reveals that moving from classical memor...
DEFF Research Database (Denmark)
Vatrapu, Ravi; Mukkamala, Raghava Rao; Hussain, Abid
2016-01-01
… automata and agent-based modeling). However, when it comes to organizational and societal units of analysis, there exists no approach to conceptualize, model, analyze, explain, and predict social media interactions as individuals' associations with ideas, values, identities, and so on. To address this, the paper presents conceptual and formal models of social data, and an analytical framework for combining big social data sets with organizational and societal data sets. Three empirical studies of big social data are presented to illustrate and demonstrate social set analysis in terms of fuzzy set-theoretical sentiment analysis, crisp set-theoretical interaction analysis, and event-studies-oriented set-theoretical visualizations. Implications for big data analytics, current limitations of the set-theoretical approach, and future directions are outlined.
Counting convex polygons in planar point sets
Mitchell, J.S.B.; Rote, G.; Sundaram, Gopalakrishnan; Woeginger, G.J.
1995-01-01
Given a set S of n points in the plane, we compute in time O(n³) the total number of convex polygons whose vertices are a subset of S. We give an O(m·n³) algorithm for computing the number of convex k-gons with vertices in S, for all values k = 3,…, m; previously known bounds were exponential.
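The quantity being counted can be pinned down by brute force: enumerate vertex subsets and test convex position, here via a monotone-chain hull and assuming no three points are collinear. This exponential sketch only defines the problem; the paper's contribution is the polynomial O(n³) counting algorithm.

```python
from itertools import combinations

def cross(o, a, b):
    """Cross product of vectors o->a and o->b (positive = left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_position(pts):
    """True iff all points are vertices of their convex hull
    (Andrew's monotone chain; assumes no three points collinear)."""
    pts = sorted(pts)
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = half(pts)[:-1] + half(pts[::-1])[:-1]
    return len(hull) == len(pts)

S = [(0, 0), (4, 0), (2, 1), (0, 4), (4, 4), (2, 5)]
count = sum(1 for k in range(3, len(S) + 1)
              for sub in combinations(S, k) if convex_position(sub))
print(count)   # total number of convex polygons with vertices in S
```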
Programming in biomolecular computation
DEFF Research Database (Denmark)
Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue
2010-01-01
…, by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways and without arcane encodings of data and algorithm); it is also uniform: new “hardware” is not needed to solve new problems; and (last but not least) it is Turing complete in a strong sense: a universal algorithm exists, that is able to execute any program, and is not asymptotically inefficient. A prototype model has been implemented (for now in silico on a conventional computer). This work opens new perspectives on just how computation may be specified at the biological level.
Neuroscience, brains, and computers
Directory of Open Access Journals (Sweden)
Giorno Maria Innocenti
2013-07-01
Full Text Available This paper addresses the role of the neurosciences in establishing what the brain is and how states of the brain relate to states of the mind. The brain is viewed as a computational device performing operations on symbols. However, the brain is a special purpose computational device designed by evolution and development for survival and reproduction, in close interaction with the environment. The hardware of the brain (its structure) is very different from that of man-made computers. The computational style of the brain is also very different from traditional computers: the computational algorithms, instead of being sets of external instructions, are embedded in brain structure. Concerning the relationships between brain and mind a number of questions lie ahead. One of them is why and how only the human brain grasped the notion of God, probably only at the evolutionary stage attained by Homo sapiens.
Offline computing and networking
International Nuclear Information System (INIS)
Appel, J.A.; Avery, P.; Chartrand, G.
1985-01-01
This note summarizes the work of the Offline Computing and Networking Group. The report is divided into two sections; the first deals with the computing and networking requirements and the second with the proposed way to satisfy those requirements. In considering the requirements, we have considered two types of computing problems. The first is CPU-intensive activity such as production data analysis (reducing raw data to DST), production Monte Carlo, or engineering calculations. The second is physicist-intensive computing such as program development, hardware design, physics analysis, and detector studies. For both types of computing, we examine a variety of issues. These included a set of quantitative questions: how much CPU power (for turn-around and for through-put), how much memory, mass-storage, bandwidth, and so on. There are also very important qualitative issues: what features must be provided by the operating system, what tools are needed for program design, code management, database management, and for graphics
2007-01-01
The 2007 CERN School of Computing, organised by CERN in collaboration with the University of Split (FESB) will be held from 20 to 31 August 2007 in Dubrovnik, Croatia. It is aimed at postgraduate students and research workers with a few years' experience in scientific physics, computing or related fields. Special themes this year are: GRID Technologies: The Grid track delivers unique theoretical and hands-on education on some of the most advanced GRID topics; Software Technologies: The Software track addresses some of the most relevant modern techniques and tools for large scale distributed software development and handling as well as for computer security; Physics Computing: The Physics Computing track focuses on informatics topics specific to the HEP community. After setting-the-scene lectures, it addresses data acquisition and ROOT. Grants from the European Union Framework Programme 6 (FP6) are available to participants to cover part or all of the cost of the School. More information can be found at...
Using SETS to find minimal cut sets in large fault trees
International Nuclear Information System (INIS)
Worrell, R.B.; Stack, D.W.
1978-01-01
An efficient algebraic algorithm for finding the minimal cut sets for a large fault tree was defined and a new procedure which implements the algorithm was added to the Set Equation Transformation System (SETS). The algorithm includes the identification and separate processing of independent subtrees, the coalescing of consecutive gates of the same kind, the creation of additional independent subtrees, and the derivation of the fault tree stem equation in stages. The computer time required to determine the minimal cut sets using these techniques is shown to be substantially less than the computer time required to determine the minimal cut sets when these techniques are not employed. It is shown for a given example that the execution time required to determine the minimal cut sets can be reduced from 7,686 seconds to 7 seconds when all of these techniques are employed
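The underlying reduction is Boolean: expand the TOP equation into disjunctive normal form and let absorption delete non-minimal terms, leaving the minimal cut sets. A toy sympy sketch of that idea (not the SETS implementation, and without its independent-subtree and gate-coalescing optimizations):

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, to_dnf

# Small fault tree: TOP = G1 & G2, with G1 = A | B and G2 = A | C.
A, B, C = symbols("A B C")
top = And(Or(A, B), Or(A, C))

# Expanding to DNF with simplification applies absorption
# (X | (X & Y) = X), so only minimal cut sets survive.
print(to_dnf(top, simplify=True))   # A | (B & C) -> cut sets {A}, {B, C}
```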
DEFF Research Database (Denmark)
Nygaard, Jens Vinge
2017-01-01
The Health Technology Program at Aarhus University applies computational biology to investigate the heterogeneity of tumours...
Indian Academy of Sciences (India)
A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...
Directory of Open Access Journals (Sweden)
K. Shalini
2013-01-01
Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.
Czech Academy of Sciences Publication Activity Database
Doležal, Martin; Rmoutil, M.; Vejnar, B.; Vlasák, V.
2016-01-01
Roč. 440, č. 2 (2016), s. 922-939 ISSN 0022-247X Institutional support: RVO:67985840 Keywords : Haar meager set * Haar null set * Polish group Subject RIV: BA - General Mathematics Impact factor: 1.064, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022247X1600305X
Setting goals in psychotherapy
DEFF Research Database (Denmark)
Emiliussen, Jakob; Wagoner, Brady
2013-01-01
The present study is concerned with the ethical dilemmas of setting goals in therapy. The main questions that it aims to answer are: who is to set the goals for therapy and who is to decide when they have been reached? The study is based on four semi-structured, phenomenological interviews...
Barasz, Kate; John, Leslie K; Keenan, Elizabeth A; Norton, Michael I
2017-10-01
Pseudo-set framing, arbitrarily grouping items or tasks together as part of an apparent "set," motivates people to reach perceived completion points. Pseudo-set framing changes gambling choices (Study 1), effort (Studies 2 and 3), giving behavior (Field Data and Study 4), and purchase decisions (Study 5). These effects persist in the absence of any reward, when a cost must be incurred, and after participants are explicitly informed of the arbitrariness of the set. Drawing on Gestalt psychology, we develop a conceptual account that predicts what will, and will not, act as a pseudo-set, and defines the psychological process through which these pseudo-sets affect behavior: over and above typical reference points, pseudo-set framing alters perceptions of (in)completeness, making intermediate progress seem less complete. In turn, these feelings of incompleteness motivate people to persist until the pseudo-set has been fulfilled. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Moschovakis, YN
1987-01-01
Now available in paperback, this monograph is a self-contained exposition of the main results and methods of descriptive set theory. It develops all the necessary background material from logic and recursion theory, and treats both classical descriptive set theory and the effective theory developed by logicians.
Directory of Open Access Journals (Sweden)
Shawkat Alkhazaleh
2011-01-01
Full Text Available We introduce the concept of possibility fuzzy soft set, define its operations, and study some of their properties. We give applications of this theory in solving a decision-making problem. We also introduce a similarity measure of two possibility fuzzy soft sets and discuss its application in a medical diagnosis problem.
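For a feel of how such a similarity measure supports diagnosis, here is a generic fuzzy similarity in Python; it is a common textbook-style measure, not necessarily the one defined in the paper, and the membership values are invented.

```python
def similarity(f, g):
    """1 minus the normalised absolute difference of two membership maps."""
    num = sum(abs(f[x] - g[x]) for x in f)
    den = sum(f[x] + g[x] for x in f) or 1.0   # guard against all-zero maps
    return 1.0 - num / den

# Invented symptom memberships for a patient and a disease profile.
patient = {"fever": 0.8, "cough": 0.6, "fatigue": 0.3}
disease = {"fever": 0.9, "cough": 0.5, "fatigue": 0.4}
print(similarity(patient, disease))            # near 1.0 suggests a match
```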
Archaeological predictive model set.
2015-03-01
This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...
Leemans, I.B.; Broomhall, Susan
2017-01-01
Digital emotion research has yet to make history. Until now, large data set mining has not been a very active field of research in early modern emotion studies. This is indeed surprising since, first, the early modern field has such rich, copyright-free, digitized data sets and, second, emotion studies...
Stroud, Wesley
2018-01-01
All educators want their classrooms to be inviting areas that support investigations. However, a common mistake is to fill learning spaces with items or objects that are set up by the teacher or are simply "for show." This type of setting, although it may create a comfortable space for students, fails to stimulate investigations and…
Quantum computers and quantum computations
International Nuclear Information System (INIS)
Valiev, Kamil' A
2005-01-01
This review outlines the principles of operation of quantum computers and their elements. The theory of ideal computers that do not interact with the environment and are immune to quantum decohering processes is presented. Decohering processes in quantum computers are investigated. The review considers methods for correcting quantum computing errors arising from the decoherence of the state of the quantum computer, as well as possible methods for the suppression of the decohering processes. A brief enumeration of proposed quantum computer realizations concludes the review. (reviews of topical problems)
Quantum Computing for Computer Architects
Metodi, Tzvetan
2011-01-01
Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore
Silvis-Cividjian, N.
This book provides a concise introduction to Pervasive Computing, otherwise known as Internet of Things (IoT) and Ubiquitous Computing (Ubicomp) which addresses the seamless integration of computing systems within everyday objects. By introducing the core topics and exploring assistive pervasive
Wechsler, Harry
1990-01-01
The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.
2003-12-01
Computation and today's microprocessors with the approach to operating system architecture, and the controversy between microkernels and monolithic kernels... Both Spatial Computation and microkernels break a relatively monolithic architecture into individual lightweight pieces, well specialized... for their particular functionality. Spatial Computation removes global signals and control, in the same way microkernels remove the global address
Economic communication model set
Zvereva, Olga M.; Berg, Dmitry B.
2017-06-01
This paper details findings from research work on the investigation of economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model, being based on the general concept, has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origin were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and the real set was constructed on the basis of statistical data. During the simulation experiments, the communication process was observed in dynamics and system macroparameters were estimated. This research confirmed that the combination of an agent-based and a mathematical model can produce a synergetic effect.
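A minimal agent-based sketch of such an exchange economy might look as follows (our own illustration in Python; the agents, endowments and unit price are invented, and the paper's model set is considerably richer):

```python
import random

random.seed(42)

class Agent:
    def __init__(self, name, money, goods):
        self.name, self.money, self.goods = name, money, goods

def step(agents, price=1.0):
    """One communication round: a random buyer purchases one unit of goods
    from a random seller, paying with the internal currency."""
    buyer, seller = random.sample(agents, 2)
    if buyer.money >= price and seller.goods > 0:
        buyer.money -= price; seller.money += price
        buyer.goods += 1;     seller.goods -= 1

agents = [Agent(f"a{i}", money=10.0, goods=5) for i in range(4)]
for _ in range(1000):
    step(agents)
# Macro-parameters observed in dynamics: total money is conserved,
# while goods redistribute across agents.
print(sum(a.money for a in agents), [a.goods for a in agents])
```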
Molchanov, Ilya
2017-01-01
This monograph, now in a thoroughly revised second edition, offers the latest research on random sets. It has been extended to include substantial developments achieved since 2005, some of them motivated by applications of random sets to econometrics and finance. The present volume builds on the foundations laid by Matheron and others, including the vast advances in stochastic geometry, probability theory, set-valued analysis, and statistical inference. It shows the various interdisciplinary relationships of random set theory within other parts of mathematics, and at the same time fixes terminology and notation that often vary in the literature, establishing it as a natural part of modern probability theory and providing a platform for future development. It is completely self-contained, systematic and exhaustive, with the full proofs that are necessary to gain insight. Aimed at research level, Theory of Random Sets will be an invaluable reference for probabilists; mathematicians working in convex and integ...
Indian Academy of Sciences (India)
Know Your Personal Computer 5. The CPU Base Instruction Set and Assembly Language Programming. Siddhartha Kumar Ghoshal. Supercomputer Education and Research Centre, Indian Institute of Science, Bangalore 560 012, India ...
1982-01-01
Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn
CERN. Geneva
2008-01-01
What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...
International Nuclear Information System (INIS)
Deutsch, D.
1992-01-01
As computers become ever more complex, they inevitably become smaller. This leads to a need for components which are fabricated and operate on increasingly smaller size scales. Quantum theory is already taken into account in microelectronics design. This article explores how quantum theory will need to be incorporated into computers in future in order to give their components functionality. Computation tasks which depend on quantum effects will become possible. Physicists may have to reconsider their perspective on computation in the light of understanding developed in connection with universal quantum computers. (UK)
Computer Technology for Industry
1979-01-01
In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already developed computer programs which have secondary utility. A computer program is essentially a set of instructions which tells the computer how to produce desired information or effect by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC) (registered trademark), located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy and other technology-generating agencies of the government. The Center gets a continual flow of software packages, screens them for adaptability to private sector usage, stores them and informs potential customers of their availability.
Applications of interval computations
Kreinovich, Vladik
1996-01-01
Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...
Rosenthal, L E
1986-10-01
Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.
Smith, Paul H.
1988-01-01
The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.
Levy, Azriel
2002-01-01
An advanced-level treatment of the basics of set theory, this text offers students a firm foundation, stopping just short of the areas employing model-theoretic methods. Geared toward upper-level undergraduate and graduate students, it consists of two parts: the first covers pure set theory, including the basic notions, order and well-foundedness, cardinal numbers, the ordinals, and the axiom of choice and some of its consequences; the second deals with applications and advanced topics such as point set topology, real spaces, Boolean algebras, and infinite combinatorics and large cardinals. An...
Milewski, Emil G
2012-01-01
REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Set Theory includes elementary logic, sets, relations, functions, denumerable and non-denumerable sets, cardinal numbers, Cantor's theorem, axiom of choice, and order relations.
Bioinformatics and Computational Core Technology Center
Federal Laboratory Consortium — SERVICES PROVIDED BY THE COMPUTER CORE FACILITY: Evaluation, purchase, set up, and maintenance of the computer hardware and network for the 170 users in the research...
International Nuclear Information System (INIS)
Balashov, V.K.
1991-01-01
The structure of the software for computer graphics at VAX JINR is described. It consists of the graphics packages GKS and WAND, and a set of graphics packages for High Energy Physics applications designed at CERN. 17 refs.; 1 tab
Computing fundamentals digital literacy edition
Wempen, Faithe
2014-01-01
Computing Fundamentals has been tailor-made to help you get up to speed on your computing basics and become proficient in entry-level computing skills. Covering all the key topics, it starts at the beginning and takes you through basic set-up so that you'll be competent on a computer in no time. You'll cover: Computer Basics & Hardware; Software; Introduction to Windows 7; Microsoft Office; Word processing with Microsoft Word 2010; Creating Spreadsheets with Microsoft Excel; Creating Presentation Graphics with PowerPoint; Connectivity and Communication; Web Basics; Network and Internet Privacy and Securit...
Existence of Lebesgue Immeasurable Sets
Directory of Open Access Journals (Sweden)
Diana Marginean Petrovai
2012-12-01
Full Text Available It is well known that the notions of measure and integral were developed early enough, in close connection with practical problems of measuring geometric figures. The notion of measure was outlined in the early 20th century through the research of H. Lebesgue, founder of the modern theory of measure and integral. A technique of integration of functions was developed concurrently. Gradually a specific area was formed, today called measure and integral theory. Essential contributions to building this theory were made by a large number of mathematicians: C. Carathéodory, J. Radon, O. Nikodym, S. Bochner, J. Pettis, P. Halmos and many others. In the following we present several abstract sets and classes of sets. There exist sets which are not Lebesgue measurable and sets which are Lebesgue measurable but not Borel measurable. Hence B ⊂ L ⊂ P(X).
DEFF Research Database (Denmark)
Thude, Bettina Ravnborg; Stenager, Egon; von Plessen, Christian
2018-01-01
Findings: The study found that the leadership set-up did not have any clear influence on interdisciplinary cooperation, as all wards had a high degree of interdisciplinary cooperation independent of which leadership set-up they had. Instead, the authors found a relation between leadership set-up and leader... could influence legitimacy. Originality/value: The study shows that leadership set-up is not the predominant factor that creates interdisciplinary cooperation; rather, leader legitimacy should also be considered. Additionally, the study shows that leader legitimacy can be difficult to establish and that it cannot be taken for granted. This is something chief executive officers should bear in mind when they plan and implement new leadership structures. Therefore, it would also be useful to look more closely at how to achieve legitimacy in cases where the leader is from a different profession to the staff.
General Paleoclimatology Data Sets
National Oceanic and Atmospheric Administration, Department of Commerce — Data of past climate and environment derived from unusual proxy evidence. Parameter keywords describe what was measured in this data set. Additional summary...
U.S. Department of Health & Human Services — The Healthcare Effectiveness Data and Information Set (HEDIS) is a tool used by more than 90 percent of America's health plans to measure performance on important...
Architecture of 32 bit CISC (Complex Instruction Set Computer) microprocessors
International Nuclear Information System (INIS)
Jove, T.M.; Ayguade, E.; Valero, M.
1988-01-01
In this paper we describe the main topics of the architecture of the best known 32-bit CISC microprocessors: the i80386, the MC68000 family, the NS32000 series and the Z80000. We focus on high-level language support, operating system design facilities, memory management, techniques to speed up overall performance, and program debugging facilities. (Author)
On the Computation of Finite Invariant Sets of Mappings.
1988-02-01
for the calculation of such invariant cycles. We refer here only to Doedel [1], Iooss et al [3], Kevrekidis et al [4], Van Veldhuizen [6], where further... van Veldhuizen, On Polygonal Approximations of an Invariant Curve, Dept. of Mathem. and Comp. Science, Vrije Universiteit Amsterdam, Technical Report 1987, Math. Comp. to appear ... of van der Pol's equation x'' - λ(1 - x²) x' + x = 0 (16). As shown, for example in [2], the solution satisfies x ≈ 2 cos(ωt) + λ(0.75 sin(ωt)...
On Time with Minimal Expected Cost!
DEFF Research Database (Denmark)
David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand
2014-01-01
(Priced) timed games are two-player quantitative games involving an environment assumed to be completely antagonistic. Classical analysis consists in the synthesis of strategies ensuring safety, time-bounded or cost-bounded reachability objectives. Assuming a randomized environment, the (priced) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing reachability strategies that will both ensure worst-case time-bounds as well as provide (near-) minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, and exhibits several orders of magnitude improvements w...
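The expected-cost half of the objective can be sketched with plain value iteration over a small finite MDP (a drastic simplification of a priced timed game, with made-up states, action costs and probabilities; Uppaal-Tiga's synthesis and the worst-case time-bound guarantee are far more involved):

```python
# Each state maps actions to (cost, [(successor, probability), ...]).
MDP = {
    "s0": {"fast": (3.0, [("goal", 0.6), ("s1", 0.4)]),
           "slow": (1.0, [("s1", 1.0)])},
    "s1": {"go":   (2.0, [("goal", 0.9), ("s0", 0.1)])},
}

def expected_cost(mdp, sweeps=100):
    """Value iteration for minimal expected cost-to-goal (goal costs 0)."""
    v = {s: 0.0 for s in mdp}
    for _ in range(sweeps):
        for s, acts in mdp.items():
            v[s] = min(c + sum(p * v.get(t, 0.0) for t, p in succ)
                       for c, succ in acts.values())
    return v

print(expected_cost(MDP))   # converges to about {'s0': 3.33, 's1': 2.33}
```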
Activity-Driven Computing Infrastructure - Pervasive Computing in Healthcare
DEFF Research Database (Denmark)
Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Olesen, Anders Konring
In many work settings, and especially in healthcare, work is distributed among many cooperating actors, who are constantly moving around and are frequently interrupted. In line with other researchers, we use the term pervasive computing to describe a computing infrastructure that supports work...
Frontiers of higher order fuzzy sets
Tahayori, Hooman
2015-01-01
Frontiers of Higher Order Fuzzy Sets strives to improve the theoretical aspects of general and Interval Type-2 fuzzy sets and provides a unified representation theorem for higher order fuzzy sets. Moreover, the book elaborates on the concept of gradual elements and their integration with the higher order fuzzy sets. This book also introduces new frameworks for information granulation based on general T2FSs, IT2FSs, gradual elements, shadowed sets and rough sets. In particular, the properties and characteristics of the new proposed frameworks are studied. Such new frameworks are shown to be better suited to real applications. Higher order fuzzy sets that result from the integration of general T2FSs, IT2FSs, gradual elements, shadowed sets and rough sets will be shown to be suitable for application in the fields of bioinformatics, business, management, ambient intelligence, medicine, cloud computing and smart grids. Presents new variations of fuzzy set frameworks and new areas of applicabili...
Setting conservation priorities.
Wilson, Kerrie A; Carwardine, Josie; Possingham, Hugh P
2009-04-01
A generic framework for setting conservation priorities based on the principles of classic decision theory is provided. This framework encapsulates the key elements of any problem, including the objective, the constraints, and knowledge of the system. Within the context of this framework the broad array of approaches for setting conservation priorities are reviewed. While some approaches prioritize assets or locations for conservation investment, it is concluded here that prioritization is incomplete without consideration of the conservation actions required to conserve the assets at particular locations. The challenges associated with prioritizing investments through time in the face of threats (and also spatially and temporally heterogeneous costs) can be aided by proper problem definition. Using the authors' general framework for setting conservation priorities, multiple criteria can be rationally integrated and where, how, and when to invest conservation resources can be scheduled. Trade-offs are unavoidable in priority setting when there are multiple considerations, and budgets are almost always finite. The authors discuss how trade-offs, risks, uncertainty, feedbacks, and learning can be explicitly evaluated within their generic framework for setting conservation priorities. Finally, they suggest ways that current priority-setting approaches may be improved.
Computational Chemistry Comparison and Benchmark Database
SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access) The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.
Computer programming and computer systems
Hassitt, Anthony
1966-01-01
Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten...
Workshop on Computational Optimization
2016-01-01
This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2014, held in Warsaw, Poland, September 7-10, 2014. The book presents recent advances in computational optimization. The volume includes important real problems like parameter settings for controlling processes in bioreactors and other processes, resource-constrained project scheduling, infection distribution, molecule distance geometry, quantum computing, real-time management and optimal control, bin packing, medical image processing, localization of the abrupt atmospheric contamination source, and so on. It shows how to develop algorithms for them based on new metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming and others. This research demonstrates how some real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks.
Reliable computation from contextual correlations
Oestereich, André L.; Galvão, Ernesto F.
2017-12-01
An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo two. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1/2. We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.
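A small numerical check of the CHSH claim (our sketch of the setting, not the paper's proof): with the optimal quantum strategy, two boxes satisfy a XOR b = x·y with probability cos²(π/8) ≈ 0.854 on every input pair, so a parity-limited controller evaluates the nonlinear function AND(x, y) with success probability bounded away from 1/2, indeed above the classical bound of 3/4.

```python
import math

def chsh_success(x, y):
    """P[a XOR b == x*y] under the optimal quantum CHSH strategy
    (Tsirelson bound); the same value for every input pair (x, y)."""
    return math.cos(math.pi / 8) ** 2

# The controller feeds (x, y) to the boxes and outputs a XOR b as AND(x, y).
p = min(chsh_success(x, y) for x in (0, 1) for y in (0, 1))
print(f"AND evaluated with success probability {p:.4f} (> 0.75)")
```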
Design of the RISC-V Instruction Set Architecture
Waterman, Andrew Shell
2016-01-01
The hardware-software interface, embodied in the instruction set architecture (ISA), is arguably the most important interface in a computer system. Yet, in contrast to nearly all other interfaces in a modern computer system, all commercially popular ISAs are proprietary. A free and open ISA standard has the potential to increase innovation in microprocessor design, reduce computer system cost, and, as Moore’s law wanes, ease the transition to more specialized computational devices.In this d...
Würtz, Rolf P
2008-01-01
Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease in which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.
Continuation of Sets of Constrained Orbit Segments
DEFF Research Database (Denmark)
Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki
Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory...... that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained...... orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem....
International Nuclear Information System (INIS)
Ethier, C.R.
2004-01-01
Computational biomechanics is a fast-growing field that integrates modern biological techniques and computer modelling to solve problems of medical and biological interest. Modelling of blood flow in the large arteries is the best-known application of computational biomechanics, but there are many others. Described here is work being carried out in the laboratory on the modelling of blood flow in the coronary arteries and on the transport of viral particles in the eye. (author)
Computers and clinical arrhythmias.
Knoebel, S B; Lovelace, D E
1983-02-01
Cardiac arrhythmias are ubiquitous in normal and abnormal hearts. These disorders may be life-threatening or benign, symptomatic or unrecognized. Arrhythmias may be the precursor of sudden death, a cause or effect of cardiac failure, a clinical reflection of acute or chronic disorders, or a manifestation of extracardiac conditions. Progress is being made toward unraveling the diagnostic and therapeutic problems involved in arrhythmogenesis. Many of the advances would not be possible, however, without the availability of computer technology. To preserve the proper balance and purposeful progression of computer usage, engineers and physicians have been exhorted not to work independently in this field. Both should learn some of the other's trade. The two disciplines need to come together to solve important problems with computers in cardiology. The intent of this article was to acquaint the practicing cardiologist with some of the extant and envisioned computer applications and some of the problems with both. We conclude that computer-based database management systems are necessary for sorting out the clinical factors of relevance for arrhythmogenesis, but computer database management systems are beset with problems that will require sophisticated solutions. The technology for detecting arrhythmias on routine electrocardiograms is quite good but human over-reading is still required, and the rationale for computer application in this setting is questionable. Systems for qualitative, continuous monitoring and review of extended time ECG recordings are adequate with proper noise rejection algorithms and editing capabilities. The systems are limited presently for clinical application to the recognition of ectopic rhythms and significant pauses. Attention should now be turned to the clinical goals for detection and quantification of arrhythmias. We should be asking the following questions: How quantitative do systems need to be? Are computers required for the detection of
Rough sets selected methods and applications in management and engineering
Peters, Georg; Ślęzak, Dominik; Yao, Yiyu
2012-01-01
Introduced in the early 1980s, Rough Set Theory has become an important part of soft computing in the last 25 years. This book provides a practical, context-based analysis of rough set theory, with each chapter exploring a real-world application of Rough Sets.
Polyomino Problems to Confuse Computers
Coffin, Stewart
2009-01-01
Computers are very good at solving certain types of combinatorial problems, such as fitting sets of polyomino pieces into square or rectangular trays of a given size. However, most puzzle-solving programs now in use assume orthogonal arrangements. When one departs from the usual square grid layout, complications arise. The author, using a computer,…
Computed laminography and reconstruction algorithm
International Nuclear Information System (INIS)
Que Jiemin; Cao Daquan; Zhao Wei; Tang Xiao
2012-01-01
Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with variant weighting functions by computer simulation with a digital phantom. This proves that the ART algorithm is a good choice for the CL system. (authors)
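The ART update studied in the paper can be illustrated generically with a Kaczmarz-style row sweep and a relaxation weight (the tiny linear system below merely stands in for the CL projection geometry, which we do not reproduce):

```python
import numpy as np

def art(A, b, n_iter=50, lam=0.5):
    """Row-action ART: sweep the rows of A x = b with relaxation weight lam."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):            # one projection row at a time
            ai = A[i]
            x += lam * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Made-up system standing in for (detector readings, voxel image).
A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
print(art(A, A @ x_true))                      # approaches [1, 2, 3]
```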
Computational Design of Urban Layouts
Wonka, Peter
2015-10-07
A fundamental challenge in computational design is to compute layouts by arranging a set of shapes. In this talk I will present recent urban modeling projects with applications in computer graphics, urban planning, and architecture. The talk will look at different scales of urban modeling (streets, floorplans, parcels). A common challenge in all these modeling problems are functional and aesthetic constraints that should be respected. The talk also highlights interesting links to geometry processing problems, such as field design and quad meshing.
DEFF Research Database (Denmark)
Vallgårda, Anna K. A.
to understand the computer as a material like any other material we would use for design, like wood, aluminum, or plastic. That as soon as the computer forms a composition with other materials it becomes just as approachable and inspiring as other smart materials. I present a series of investigations of what...... Computational Composite, and Telltale). Through the investigations, I show how the computer can be understood as a material and how it partakes in a new strand of materials whose expressions come to be in context. I uncover some of their essential material properties and potential expressions. I develop a way...
Computing with synthetic protocells.
Courbet, Alexis; Molina, Franck; Amar, Patrick
2015-09-01
In this article we present a new kind of computing device that uses biochemical reaction networks as building blocks to implement logic gates. The architecture of a computing machine relies on these generic and composable building blocks, computation units, that can be used in multiple instances to perform complex boolean functions. Standard logical operations are implemented by biochemical networks, encapsulated and insulated within synthetic vesicles called protocells. These protocells are capable of exchanging energy and information with each other through transmembrane electron transfer. In the paradigm of computation we propose, protoputing, a machine can solve only one problem and therefore has to be built specifically. Thus, the programming phase in the standard computing paradigm is represented in our approach by the set of assembly instructions (specific attachments) that directs the wiring of the protocells that constitute the machine itself. To demonstrate the computing power of protocellular machines, we apply them to solve an NP-complete problem, known to be very demanding in computing power, the 3-SAT problem. We show how to program the assembly of a machine that can verify the satisfiability of a given boolean formula. Then we show how to use the massive parallelism of these machines to verify in less than 20 min all the valuations of the input variables and output a fluorescent signal when the formula is satisfiable or no signal at all otherwise.
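Conventional code can only check the valuations serially; the sketch below (with invented clauses) performs exactly the verification the protocellular machine carries out massively in parallel, returning a satisfying valuation in place of the fluorescent signal.

```python
from itertools import product

# Clauses over variables 1..3; a negative integer is a negated literal.
CLAUSES = [(1, 2, -3), (-1, 2, 3), (1, -2, 3)]

def satisfiable(clauses, n_vars=3):
    """Exhaustively test every valuation of a 3-SAT formula."""
    for bits in product([False, True], repeat=n_vars):
        value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(any(value(l) for l in clause) for clause in clauses):
            return bits        # 'fluorescent signal': a satisfying valuation
    return None                # no signal: unsatisfiable

print(satisfiable(CLAUSES))
```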
Revitalizing the setting approach
DEFF Research Database (Denmark)
Bloch, Paul; Toft, Ulla; Reinbach, Helene Christine
2014-01-01
Background: The concept of health promotion rests on aspirations aiming at enabling people to increase control over and improve their health. Health promotion action is facilitated in settings such as schools, homes and workplaces. As a contribution to the promotion of healthy lifestyles, we have further developed the setting approach in an effort to harmonise it with contemporary realities (and complexities) of health promotion and public health action. The paper introduces a modified concept, the supersetting approach, which builds on the optimised use of diverse and valuable resources embedded in local community settings and on the strengths of social interaction and local ownership as drivers of change processes. Interventions based on a supersetting approach are first and foremost characterised by being integrated, but also participatory, empowering, context-sensitive and knowledge-based. The supersetting approach is based on ecological and whole-systems thinking, and stipulates important principles and values of integration, participation, empowerment, context and knowledge-based development.
Stoll, Robert R
1979-01-01
Set Theory and Logic is the result of a course of lectures for advanced undergraduates, developed at Oberlin College for the purpose of introducing students to the conceptual foundations of mathematics. Mathematics, specifically the real number system, is approached as a unity whose operations can be logically ordered through axioms. One of the most complex and essential of modern mathematical innovations, the theory of sets (crucial to quantum mechanics and other sciences), is introduced in a most careful manner, aiming for the maximum in clarity and stimulation for further study in...
Nonmeasurable sets and functions
Kharazishvili, Alexander
2004-01-01
The book is devoted to various constructions of sets which are nonmeasurable with respect to invariant (more generally, quasi-invariant) measures. Our starting point is the classical Vitali theorem stating the existence of subsets of the real line which are not measurable in the Lebesgue sense. This theorem stimulated the development of the following interesting topics in mathematics: 1. Paradoxical decompositions of sets in finite-dimensional Euclidean spaces; 2. The theory of non-real-valued-measurable cardinals; 3. The theory of invariant (quasi-invariant) extensions of invariant (quasi-invaria...
Directory of Open Access Journals (Sweden)
Décio Krause
2002-11-01
Full Text Available Quasi-set theory was developed to deal with collections of indistinguishable objects. In standard mathematics, there are no such entities, for indistinguishability (agreement with respect to all properties) entails numerical identity. The main motivation underlying such a theory is of course quantum physics, for collections of indistinguishable ('identical' in the physicists' jargon) particles cannot be regarded as 'sets' of standard set theories, which are collections of distinguishable objects. In this paper, a rationale for the development of such a theory is presented, motivated by Heinz Post's claim that indistinguishability of quantum entities should be attributed 'right at the start'.
Anderson, Ian
2011-01-01
Coherent treatment provides a comprehensive view of basic methods and results of the combinatorial study of finite set systems. The Clements-Lindstrom extension of the Kruskal-Katona theorem to multisets is explored, as is the Greene-Kleitman result concerning k-saturated chain partitions of general partially ordered sets. Connections with Dilworth's theorem, the marriage problem, and probability are also discussed. Each chapter ends with a helpful series of exercises and outline solutions appear at the end. "An excellent text for a topics course in discrete mathematics." - Bulletin of the Ame...
Locating hardware faults in a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-04-13
Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
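A sketch of the level-definition step under assumed parameters (a complete binary tree with two tiers per test level); the uplink and downlink tests themselves are omitted:

```python
def tiers(depth):
    """Node ids of a complete binary tree (root id 1), grouped by tier."""
    return [list(range(2**d, 2**(d + 1))) for d in range(depth)]

def test_levels(depth):
    """Two sets of non-overlapping test levels, each level spanning two
    adjacent tiers, so that together they cover every parent-child link."""
    t = tiers(depth)
    set1 = [t[i] + t[i + 1] for i in range(0, depth - 1, 2)]
    set2 = [t[i] + t[i + 1] for i in range(1, depth - 1, 2)]
    return set1, set2

for s in test_levels(4):
    print(s)   # together the two sets cover all links of the 15-node tree
```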
Directory of Open Access Journals (Sweden)
BOGDAN OANCEA
2012-05-01
Full Text Available Since the first idea of using GPUs for general purpose computing, things have evolved over the years and now there are several approaches to GPU programming. GPU computing practically began with the introduction of CUDA (Compute Unified Device Architecture) by NVIDIA and Stream by AMD. These are APIs designed by the GPU vendors to be used together with the hardware that they provide. A new emerging standard, OpenCL (Open Computing Language), tries to unify different GPU general computing API implementations and provides a framework for writing programs executed across heterogeneous platforms consisting of both CPUs and GPUs. OpenCL provides parallel computing using task-based and data-based parallelism. In this paper we will focus on the CUDA parallel computing architecture and programming model introduced by NVIDIA. We will present the benefits of the CUDA programming model. We will also compare the two main approaches, CUDA and AMD APP (Stream), and the new framework, OpenCL, that tries to unify the GPGPU computing models.
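To show the decomposition CUDA is built around, here is the familiar vector-add kernel mimicked in plain Python (illustrative only: in real CUDA this would be a __global__ kernel launched as kernel<<<blocks, threads>>>, with the GPU running the threads concurrently):

```python
def vector_add_kernel(block_idx, thread_idx, block_dim, a, b, out):
    i = block_idx * block_dim + thread_idx     # global thread index
    if i < len(a):                             # guard against overrun
        out[i] = a[i] + b[i]

n, block_dim = 10, 4
blocks = (n + block_dim - 1) // block_dim      # ceil-divide, as in CUDA
a, b, out = list(range(n)), list(range(n)), [0] * n
for bx in range(blocks):                       # serial stand-in for the grid
    for tx in range(block_dim):
        vector_add_kernel(bx, tx, block_dim, a, b, out)
print(out)                                     # [0, 2, 4, ..., 18]
```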
Indian Academy of Sciences (India)
Quantum Computing - Building Blocks of a Quantum Computer. C S Vijay, Vishal Gupta. General Article, Resonance – Journal of Science Education, Volume 5, Issue 9, September 2000, pp 69-81.
2002-01-01
"Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .
Indian Academy of Sciences (India)
In the first part of this article, we looked at how quantum physics can be harnessed to make the building blocks of a quantum computer. In this concluding part, we look at algorithms which can exploit the power of this computational device, and at some practical difficulties in building such a device. Quantum Algorithms.
Burba, M.; Lapitskaya, T.
2017-01-01
This article gives an elementary introduction to quantum computing. It is a draft for a book chapter of the "Handbook of Nature-Inspired and Innovative Computing", Eds. A. Zomaya, G.J. Milburn, J. Dongarra, D. Bader, R. Brent, M. Eshaghian-Wilner, F. Seredynski (Springer, Berlin Heidelberg New York, 2006).
Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.
2016-01-01
Context: We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective: To define the scope and needs of computational pathology. Data Sources: A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions: The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131
DEFF Research Database (Denmark)
Krogh, Simon
2013-01-01
with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...... the IT scene. In line with the views presented by Nicolas Carr in 2003 (Carr, 2003), it is a popular assumption that cloud computing will be the next utility (like water, electricity and gas) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009). However, this assumption disregards the fact that most IT production......), for instance, in establishing and maintaining trust between the involved parties (Sabherwal, 1999). So far, research in cloud computing has neglected this perspective and focused entirely on aspects relating to technology, economy, security and legal questions. While the core technologies of cloud computing (e...
R.P. Faber (Riemer)
2010-01-01
This thesis studies price data and tries to unravel the underlying economic processes of why firms have chosen these prices. It focuses on three aspects of price setting. First, it studies whether the existence of a suggested price has a coordinating effect on the prices of firms.
Cobham recursive set functions
Czech Academy of Sciences Publication Activity Database
Beckmann, A.; Buss, S.; Friedman, S.-D.; Müller, M.; Thapen, Neil
2016-01-01
Roč. 167, č. 3 (2016), s. 335-369 ISSN 0168-0072 R&D Projects: GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : set function * polynomial time * Cobham recursion Subject RIV: BA - General Mathematics Impact factor: 0.647, year: 2016 http://www.sciencedirect.com/science/article/pii/S0168007215001293
Marietta Schupp, EMBL Photolab
2008-01-01
Dr Sabine Hentze, specialist in human genetics, giving an Insight Lecture entitled "Human Genetics – Diagnostics, Indications and Ethical Issues" on 23 September 2008 at EMBL Heidelberg. Activities in a school in Budapest during a visit of Angela Bekesi, Ambassador for the SET-Routes programme.
Greenslade, Thomas B., Jr.
2014-01-01
In past issues of this journal, the late H. R. Crane wrote a long series of articles under the running title of "How Things Work." In them, Dick dealt with many questions that physics teachers asked themselves, but did not have the time to answer. This article is my attempt to work through the physics of the crystal set, which I thought…
DEFF Research Database (Denmark)
Jensen, Rune Møller; Veloso, Manuela M.; Bryant, Randal E.
2008-01-01
In this article, we present a framework called state-set branching that combines symbolic search based on reduced ordered Binary Decision Diagrams (BDDs) with best-first search, such as A* and greedy best-first search. The framework relies on an extension of these algorithms from expanding a sing...
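Stripped of BDDs, the idea can be sketched with explicit sets of states (our own toy transition relation and heuristic; the article's contribution lies in doing this symbolically and in partitioning successor sets by f-value, which this simplified single-set variant glosses over):

```python
import heapq

SUCC = {0: [1, 2], 1: [3], 2: [3], 3: []}      # made-up transition system
GOAL = {3}
H = {0: 2, 1: 1, 2: 1, 3: 0}                   # admissible heuristic

def state_set_search(init):
    """Best-first search expanding a whole set of states per step."""
    frontier = [(min(H[s] for s in init), 0, frozenset(init))]
    seen = set()
    while frontier:
        f, g, states = heapq.heappop(frontier)
        if states & GOAL:
            return g                            # cost of reaching the goal
        seen |= states
        image = frozenset(t for s in states for t in SUCC[s]) - seen
        if image:                               # expand the set in one step
            heapq.heappush(frontier,
                           (g + 1 + min(H[s] for s in image), g + 1, image))
    return None

print(state_set_search({0}))                    # -> 2
```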
Therapists in Oncology Settings
Hendrick, Susan S.
2013-01-01
This article describes the author's experiences of working with cancer patients/survivors both individually and in support groups for many years, across several settings. It also documents current best-practice guidelines for the psychosocial treatment of cancer patients/survivors and their families. The author's view of the important qualities…
Directory of Open Access Journals (Sweden)
Paul M. Torrens
2016-09-01
Full Text Available Streetscapes have presented a long-standing interest in many fields. Recently, there has been a resurgence of attention on streetscape issues, catalyzed in large part by computing. Because of computing, there is more understanding, vistas, data, and analysis of and on streetscape phenomena than ever before. This diversity of lenses trained on streetscapes permits us to address long-standing questions, such as how people use information while mobile, how interactions with people and things occur on streets, how we might safeguard crowds, how we can design services to assist pedestrians, and how we could better support special populations as they traverse cities. Amid each of these avenues of inquiry, computing is facilitating new ways of posing these questions, particularly by expanding the scope of what-if exploration that is possible. With assistance from computing, consideration of streetscapes now reaches across scales, from the neurological interactions that form among place cells in the brain up to informatics that afford real-time views of activity over whole urban spaces. For some streetscape phenomena, computing allows us to build realistic but synthetic facsimiles in computation, which can function as artificial laboratories for testing ideas. In this paper, I review the domain science for studying streetscapes from vantages in physics, urban studies, animation and the visual arts, psychology, biology, and behavioral geography. I also review the computational developments shaping streetscape science, with particular emphasis on modeling and simulation as informed by data acquisition and generation, data models, path-planning heuristics, artificial intelligence for navigation and way-finding, timing, synthetic vision, steering routines, kinematics, and geometrical treatment of collision detection and avoidance. I also discuss the implications that the advances in computing streetscapes might have on emerging developments in cyber
Computing in high energy physics
Energy Technology Data Exchange (ETDEWEB)
Smith, Sarah; Devenish, Robin [Nuclear Physics Laboratory, Oxford University (United Kingdom)
1989-07-15
Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'.
Computing in high energy physics
International Nuclear Information System (INIS)
Smith, Sarah; Devenish, Robin
1989-01-01
Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'
Kenwright, David
2000-01-01
Aerospace data analysis tools that significantly reduce the time and effort needed to analyze large-scale computational fluid dynamics simulations have emerged this year. The current approach for most postprocessing and visualization work is to explore the 3D flow simulations with one of a dozen or so interactive tools. While effective for analyzing small data sets, this approach becomes extremely time consuming when working with data sets larger than one gigabyte. An active area of research this year has been the development of data mining tools that automatically search through gigabyte data sets and extract the salient features with little or no human intervention. With these so-called feature extraction tools, engineers are spared the tedious task of manually exploring huge amounts of data to find the important flow phenomena. The software tools identify features such as vortex cores, shocks, separation and attachment lines, recirculation bubbles, and boundary layers. Some of these features can be extracted in a few seconds; others take minutes to hours on extremely large data sets. The analysis can be performed off-line in a batch process, either during or following the supercomputer simulations. These computations have to be performed only once, because the feature extraction programs search the entire data set and find every occurrence of the phenomena being sought. Because the important questions about the data are being answered automatically, interactivity is less critical than it is with traditional approaches.
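For a flavour of what such batch feature extractors do, here is a toy vorticity-threshold detector in Python (real vortex-core extraction uses more robust criteria, e.g. velocity-gradient eigenanalysis; the swirling field below is synthetic):

```python
import numpy as np

def vorticity(u, v, dx):
    """2-D vorticity dv/dx - du/dy on a uniform grid."""
    return np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)

# Made-up swirling velocity field centred in the grid.
y, x = np.mgrid[-5:5:64j, -5:5:64j]
r2 = x**2 + y**2
u, v = -y * np.exp(-r2 / 4), x * np.exp(-r2 / 4)

w = vorticity(u, v, dx=10 / 63)
cores = np.argwhere(np.abs(w) > 0.8 * np.abs(w).max())
print(f"{len(cores)} candidate core cells near the grid centre")
```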
Dixey, Graham
1994-01-01
This book explains how computers interact with the world around them and therefore how to make them a useful tool. Topics covered include descriptions of all the components that make up a computer, principles of data exchange, interaction with peripherals, serial communication, input devices, recording methods, computer-controlled motors, and printers. In an informative and straightforward manner, Graham Dixey describes how to turn what might seem an incomprehensible 'black box' PC into a powerful and enjoyable tool that can help you in all areas of your work and leisure. With plenty of handy …
Newman, Mark
2013-01-01
A complete introduction to the field of computational physics, with examples and exercises in the Python programming language. Computers play a central role in virtually every major physics discovery today, from astrophysics and particle physics to biophysics and condensed matter. This book explains the fundamentals of computational physics and describes in simple terms the techniques that every physicist should know, such as finite difference methods, numerical quadrature, and the fast Fourier transform. The book offers a complete introduction to the topic at the undergraduate level, and is also suitable for the advanced student or researcher who wants to learn the foundational elements of this important field.
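As a taste of the techniques listed, here is trapezoidal-rule quadrature in Python, the book's language (a minimal sketch; the integrand and slice count are illustrative choices, not material quoted from the book):

    import numpy as np

    # Trapezoidal-rule quadrature of f(x) = x**4 - 2*x + 1 on [0, 2].
    # The exact value is 4.4, so the printed error shrinks as O(h**2).
    def f(x):
        return x**4 - 2*x + 1

    N = 1000                              # number of slices (arbitrary choice)
    x = np.linspace(0.0, 2.0, N + 1)
    h = 2.0 / N
    approx = h * (0.5*f(x[0]) + 0.5*f(x[-1]) + f(x[1:-1]).sum())
    print(approx, abs(approx - 4.4))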
Energy Technology Data Exchange (ETDEWEB)
Anon.
1987-01-15
Computers have for many years played a vital role in the acquisition and treatment of experimental data, but they have more recently taken up a much more extended role in physics research. The numerical and algebraic calculations now performed on modern computers make it possible to explore consequences of basic theories in a way which goes beyond the limits of both analytic insight and experimental investigation. This was brought out clearly at the Conference on Perspectives in Computational Physics, held at the International Centre for Theoretical Physics, Trieste, Italy, from 29-31 October.
Baun, Christian; Nimis, Jens; Tai, Stefan
2011-01-01
Cloud computing is a buzz-word in today's information technology (IT) that nobody can escape. But what is really behind it? There are many interpretations of this term, but no standardized or even uniform definition. Instead, as a result of the multi-faceted viewpoints and the diverse interests expressed by the various stakeholders, cloud computing is perceived as a rather fuzzy concept. With this book, the authors deliver an overview of cloud computing architecture, services, and applications. Their aim is to bring readers up to date on this technology and thus to provide a common basis for d…
Marques, Severino P C
2012-01-01
This text is a guide to solving problems in which viscoelasticity is present using existing commercial computational codes. The book gives information on the codes' structure and use, data preparation, and output interpretation and verification. The first part of the book introduces the reader to the subject and provides the models, equations and notation to be used in the computational applications. The second part presents the most important computational techniques, the finite element formulation and the boundary element formulation, and presents solutions of viscoelastic problems with Abaqus.
Stroke, G. W.
1972-01-01
Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.
DEFF Research Database (Denmark)
Brier, Søren
2014-01-01
Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) its basic concept of natural computing has neither been defined theoretically nor implemented practically; (2) it cannot encompass human concepts of subjective experience and intersubjective meaningful communication, which prevents it from being genuinely transdisciplinary; (3) philosophically, it does not sufficiently accept the deep ontological differences between various paradigms such as von Foerster's second-order …
Xu, Zeshui
2014-01-01
This book provides the readers with a thorough and systematic introduction to hesitant fuzzy theory. It presents the most recent research results and advanced methods in the field. These include: hesitant fuzzy aggregation techniques, hesitant fuzzy preference relations, hesitant fuzzy measures, hesitant fuzzy clustering algorithms and hesitant fuzzy multi-attribute decision making methods. Since its introduction by Torra and Narukawa in 2009, hesitant fuzzy sets have become more and more popular and have been used for a wide range of applications, from decision-making problems to cluster analysis, from medical diagnosis to personnel appraisal and information retrieval. This book offers a comprehensive report on the state-of-the-art in hesitant fuzzy sets theory and applications, aiming at becoming a reference guide for both researchers and practitioners in the area of fuzzy mathematics and other applied research fields (e.g. operations research, information science, management science and engineering) chara…
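For orientation, Torra's definition (standard background, not text quoted from the book): a hesitant fuzzy set on a universe X assigns each element a set of possible membership degrees rather than a single value,
\[ E = \{\, \langle x,\, h_E(x) \rangle : x \in X \,\}, \qquad h_E(x) \subseteq [0,1], \]
where $h_E(x)$ collects the membership values among which the decision makers hesitate.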
DEFF Research Database (Denmark)
Flesch, Benjamin; Vatrapu, Ravi; Mukkamala, Raghava Rao
2015-01-01
Current state-of-the-art in big social data analytics is largely limited to graph theoretical approaches such as social network analysis (SNA) informed by the social philosophical approach of relational sociology. This paper proposes and illustrates an alternate holistic approach to big social data … approach to computational social science mentioned above. The development of the dashboard involved cutting-edge open source visual analytics libraries (D3.js) and the creation of new visualizations such as actor mobility across time and space, conversational comets, and more. Evaluation of the dashboard …
Frame scaling function sets and frame wavelet sets in Rd
International Nuclear Information System (INIS)
Liu Zhanwei; Hu Guoen; Wu Guochang
2009-01-01
In this paper, we classify frame wavelet sets and frame scaling function sets in higher dimensions. First, we obtain a necessary condition for a set to be a frame wavelet set. Then, we present a necessary and sufficient condition for a set to be a frame scaling function set. We also give a property of frame scaling function sets. Corresponding examples are given in each section to illustrate the theory.
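For context, the frame condition underlying both notions (the standard definition, not a result of this paper): a family $\{\psi_j\}_{j\in J}$ in a Hilbert space $H$ is a frame if there exist constants $0 < A \le B < \infty$ with
\[ A\,\|f\|^2 \;\le\; \sum_{j\in J} |\langle f, \psi_j \rangle|^2 \;\le\; B\,\|f\|^2 \qquad \text{for all } f \in H; \]
a frame wavelet set is then a set whose associated system of dilates and translates satisfies these inequalities.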
Computer architecture a quantitative approach
Hennessy, John L
2019-01-01
Computer Architecture: A Quantitative Approach, Sixth Edition has been considered essential reading by instructors, students and practitioners of computer design for over 20 years. The sixth edition of this classic textbook is fully revised with the latest developments in processor and system architecture. It now features examples from the RISC-V (RISC Five) instruction set architecture, a modern RISC instruction set developed and designed to be a free and openly adoptable standard. It also includes a new chapter on domain-specific architectures and an updated chapter on warehouse-scale computing that features the first public information on Google's newest WSC. True to its original mission of demystifying computer architecture, this edition continues the longstanding tradition of focusing on areas where the most exciting computing innovation is happening, while always keeping an emphasis on good engineering design.
Quantum computing on encrypted data.
Fisher, K A G; Broadbent, A; Shalm, L K; Yan, Z; Lavoie, J; Prevedel, R; Jennewein, T; Resch, K J
2014-01-01
The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.
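As background, the encryption primitive at work here is the quantum one-time pad: a qubit is masked by applying X^a Z^b for secret key bits (a, b), which renders the state maximally mixed to anyone without the key. A small numpy sketch of that primitive (illustrative only, not the authors' photonic implementation):

    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def encrypt(psi, a, b):
        # Quantum one-time pad: apply X^a Z^b with secret key bits (a, b).
        return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ psi

    psi = np.array([0.6, 0.8], dtype=complex)     # some pure qubit state

    # Without the key, the average ciphertext is the maximally mixed state I/2.
    rho = sum(np.outer(encrypt(psi, a, b), encrypt(psi, a, b).conj())
              for a in (0, 1) for b in (0, 1)) / 4
    print(np.allclose(rho, I / 2))                # True

    # With the key, decryption (Z^b then X^a) recovers the plaintext exactly.
    a, b = np.random.randint(2, size=2)
    recovered = np.linalg.matrix_power(Z, b) @ np.linalg.matrix_power(X, a) @ encrypt(psi, a, b)
    print(np.allclose(recovered, psi))            # True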
AGRIS: Description of computer programs
International Nuclear Information System (INIS)
Schmid, H.; Schallaboeck, G.
1976-01-01
The set of computer programs used at the AGRIS (Agricultural Information System) Input Unit at the IAEA, Vienna, Austria to process the AGRIS computer-readable data is described. The processing flow is illustrated. The configuration of the IAEA's computer, a list of error messages generated by the computer, the EBCDIC code table extended for AGRIS and INIS, the AGRIS-6 bit code, the work sheet format, and job control listings are included as appendixes. The programs are written for an IBM 370, model 145, operating system OS or VS, and require a 130K partition. The programming languages are PL/1 (F-compiler) and Assembler
Computer graphics and research projects
International Nuclear Information System (INIS)
Ingtrakul, P.
1994-01-01
This report surveys scientific visualization and application tools for scientists and engineers. These tools provide a means to create pictures and to interact with them in natural ways. The report covers many techniques of computer graphics and computer animation through a number of full-color presentations, including computer-animated commercials, 3D computer graphics, dynamic and environmental simulations, scientific modelling and visualization, physically based modelling, and behavioral, skeletal, dynamics, and particle animation. It also takes an in-depth look at the hardware, and at the limitations of existing PC graphics adapters that constrain system performance, especially with graphics-intensive application programs and user interfaces.
Chandrasekaran, K
2014-01-01
Foreword. Preface. Computing Paradigms (learning objectives, preamble): high-performance computing; parallel computing; distributed computing; cluster computing; grid computing; cloud computing; biocomputing; mobile computing; quantum computing; optical computing; nanocomputing; network computing; summary, review points, review questions, further reading. Cloud Computing Fundamentals (learning objectives, preamble): motivation for cloud computing; the need for cloud computing; defining cloud computing; NIST definition of cloud computing; cloud computing is a service; cloud computing is a platform; 5-4-3 principles of cloud computing; five essential charact…
Toong, Hoo-min D.; Gupta, Amar
1982-01-01
Describes the hardware, software, applications, and current proliferation of personal computers (microcomputers). Includes discussions of microprocessors, memory, output (including printers), application programs, the microcomputer industry, and major microcomputer manufacturers (Apple, Radio Shack, Commodore, and IBM). (JN)
DEFF Research Database (Denmark)
Chongtay, Rocio; Robering, Klaus
2016-01-01
In recent years, there has been a growing interest in and recognition of the importance of Computational Literacy, a skill generally considered to be necessary for success in the 21st century. While much research has concentrated on requirements, tools, and teaching methodologies for the acquisition of Computational Literacy at basic educational levels, focus on higher levels of education has been much less prominent. The present paper considers the case of courses for higher education programs within the Humanities. A model is proposed which conceives of Computational Literacy as a layered …
DEFF Research Database (Denmark)
Nielbo, Kristoffer Laigaard; Braxton, Donald M.; Upal, Afzal
2012-01-01
The computational approach has become an invaluable tool in many fields that are directly relevant to research in religious phenomena. Yet the use of computational tools is almost absent in the study of religion. Given that religion is a cluster of interrelated phenomena and that research concerning these phenomena should strive for multilevel analysis, this article argues that the computational approach offers new methodological and theoretical opportunities to the study of religion. We argue that the computational approach offers (1) an intermediary step between any theoretical construct and its targeted empirical space and (2) a new kind of data which allows the researcher to observe abstract constructs, estimate likely outcomes, and optimize empirical designs. Because sophisticated multilevel research is a collaborative project, we also seek to introduce to scholars of religion some …
Timmermans, Benjamin; Kuhn, Tobias; Beelen, Kaspar; Aroyo, Lora
2017-01-01
Climate change, vaccination, abortion, Trump: Many topics are surrounded by fierce controversies. The nature of such heated debates and their elements have been studied extensively in the social science literature. More recently, various computational approaches to controversy analysis have …
Indian Academy of Sciences (India)
IAS Admin
The emergence of supercomputers led to the use of computer simulation as an … Topics covered include scientific and engineering applications (e.g., TeraGrid secure gateway); collaborative computing; encryption, privacy, and protection from malicious software; and the physical layer.
International Nuclear Information System (INIS)
Niedzwiedzki, M.
1982-01-01
Physical foundations of, and developments in, transmission and emission computed tomography are presented. On the basis of the available literature and private communications, a comparison is made of the various transmission tomographs. A new technique of emission computed tomography (ECT), unknown in Poland, is described. Two ECT methods, positron emission tomography and single-photon emission tomography, are evaluated. (author)
Kersting, Kristian; Morik, Katharina
2016-01-01
The book at hand gives an overview of the state of the art research in Computational Sustainability as well as case studies of different application scenarios. This covers topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, industrial production and quality, etc. The book describes computational methods and possible application scenarios.
International Nuclear Information System (INIS)
Yeh, G.P.
2000-01-01
High-energy physics, nuclear physics, space sciences, and many other fields have large challenges in computing. In recent years, PCs have achieved performance comparable to the high-end UNIX workstations, at a small fraction of the price. We review the development and broad applications of commodity PCs as the solution to CPU needs, and look forward to the important and exciting future of large-scale PC computing
An efficient quantum scheme for Private Set Intersection
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-01
Private Set Intersection allows a client to privately compute the set intersection with the collaboration of the server, one of the most fundamental problems in privacy-preserving multiparty computation. In this paper, we first present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. It is therefore very suitable for big data services in the Cloud and for large-scale client-server networks.
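For contrast, the simplest classical baseline is naive hashing, sketched below with hypothetical identifiers; its communication grows with the server's set, and low-entropy items can be brute-forced from their hashes, which are exactly the costs the quantum scheme is designed to avoid:

    import hashlib

    def h(x: str) -> str:
        return hashlib.sha256(x.encode()).hexdigest()

    client_set = {"alice", "bob", "carol"}
    server_set = {"bob", "dave"}

    # Server publishes hashes of its items instead of the items themselves.
    server_hashes = {h(x) for x in server_set}

    # Client learns the intersection by hashing its own items and matching.
    intersection = {x for x in client_set if h(x) in server_hashes}
    print(intersection)   # {'bob'}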
Rational Multiparty Computation
Wallrabenstein, John Ross
2014-01-01
The field of rational cryptography considers the design of cryptographic protocols in the presence of rational agents seeking to maximize local utility functions. This departs from the standard secure multiparty computation setting, where players are assumed to be either honest or malicious. We detail the construction of both a two-party and a multiparty game-theoretic framework for constructing rational cryptographic protocols. Our framework specifies the utility function assumptions neces…
Arnold, J. O.
1987-01-01
With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials, with and without the presence of gases. Computational chemistry has applications in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.
Workshop on Computational Optimization
2015-01-01
Our everyday life is unthinkable without optimization. We try to minimize our effort and to maximize the achieved profit. Many real-world and industrial problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks. This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2013. It presents recent advances in computational optimization. The volume includes important real-life problems such as parameter settings for controlling processes in a bioreactor, resource-constrained project scheduling, problems arising in transport services, error-correcting codes, and optimal system performance and energy consumption, and it shows how to develop algorithms for them based on new metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming and others.
Sustainable computational science
DEFF Research Database (Denmark)
Rougier, Nicolas; Hinsen, Konrad; Alexandre, Frédéric
2017-01-01
Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and they may feel confident their research … workflows, in particular in peer reviews. Existing journals have been slow to adapt: source codes are rarely requested, hardly ever actually executed to check that they produce the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from that of other traditional scientific journals. ReScience …
Setting Goals for Achievement in Physical Education Settings
Baghurst, Timothy; Tapps, Tyler; Kensinger, Weston
2015-01-01
Goal setting has been shown to improve student performance, motivation, and task completion in academic settings. Although goal setting is utilized by many education professionals to help students set realistic and proper goals, physical educators may not be using goal setting effectively. Without incorporating all three types of goals and…
Towards topological quantum computer
Melnikov, D.; Mironov, A.; Mironov, S.; Morozov, A.; Morozov, An.
2018-01-01
Quantum R-matrices, the entangling deformations of non-entangling (classical) permutations, provide a distinguished basis in the space of unitary evolutions and, consequently, a natural choice for a minimal set of basic operations (universal gates) for quantum computation. Yet they play a special role in group theory, integrable systems and the modern theory of non-perturbative calculations in quantum field and string theory. Despite recent developments in those fields, the idea of topological quantum computing, and the use of R-matrices in particular, practically reduces to a reinterpretation of standard sets of quantum gates, and subsequently algorithms, in terms of the available topological ones. In this paper we summarize a modern view on quantum R-matrix calculus and propose to look at the R-matrices acting in the space of irreducible representations, which are unitary for real-valued couplings in Chern-Simons theory, as the fundamental set of universal gates for a topological quantum computer. Such an approach calls for a more thorough investigation of the relation between topological invariants of knots and quantum algorithms.
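The braid-group origin of such gates can be made concrete with a short numerical check (an illustration using the unitary 'Bell' solution of the Yang-Baxter equation popularized by Kauffman and Lomonaco, not a computation from this paper): a two-qubit R-matrix can play the role of a braiding operator only if R12 R23 R12 = R23 R12 R23 on three strands.

    import numpy as np

    # Bell-basis R-matrix: a unitary solution of the braid-form Yang-Baxter
    # equation that is also an entangling two-qubit gate.
    R = np.array([[ 1, 0, 0, 1],
                  [ 0, 1,-1, 0],
                  [ 0, 1, 1, 0],
                  [-1, 0, 0, 1]], dtype=float) / np.sqrt(2)

    I2 = np.eye(2)
    R12 = np.kron(R, I2)   # R acting on strands 1 and 2 of three strands
    R23 = np.kron(I2, R)   # R acting on strands 2 and 3

    print(np.linalg.norm(R12 @ R23 @ R12 - R23 @ R12 @ R23))  # ~0: braid relation holds
    print(np.allclose(R.T @ R, np.eye(4)))                    # True: R is unitary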
Albash, Tameem; Lidar, Daniel A.
2018-01-01
Adiabatic quantum computing (AQC) started as an approach to solving optimization problems and has evolved into an important universal alternative to the standard circuit model of quantum computing, with deep connections to both classical and quantum complexity theory and condensed matter physics. This review gives an account of the major theoretical developments in the field, while focusing on the closed-system setting. The review is organized around a series of topics that are essential to an understanding of the underlying principles of AQC, its algorithmic accomplishments and limitations, and its scope in the more general setting of computational complexity theory. Several variants are presented of the adiabatic theorem, the cornerstone of AQC, and examples are given of explicit AQC algorithms that exhibit a quantum speedup. An overview of several proofs of the universality of AQC and related Hamiltonian quantum complexity theory is given. Considerable space is devoted to stoquastic AQC, the setting of most AQC work to date, where obstructions to success and their possible resolutions are discussed.
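The canonical AQC schematic (standard background rather than a formulation specific to this review) interpolates between a driver Hamiltonian $H_0$, whose ground state is easy to prepare, and a problem Hamiltonian $H_1$:
\[ H(s) = (1-s)\,H_0 + s\,H_1, \qquad s = t/T \in [0,1], \]
with a typical sufficient condition on the runtime of the form
\[ T \gg \max_{s} \frac{\|\partial_s H(s)\|}{\Delta(s)^2}, \]
where $\Delta(s)$ is the gap between the two lowest energy levels; the adiabatic theorem then guarantees the system ends near the ground state of $H_1$, which encodes the problem's solution.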
Cubical sets as a classifying topos
DEFF Research Database (Denmark)
Spitters, Bas
Coquand’s cubical set model for homotopy type theory provides the basis for a computational interpretation of the univalence axiom and some higher inductive types, as implemented in the cubical proof assistant. We show that the underlying cube category is the opposite of the Lawvere theory of De Morgan algebras. The topos of cubical sets itself classifies the theory of ‘free De Morgan algebras’. This provides us with a topos with an internal ‘interval’. Using this interval we construct a model of type theory following van den Berg and Garner. We are currently investigating the precise relation …
Some numerical studies of interface advection properties of level set ...
Indian Academy of Sciences (India)
Interface-tracking methods follow explicit computational elements moving through an Eulerian grid, whereas capturing methods need no explicit elements to mark the interface location. The interface is implicitly defined (captured) as the location of the discontinuity in a marker function or, equivalently, as a particular contour of a smooth level set function. This level set function is advected with the background flow field and thus carries the interface along with the flow.
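A minimal sketch of that advection step (my own simplifying assumptions: one dimension, constant velocity, periodic domain, first-order upwind differencing):

    import numpy as np

    # Advect a 1D level set function phi with constant speed u > 0; the
    # captured interface is a zero crossing of phi and moves with the flow.
    N, u = 400, 1.0
    dx = 1.0 / N
    dt = 0.5 * dx / u                      # CFL number 0.5 for stability
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    phi = np.sin(2 * np.pi * (x - 0.2))    # ascending zero crossing at x = 0.2

    for _ in range(200):                   # advance to t = 200*dt = 0.25
        phi -= u * dt / dx * (phi - np.roll(phi, 1))   # upwind update

    # The interface has been carried a distance u*t = 0.25, to x ~ 0.45.
    i = np.where((phi < 0) & (np.roll(phi, -1) >= 0))[0][0]
    print(x[i])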
DEFF Research Database (Denmark)
Flesch, Benjamin; Hussain, Abid; Vatrapu, Ravi
2015-01-01
This paper presents a state-of-the-art visual analytics dashboard, Social Set Visualizer (SoSeVi), of approximately 90 million Facebook actions from 11 different companies that have been mentioned in the traditional media in relation to garment factory accidents in Bangladesh. The enterprise … cutting-edge open source visual analytics libraries from D3.js and the creation of new visualizations (actor mobility across time, conversational comets, etc.). Evaluation of the dashboard, consisting of technical testing, usability testing, and domain-specific testing with CSR students, yielded positive results.
Hartfiel, Darald J
1998-01-01
In this study extending classical Markov chain theory to handle fluctuating transition matrices, the author develops a theory of Markov set-chains and provides numerous examples showing how that theory can be applied. Chapters are concluded with a discussion of related research. Readers who can benefit from this monograph are those interested in, or involved with, systems whose data is imprecise or that fluctuate with time. A background equivalent to a course in linear algebra and one in probability theory should be sufficient.
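A toy sketch of the idea (the two matrices and the horizon below are hypothetical choices, not from the monograph): when the transition matrix may be any member of a known finite set at each step, the chain maps an initial distribution to a whole set of reachable distributions, which can be bounded by enumeration.

    import numpy as np
    from itertools import product

    # Two stochastic matrices between which the transitions fluctuate.
    P1 = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
    P2 = np.array([[0.7, 0.3],
                   [0.4, 0.6]])

    x0 = np.array([1.0, 0.0])     # start surely in state 0
    k = 5                         # number of steps

    # Enumerate every length-k choice sequence and collect the resulting
    # distributions; their componentwise extremes bound the reachable set.
    dists = []
    for seq in product((P1, P2), repeat=k):
        x = x0
        for P in seq:
            x = x @ P
        dists.append(x)
    print(np.min(dists, axis=0), np.max(dists, axis=0))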
Directory of Open Access Journals (Sweden)
López de Mántaras Badia, Ramon
2013-12-01
New technologies, and in particular artificial intelligence, are drastically changing the nature of creative processes. Computers are playing very significant roles in creative activities such as music, architecture, fine arts, and science. Indeed, the computer is already a canvas, a brush, a musical instrument, and so on. However, we believe that we must aim at more ambitious relations between computers and creativity. Rather than just seeing the computer as a tool to help human creators, we could see it as a creative entity in its own right. This view has triggered a new subfield of Artificial Intelligence called Computational Creativity. This article addresses the question of the possibility of achieving computational creativity through some examples of computer programs capable of replicating some aspects of creative behavior in the fields of music and science.
Enhanced delegated computing using coherence
Barz, Stefanie; Dunjko, Vedran; Schlederer, Florian; Moore, Merritt; Kashefi, Elham; Walmsley, Ian A.
2016-03-01
A longstanding question is whether it is possible to delegate computational tasks securely—such that neither the computation nor the data is revealed to the server. Recently, both a classical and a quantum solution to this problem were found [C. Gentry, in Proceedings of the 41st Annual ACM Symposium on the Theory of Computing (Association for Computing Machinery, New York, 2009), pp. 167-178; A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science (IEEE Computer Society, Los Alamitos, CA, 2009), pp. 517-526]. Here, we study the first step towards the interplay between classical and quantum approaches and show how coherence can be used as a tool for secure delegated classical computation. We show that a client with limited computational capacity—restricted to an XOR gate—can perform universal classical computation by manipulating information carriers that may occupy superpositions of two states. Using single photonic qubits or coherent light, we experimentally implement secure delegated classical computations between an independent client and a server, which are installed in two different laboratories and separated by 50 m. The server has access to the light sources and measurement devices, whereas the client may use only a restricted set of passive optical devices to manipulate the information-carrying light beams. Thus, our work highlights how minimal quantum and classical resources can be combined and exploited for classical computing.
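The restriction of the client to an XOR gate is what makes the result notable: classically, XOR alone already suffices for information-theoretically secure masking (the one-time pad), but not for universal computation. A toy sketch of that masking step (illustrative only, not the photonic protocol):

    import secrets

    def xor_mask(bits, pad):
        # One-time-pad masking: the only operation the restricted client needs.
        return [b ^ p for b, p in zip(bits, pad)]

    data = [1, 0, 1, 1, 0]
    pad = [secrets.randbits(1) for _ in data]   # client's secret key

    masked = xor_mask(data, pad)       # what the server sees: uniformly random
    recovered = xor_mask(masked, pad)  # XOR with the same pad undoes the mask
    print(masked, recovered == data)   # e.g. [0, 1, 1, 0, 0] True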
Efficiently outsourcing multiparty computation under multiple keys
Peter, Andreas; Tews, Erik; Tews, Erik; Katzenbeisser, Stefan
2013-01-01
Secure multiparty computation enables a set of users to evaluate certain functionalities on their respective inputs while keeping these inputs encrypted throughout the computation. In many applications, however, outsourcing these computations to an untrusted server is desirable, so that the server …
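The paper's setting (outsourced computation under multiple encryption keys) builds on the general idea of computing on data that no single party sees in the clear. As background only, not the paper's scheme, here is the simplest flavor of secure multiparty computation, additive secret sharing of a sum (the modulus and inputs are arbitrary illustrative choices):

    import secrets

    P = 2**61 - 1   # a public prime modulus

    def share(x, n=3):
        # Split x into n additive shares; each share alone is uniformly random.
        shares = [secrets.randbelow(P) for _ in range(n - 1)]
        shares.append((x - sum(shares)) % P)
        return shares

    inputs = [42, 7, 13]                      # one private input per party
    all_shares = [share(x) for x in inputs]

    # Party j locally adds the j-th share of every input...
    partial_sums = [sum(col) % P for col in zip(*all_shares)]
    # ...and only the recombined total is revealed: 42 + 7 + 13 = 62.
    print(sum(partial_sums) % P)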
Computational gestalts and perception thresholds.
Desolneux, Agnès; Moisan, Lionel; Morel, Jean-Michel
2003-01-01
In 1923, Max Wertheimer proposed a research programme and method in visual perception. He conjectured the existence of a small set of geometric grouping laws governing the perceptual synthesis of phenomenal objects, or "gestalt", from the atomic retina input. In this paper, we review this set of geometric grouping laws, using the works of Metzger, Kanizsa and their schools. In continuation, we explain why the Gestalt theory research programme can be translated into a Computer Vision programme. This translation is not straightforward, since Gestalt theory never addressed two fundamental matters: image sampling and image information measurements. Using advances on these two matters, we shall show that gestalt grouping laws can be translated into quantitative laws allowing the automatic computation of gestalts in digital images. From the psychophysical viewpoint, a main issue is raised: the computer vision gestalt detection methods deliver predictable perception thresholds. Thus, we are set in a position where we can build artificial images and check whether some kind of agreement can be found between the computationally predicted thresholds and the psychophysical ones. We describe and discuss two preliminary sets of experiments, where we compared the gestalt detection performance of several subjects with the predictable detection curve. In our opinion, the results of this experimental comparison support the idea of a much more systematic interaction between computational predictions in Computer Vision and psychophysical experiments.
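The "predictable perception thresholds" come from the a contrario methodology these authors developed: a candidate gestalt is accepted when its number of false alarms (NFA), the expected count of equally extreme events in pure noise, falls below a threshold epsilon. A minimal sketch with a binomial noise model (the numbers below are hypothetical):

    from math import comb

    def binomial_tail(n, k, p):
        # P(at least k successes in n Bernoulli(p) trials).
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

    def nfa(num_tests, n, k, p):
        # Number of false alarms: the expected count of events at least this
        # extreme in pure noise; the event is eps-meaningful when nfa <= eps.
        return num_tests * binomial_tail(n, k, p)

    # E.g. a candidate alignment: 25 of 30 points agree in direction, with
    # chance agreement probability p = 1/16, among 10**6 tested segments.
    print(nfa(1_000_000, 30, 25, 1/16))   # far below 1: a meaningful gestalt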
Analysis of successive data sets
Spreeuwers, Lieuwe Jan; Breeuwer, Marcel; Haselhoff, Eltjo Hans
2008-01-01
The invention relates to the analysis of successive data sets. A local intensity variation is formed from such successive data sets, that is, from data values in successive data sets at corresponding positions in each of the data sets. A region of interest is localized in the individual data sets on …