WorldWideScience

Sample records for computationally efficient formal

  1. A formalization of computational trust

    NARCIS (Netherlands)

    Güven - Ozcelebi, C.; Holenderski, M.J.; Ozcelebi, T.; Lukkien, J.J.

    2018-01-01

    Computational trust aims to quantify trust and is studied by many disciplines including computer science, social sciences and business science. We propose a formal computational trust model, including its parameters and operations on these parameters, as well as a step-by-step guide to compute trust

  2. A computational formalization for partial evaluation

    DEFF Research Database (Denmark)

    Hatcliff, John; Danvy, Olivier

    1997-01-01

    We formalize a partial evaluator for Eugenio Moggi's computational metalanguage. This formalization gives an evaluation-order independent view of binding-time analysis and program specialization, including a proper treatment of call unfolding. It also enables us to express the essence of `control...

  3. Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo

    NARCIS (Netherlands)

    Filippi, Claudia; Assaraf, R.; Moroni, S.

    2016-01-01

    We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the

  4. Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Filippi, Claudia, E-mail: c.filippi@utwente.nl [MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands); Assaraf, Roland, E-mail: assaraf@lct.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, CNRS, Laboratoire de Chimie Théorique CC 137-4, place Jussieu F-75252 Paris Cedex 05 (France); Moroni, Saverio, E-mail: moroni@democritos.it [CNR-IOM DEMOCRITOS, Istituto Officina dei Materiali, and SISSA Scuola Internazionale Superiore di Studi Avanzati, Via Bonomea 265, I-34136 Trieste (Italy)

    2016-05-21

    We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.
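
    The core matrix identity behind such formalisms is that a parameter derivative of a (log) determinant reduces to a trace. The following fragment is an illustrative sketch only, not the authors' algorithm; the matrices are random stand-ins. It checks d/dλ ln det A(λ) = tr(A⁻¹ dA/dλ) against a finite difference.

        import numpy as np

        # Illustrative only (not the paper's algorithm): the trace identity
        # d/dl ln det A(l) = tr(A^{-1} dA/dl) that underlies cheap parameter
        # derivatives of Slater determinants.
        rng = np.random.default_rng(0)
        n = 5
        A0 = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned base matrix
        dA = rng.normal(size=(n, n))                   # direction of parameter variation

        def logdet(lam):
            # ln |det A(lam)| for A(lam) = A0 + lam * dA
            return np.linalg.slogdet(A0 + lam * dA)[1]

        analytic = np.trace(np.linalg.solve(A0, dA))   # trace identity at lam = 0
        h = 1e-6
        numeric = (logdet(h) - logdet(-h)) / (2 * h)   # finite-difference check
        print(analytic, numeric)                       # should agree closely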

  5. Sound Computational Interpretation of Formal Encryption with Composed Keys

    NARCIS (Netherlands)

    Laud, P.; Corin, R.J.; In Lim, J.; Hoon Lee, D.

    2003-01-01

    The formal and computational views of cryptography have been related by the seminal work of Abadi and Rogaway. In their work, a formal treatment of encryption that uses atomic keys is justified in the computational world. However, many proposed formal approaches allow the use of composed keys, where

  6. A computational formalization for partial evaluation

    DEFF Research Database (Denmark)

    Hatcliff, John; Danvy, Olivier

    1996-01-01

    We formalize a partial evaluator for Eugenio Moggi's computational metalanguage. This formalization gives an evaluation-order independent view of binding-time analysis and program specialization, including a proper treatment of call unfolding. It also enables us to express the essence of `control-based binding-time improvements' for let expressions. Specifically, we prove that the binding-time improvements given by `continuation-based specialization' can be expressed in the metalanguage via monadic laws.

  7. A Synthesized Framework for Formal Verification of Computing Systems

    Directory of Open Access Journals (Sweden)

    Nikola Bogunovic

    2003-12-01

    The design process of computing systems has gradually evolved to a level that encompasses formal verification techniques. However, the integration of formal verification techniques into a methodical design procedure has many inherent miscomprehensions and problems. The paper explicates the discrepancy between the real system implementation and the abstracted model that is actually used in the formal verification procedure. Particular attention is paid to the seamless integration of all phases of the verification procedure, encompassing the definition of the specification language and the denotation and execution of the conformance relation between the abstracted model and its intended behavior. The concealed obstacles are exposed, computationally expensive steps identified and possible improvements proposed.

  8. The development of mobile computation and the related formal description

    International Nuclear Information System (INIS)

    Jin Yan; Yang Xiaozong

    2003-01-01

    The description of and research on formal representation in mobile computation is instructive for resolving issues of state transmission, domain administration and authentication. This paper presents the communicating and computational processes from the viewpoint of formal calculus and, moreover, constructs a practical application based on mobile ambients. Finally, it outlines future work and directions. (authors)

  9. Functional Automata - Formal Languages for Computer Science Students

    Directory of Open Access Journals (Sweden)

    Marco T. Morazán

    2014-12-01

    An introductory formal languages course exposes advanced undergraduate and early graduate students to automata theory, grammars, constructive proofs, computability, and decidability. Programming students find these topics to be challenging or, in many cases, overwhelming and on the fringe of Computer Science. The existence of this perception is not completely absurd since students are asked to design and prove correct machines and grammars without being able to experiment or get immediate feedback, which is essential in a learning context. This article puts forth the thesis that the theory of computation ought to be taught using tools for actually building computations. It describes the implementation and the classroom use of a library, FSM, designed to provide students with the opportunity to experiment and test their designs using state machines, grammars, and regular expressions. Students are able to perform random testing before proceeding with a formal proof of correctness. That is, students can test their designs much like they do in a programming course. In addition, the library easily allows students to implement the algorithms they develop as part of the constructive proofs they write. Providing students with this ability ought to be a new trend in the formal languages classroom.

  10. Efficiency improvement in proton dose calculations with an equivalent restricted stopping power formalism

    Science.gov (United States)

    Maneval, Daniel; Bouchard, Hugo; Ozell, Benoît; Després, Philippe

    2018-01-01

    The equivalent restricted stopping power formalism is introduced for proton mean energy loss calculations under the continuous slowing down approximation. The objective is the acceleration of Monte Carlo dose calculations by allowing larger steps while preserving accuracy. The fractional energy loss per step length ɛ was obtained with a secant method and a Gauss-Kronrod quadrature estimation of the integral equation relating the mean energy loss to the step length. The midpoint rule of the Newton-Cotes formulae was then used to solve this equation, allowing the creation of a lookup table linking ɛ to the equivalent restricted stopping power L_eq, used here as a key physical quantity. The mean energy loss for any step length was simply defined as the product of the step length with L_eq. Proton inelastic collisions with electrons were added to GPUMCD, a GPU-based Monte Carlo dose calculation code. The proton continuous slowing-down was modelled with the L_eq formalism. GPUMCD was compared to Geant4 in a validation study where ionization processes alone were activated and a voxelized geometry was used. The energy straggling was first switched off to validate the L_eq formalism alone. Dose differences between Geant4 and GPUMCD were smaller than 0.31% for the L_eq formalism. The mean error and the standard deviation were below 0.035% and 0.038% respectively. 99.4 to 100% of GPUMCD dose points were consistent with a 0.3% dose tolerance. GPUMCD 80% falloff positions (R80) matched Geant4's R80 within 1 μm. With the energy straggling, dose differences were below 2.7% in the Bragg peak falloff and smaller than 0.83% elsewhere. The R80 positions matched within 100 μm. The overall computation times to transport one million protons with GPUMCD were 31-173 ms. Under similar conditions, Geant4 computation times were 1.4-20 h. The L_eq formalism led to an intrinsic efficiency gain factor ranging between 30-630, increasing with the prescribed accuracy of simulations. The
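
    As a toy illustration of the lookup-table idea (this is not the GPUMCD implementation; the stopping-power function and every number below are invented for the sketch, and Gauss-Legendre quadrature stands in for the Gauss-Kronrod rule of the paper), the fractional energy loss ɛ for a step s can be found by a secant iteration on the CSDA integral, after which L_eq = ɛE/s turns any step length into a mean energy loss by a single multiplication:

        import numpy as np

        def S(E):
            # hypothetical stopping power (MeV/cm); a toy power law, NOT real data
            return 0.17 * E**-0.82 * 100.0

        def csda_path(E_hi, E_lo, npts=64):
            # step length that slows a proton from E_hi to E_lo:  s = int dE'/S(E')
            # (Gauss-Legendre here for simplicity; the paper uses Gauss-Kronrod)
            x, w = np.polynomial.legendre.leggauss(npts)
            E = 0.5 * (E_hi - E_lo) * x + 0.5 * (E_hi + E_lo)
            return 0.5 * (E_hi - E_lo) * np.sum(w / S(E))

        def eps_for_step(E, s, tol=1e-12):
            # secant iteration for the fractional energy loss eps of a step s
            f = lambda eps: csda_path(E, E * (1.0 - eps)) - s
            e0, e1 = 1e-4, 1e-2
            while abs(e1 - e0) > tol:
                e0, e1 = e1, e1 - f(e1) * (e1 - e0) / (f(e1) - f(e0))
            return e1

        E, s = 150.0, 0.25                 # proton energy (MeV) and step length (cm)
        eps = eps_for_step(E, s)
        L_eq = eps * E / s                 # equivalent restricted stopping power
        print(eps, L_eq, s * L_eq)         # mean energy loss = step length * L_eq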

  11. Towards a Formal Framework for Computational Trust

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Krukow, Karl Kristian; Sassone, Vladimiro

    2006-01-01

    We define a mathematical measure for the quantitative comparison of probabilistic computational trust systems, and use it to compare a well-known class of algorithms based on the so-called beta model. The main novelty is that our approach is formal, rather than based on experimental simulation....

  12. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.

    Science.gov (United States)

    Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia

    2016-03-08

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.
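
    The elementary building block of any such overlap code is the single-determinant case, where the overlap reduces to a small determinant. A minimal numpy sketch (not the authors' recurring-intermediates algorithm; all matrices are random stand-ins) assuming both determinants are built from occupied molecular orbitals expressed in a common atomic-orbital basis:

        import numpy as np

        # Minimal building block (not the paper's full algorithm): the overlap of two
        # single-determinant wave functions with occupied MO coefficients C1, C2 over
        # a common AO basis with overlap matrix S_ao is det(C1^T S_ao C2).
        rng = np.random.default_rng(1)
        n_ao, n_occ = 8, 3
        A = rng.normal(size=(n_ao, n_ao))
        S_ao = A @ A.T + n_ao * np.eye(n_ao)   # toy symmetric positive-definite AO overlap

        C1 = rng.normal(size=(n_ao, n_occ))    # occupied MO coefficients, bra determinant
        C2 = rng.normal(size=(n_ao, n_occ))    # occupied MO coefficients, ket determinant

        overlap = np.linalg.det(C1.T @ S_ao @ C2)   # up to MO normalization factors
        print(overlap)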

  13. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes for benchmark selection are investigated. The result of this analysis is the formulation of the criteria of benchmark selection for flexible multibody formalisms. Based on them, the initial set of suitable benchmarks is described. Besides that, the evaluation measures are revised and extended.

  14. Combining Archetypes, Ontologies and Formalization Enables Automated Computation of Quality Indicators.

    Science.gov (United States)

    Legaz-García, María Del Carmen; Dentler, Kathrin; Fernández-Breis, Jesualdo Tomás; Cornet, Ronald

    2017-01-01

    ArchMS is a framework that represents clinical information and knowledge using ontologies in OWL, which facilitates semantic interoperability and thereby the exploitation and secondary use of clinical data. However, it does not yet support the automated assessment of quality of care. CLIF is a stepwise method to formalize quality indicators. The method has been implemented in the CLIF tool which supports its users in generating computable queries based on a patient data model which can be based on archetypes. To enable the automated computation of quality indicators using ontologies and archetypes, we tested whether ArchMS and the CLIF tool can be integrated. We successfully automated the process of generating SPARQL queries from quality indicators that have been formalized with CLIF and integrated them into ArchMS. Hence, ontologies and archetypes can be combined for the execution of formalized quality indicators.

  15. Formal Analysis of Dynamics Within Philosophy of Mind by Computer Simulation

    NARCIS (Netherlands)

    Bosse, T.; Schut, M.C.; Treur, J.

    2009-01-01

    Computer simulations can be useful tools to support philosophers in validating their theories, especially when these theories concern phenomena showing nontrivial dynamics. Such theories are usually informal, whilst for computer simulation a formally described model is needed. In this paper, a

  16. The Impact of Formal Education on Computer Literacy

    Directory of Open Access Journals (Sweden)

    Melita Milić

    2010-03-01

    In this paper we present a survey conducted on 149 students in Croatia. The research includes eighth-grade students of elementary schools, fourth-grade students of secondary schools and second-year university students, as the relevant age groups. Given that these age groups pass through some form of IT training in the course of their schooling, we wanted to explore how this affected their attainment of computer literacy. The main goal was to investigate how, and how much, formal education affects computer literacy in different age groups.

  17. Turchin's Relation for Call-by-Name Computations: A Formal Approach

    Directory of Open Access Journals (Sweden)

    Antonina Nepeivoda

    2016-07-01

    Supercompilation is a program transformation technique that was first described by V. F. Turchin in the 1970s. In supercompilation, Turchin's relation as a similarity relation on call-stack configurations is used both for call-by-value and call-by-name semantics to terminate unfolding of the program being transformed. In this paper, we give a formal grammar model of call-by-name stack behaviour. We classify the model in terms of the Chomsky hierarchy and then formally prove that Turchin's relation can terminate all computations generated by the model.

  18. Industrial applications of formal methods to model, design and analyze computer systems

    CERN Document Server

    Craigen, Dan

    1995-01-01

    Formal methods are mathematically-based techniques, often supported by reasoning tools, that can offer a rigorous and effective way to model, design and analyze computer systems. The purpose of this study is to evaluate international industrial experience in using formal methods. The cases selected are representative of industrial-grade projects and span a variety of application domains. The study had three main objectives: · To better inform deliberations within industry and government on standards and regulations; · To provide an authoritative record on the practical experience of formal m

  19. Efficient Techniques for Formal Verification of PowerPC 750 Executables, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We will develop an efficient tool for formal verification of PowerPC 750 executables. The PowerPC 750 architecture is used in the radiation-hardened RAD750...

  20. Formal verification of industrial control systems

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Verification of critical software is a high priority but a challenging task for industrial control systems. For many kinds of problems, testing is not an efficient method. Formal methods, such as model checking, appear to be an appropriate complementary method. However, it is not yet common to use model checking in industry, as this method typically needs formal methods expertise and huge computing power. In the EN-ICE-PLC section, we are working on a [methodology][1] and a tool ([PLCverif][2]) to overcome these challenges and to integrate formal verification in the development process of our PLC-based control systems. [1]: http://cern.ch/project-plc-formalmethods [2]: http://cern.ch/plcverif

  1. A formalized design process for bacterial consortia that perform logic computing.

    Directory of Open Access Journals (Sweden)

    Weiyue Ji

    The concept of microbial consortia is of great attractiveness in synthetic biology. Despite all its benefits, however, there are still problems remaining for large-scale multicellular gene circuits, for example, how to reliably design and distribute the circuits in microbial consortia with a limited number of well-behaved genetic modules and wiring quorum-sensing molecules. To manage this problem, here we propose a formalized design process: (i) determine the basic logic units (AND, OR and NOT gates) based on mathematical and biological considerations; (ii) establish rules to search and distribute the simplest logic design; (iii) assemble the assigned basic logic units in each logic-operating cell; and (iv) fine-tune the circuiting interface between logic operators. We analyzed in silico gene circuits with inputs ranging from two to four, comparing our method with pre-existing ones. Results showed that this formalized design process is more feasible concerning the number of cells required. Furthermore, as a proof of principle, an Escherichia coli consortium that performs the XOR function, a typical complex computing operation, was designed. The construction and characterization of logic operators is independent of "wiring" and provides predictive information for fine-tuning. This formalized design process provides guidance for the design of microbial consortia that perform distributed biological computation.
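
    As a plain-software analogy of step (iii) (purely illustrative; the paper's actual design rules and cell assignments differ), XOR can be decomposed into five single-gate "cells" wired by shared signals:

        # Toy illustration of distributing XOR over logic-operating "cells"
        # (not the paper's design rules): XOR(a, b) = OR(AND(a, NOT b), AND(NOT a, b)).
        # Each cell hosts one basic gate; the w* variables stand in for
        # quorum-sensing wiring molecules.

        def cell_not(x): return 1 - x
        def cell_and(x, y): return x & y
        def cell_or(x, y): return x | y

        def consortium_xor(a, b):
            w1 = cell_not(b)           # cell 1 broadcasts NOT b
            w2 = cell_not(a)           # cell 2 broadcasts NOT a
            w3 = cell_and(a, w1)       # cell 3 computes a AND (NOT b)
            w4 = cell_and(b, w2)       # cell 4 computes b AND (NOT a)
            return cell_or(w3, w4)     # cell 5 integrates the two signals

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, consortium_xor(a, b))   # truth table of XOR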

  2. Efficient formalism for treating tapered structures using the Fourier modal method

    DEFF Research Database (Denmark)

    Østerkryger, Andreas Dyhl; Gregersen, Niels

    2016-01-01

    We investigate the development of the mode occupations in tapered structures using the Fourier modal method. In order to use the Fourier modal method, tapered structures are divided into layers of uniform refractive index in the propagation direction and the optical modes are found within each layer. This is not very efficient and in this proceeding we take the first steps towards a more efficient formalism for treating tapered structures using the Fourier modal method. We show that the coupling coefficients through the structure are slowly varying and that only the first few modes...

  3. Slavnov-Taylor1.0: A Mathematica package for computation in BRST formalism

    CERN Document Server

    Picariello, Marco; Picariello, Marco; Torrente-Lujan, Emilio

    2004-01-01

    Slavnov-Taylor1.0 is a Mathematica package which allows us to perform automatic symbolic computation in the BRST formalism. This article serves as a self-contained guide to prospective users, and indicates the conventions and approximations used.

  4. Formal methods for dynamical systems : 13th International School on Formal Methods for the Design of Computer, Communication, and Software Systems, SFM 2013, Bertinoro, Italy, June 17-22, 2013 : advanced lectures

    NARCIS (Netherlands)

    Bernardo, M.; Vink, de E.P.; Di Pierro, A.; Wiklicky, H.

    2013-01-01

    Preface. This volume presents a set of papers accompanying the lectures of the 13th International School on Formal Methods for the Design of Computer, Communication, and Software Systems (SFM). This series of schools addresses the use of formal methods in computer science as a prominent approach to

  5. Formal verification - Robust and efficient code: Introduction to Formal Verification

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    In general, FV means "proving that certain properties hold for a given system using formal mathematics". This definition can certainly feel daunting, however, as we will learn, we can reap benefits from the paradigm without digging too deep into ...

  6. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits.

    Science.gov (United States)

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-12-24

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits.

  7. Formalizing Informal Logic

    Directory of Open Access Journals (Sweden)

    Douglas Walton

    2015-12-01

    This paper presents a formalization of informal logic using the Carneades Argumentation System (CAS), a formal, computational model of argument that consists of a formal model of argument graphs and audiences. Conflicts between pro and con arguments are resolved using proof standards, such as preponderance of the evidence. CAS also formalizes argumentation schemes. Schemes can be used to check whether a given argument instantiates the types of argument deemed normatively appropriate for the type of dialogue.

  8. DEMONIC programming: a computational language for single-particle equilibrium thermodynamics, and its formal semantics.

    Directory of Open Access Journals (Sweden)

    Samson Abramsky

    2015-11-01

    Maxwell's Demon, 'a being whose faculties are so sharpened that he can follow every molecule in its course', has been the centre of much debate about its abilities to violate the second law of thermodynamics. Landauer's hypothesis, that the Demon must erase its memory and incur a thermodynamic cost, has become the standard response to Maxwell's dilemma, and its implications for the thermodynamics of computation reach into many areas of quantum and classical computing. It remains, however, still a hypothesis. Debate has often centred around simple toy models of a single particle in a box. Despite their simplicity, the ability of these systems to accurately represent thermodynamics (specifically, to satisfy the second law) and whether or not they display Landauer Erasure has been a matter of ongoing argument. The recent Norton-Ladyman controversy is one such example. In this paper we introduce a programming language to describe these simple thermodynamic processes, and give a formal operational semantics and program logic as a basis for formal reasoning about thermodynamic systems. We formalise the basic single-particle operations as statements in the language, and then show that the second law must be satisfied by any composition of these basic operations. This is done by finding a computational invariant of the system. We show, furthermore, that this invariant requires an erasure cost to exist within the system, equal to kT ln 2 for a bit of information: Landauer Erasure becomes a theorem of the formal system. The Norton-Ladyman controversy can therefore be resolved in a rigorous fashion, and moreover the formalism we introduce gives a set of reasoning tools for further analysis of Landauer erasure, which are provably consistent with the second law of thermodynamics.

  9. Efficient computation of hashes

    International Nuclear Information System (INIS)

    Lopes, Raul H C; Franqueira, Virginia N L; Hobson, Peter R

    2014-01-01

    The sequential computation of hashes, which lies at the core of many distributed storage systems and is found, for example, in grid services, can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
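
    A minimal sketch of the parallel hash-tree idea (not the paper's Keccak prototype; the chunk size and the duplicate-last-node padding rule are arbitrary choices made here): leaves are hashed concurrently, then digests are combined pairwise up to a single root.

        import hashlib
        from concurrent.futures import ProcessPoolExecutor

        def h(data: bytes) -> bytes:
            return hashlib.sha3_256(data).digest()

        def merkle_root(data: bytes, chunk: int = 1 << 20) -> bytes:
            # Split into leaves and hash them in parallel
            leaves = [data[i:i + chunk] for i in range(0, len(data), chunk)] or [b""]
            with ProcessPoolExecutor() as pool:
                level = list(pool.map(h, leaves))
            # Combine digests pairwise up to the root (duplicate the last one if odd)
            while len(level) > 1:
                if len(level) % 2:
                    level.append(level[-1])
                level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            return level[0]

        if __name__ == "__main__":
            print(merkle_root(b"x" * (8 << 20)).hex())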

  10. 40 years of formal methods

    DEFF Research Database (Denmark)

    Bjørner, Dines; Havelund, Klaus

    2014-01-01

    In this "40 years of formal methods" essay we shall first delineate, Sect. 1, what we mean by method, formal method, computer science, computing science, software engineering, and model-oriented and algebraic methods. Based on this, we shall characterize a spectrum from specification-oriented met...

  11. Efficient computation of argumentation semantics

    CERN Document Server

    Liao, Beishui

    2013-01-01

    Efficient Computation of Argumentation Semantics addresses argumentation semantics and systems, introducing readers to cutting-edge decomposition methods that drive increasingly efficient logic computation in AI and intelligent systems. Such complex and distributed systems are increasingly used in the automation and transportation systems field, and particularly autonomous systems, as well as more generic intelligent computation research. The Series in Intelligent Systems publishes titles that cover state-of-the-art knowledge and the latest advances in research and development in intelligen

  12. Synthesis of Efficient Structures for Concurrent Computation.

    Science.gov (United States)

    1983-10-01

    A formal presentation of these techniques, called virtualisation and aggregation, can be found in [King-83]. [The remainder of this record is table-of-contents and figure-caption residue from the scanned report, mentioning census functions, user-assisted aggregation, a simple parallel structure for broadcasting, and the internal structure of a prefix computation network.]

  13. Formal modeling of a system of chemical reactions under uncertainty.

    Science.gov (United States)

    Ghosh, Krishnendu; Schlipf, John

    2014-10-01

    We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of the extracellular-signal-regulated kinase (ERK) pathway.
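
    The two abstraction styles named in the abstract can be pictured on a single mass-action term v = k[A][B] with imprecise data (an illustrative sketch only, not the paper's model-construction algorithms; all bounds are invented):

        # Toy contrast of the two approximations named in the abstract:
        # interval approximation propagates the full uncertainty bounds,
        # midpoint approximation collapses each quantity to its midpoint.
        class Interval:
            def __init__(self, lo, hi): self.lo, self.hi = lo, hi
            def __mul__(self, o):
                c = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
                return Interval(min(c), max(c))
            def mid(self): return 0.5 * (self.lo + self.hi)
            def __repr__(self): return f"[{self.lo:.4g}, {self.hi:.4g}]"

        k = Interval(0.8, 1.2)      # imprecise rate constant
        A = Interval(1.9, 2.1)      # imprecise concentrations
        B = Interval(0.4, 0.6)

        print("interval approximation:", k * A * B)
        print("midpoint approximation:", k.mid() * A.mid() * B.mid())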

  14. The Status of Interactivity in Computer Art: Formal Apories

    Directory of Open Access Journals (Sweden)

    João Castro Pinto

    2011-12-01

    Contemporary art, particularly that which is produced by computer technologies capable of receiving data input via interactive devices (sensors and controllers), constitutes an emerging expressive medium of interdisciplinary nature, which implies the need for a critical look at its constitution and artistic functions. To consider interactive art as a form of artistic expression that falls under the present categorization implies the acceptance of the participation of the spectator in the production of the work of art, supposedly at the time of its origin and/or during its creation. When we examine the significance of the formal status of interactivity, assuming as a theoretical starting point the referred premises and reducing it to a phenomenological point of view of artistic creation, we quickly fall into difficulties of conceptual definitions and structural apories [1]. The fundamental aim of this research is to formally define the status of interactive art, by performing a phenomenological examination of the creative process of this specific art, establishing crucial distinctions in order to develop a hermeneutics in favor of the creation of new perspectives and aesthetic frameworks. What is interactive creation? Is interactivity, from the computing artistic creativity point of view, the exponentiation of the concept of the open work of art (ECO 2009)? Does interactive art correspond to an a priori projective and unachievable meta-art? What is the status of the artist and of the spectator in relation to an interactive work of art? What ontic and factical conditions are postulated as necessary in order to determine an artistic product as co-created? What apories do we find along the progressive process of reaching a clarifying conceptual definition? This brief investigation will seek to contribute to the study of this issue, intending ultimately, and above all, to expose pertinent lines of inquiry rather than to provide definite scientific and

  15. Formal definition of coherency and computation of minimal cut sequences for binary dynamic and repairable systems

    International Nuclear Information System (INIS)

    Chaux, Pierre-Yves

    2013-01-01

    Preventive risk assessment of a complex system relies on dynamic models which describe the link between the system failure and the scenarios of failure and repair events of its components. The qualitative analysis of a binary dynamic and repairable system aims at computing and analysing the scenarios that lead to the system failure. Since such systems describe a large set of scenarios, only the most representative ones, called Minimal Cut Sequences (MCS), are of interest to the safety engineer. The lack of a formal definition for the MCS has generated multiple definitions, either specific to a given model (and thus not generic) or informal. This work proposes i) a formal framework and definition for the MCS that stays independent of the reliability model used, ii) a methodology to compute them using properties extracted from their formal definition, iii) an extension of the formal framework to multi-state components in order to perform the qualitative analysis of Boolean logic Driven Markov Processes (BDMP) models. Under the hypothesis that the scenarios implicitly described by any reliability model can always be represented by a finite automaton, this work defines coherency for dynamic and repairable systems as the way to give a minimal representation of all scenarios that lead to the system failure. (author)

  16. δM formalism and anisotropic chaotic inflation power spectrum

    Science.gov (United States)

    Talebian-Ashkezari, A.; Ahmadi, N.

    2018-05-01

    A new analytical approach to linear perturbations in anisotropic inflation has been introduced in [A. Talebian-Ashkezari, N. Ahmadi and A.A. Abolhasani, JCAP 03 (2018) 001] under the name of δM formalism. In this paper we apply the mentioned approach to a model of anisotropic inflation driven by a scalar field, coupled to the kinetic term of a vector field with a U(1) symmetry. The δM formalism provides an efficient way of computing tensor-tensor, tensor-scalar as well as scalar-scalar 2-point correlations that are needed for the analysis of the observational features of an anisotropic model on the CMB. A comparison between δM results and the tedious calculations using the in-in formalism shows the aptitude of the δM formalism in calculating accurate two-point correlation functions between physical modes of the system.

  17. Propagator formalism and computer simulation of restricted diffusion behaviors of inter-molecular multiple-quantum coherences

    International Nuclear Information System (INIS)

    Cai Congbo; Chen Zhong; Cai Shuhui; Zhong Jianhui

    2005-01-01

    In this paper, behaviors of single-quantum coherences and inter-molecular multiple-quantum coherences under restricted diffusion in nuclear magnetic resonance experiments were investigated. The propagator formalism based on the loss of spin phase memory during random motion was applied to describe the diffusion-induced signal attenuation. The exact expression of the signal attenuation under the short gradient pulse approximation for restricted diffusion between two parallel plates was obtained using this propagator method. For long gradient pulses, a modified formalism was proposed. The simulated signal attenuation under the effects of gradient pulses of different width based on the Monte Carlo method agrees with the theoretical predictions. The propagator formalism and computer simulation can provide convenient, intuitive and precise methods for the study of the diffusion behaviors.
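
    The simulation idea can be checked with a very small Monte Carlo of this kind (a toy sketch, not the authors' code; all parameter values are invented): random walkers diffuse between two reflecting plates, and the short-gradient-pulse signal is E(q) = |⟨exp(iq(z(Δ) − z(0)))⟩|.

        import numpy as np

        # Toy restricted-diffusion Monte Carlo between reflecting plates at z = 0 and
        # z = L, with the short-gradient-pulse attenuation estimator (illustrative only).
        rng = np.random.default_rng(4)
        L, D, Delta = 10e-6, 2e-9, 20e-3      # plate gap (m), diffusivity (m^2/s), time (s)
        steps, walkers = 2000, 20000
        dt = Delta / steps
        z0 = rng.uniform(0, L, walkers)
        z = z0.copy()
        for _ in range(steps):
            z += rng.normal(0.0, np.sqrt(2 * D * dt), walkers)
            z = np.abs(z)                      # reflect at z = 0
            z = L - np.abs(L - z)              # reflect at z = L
        q = 2 * np.pi * 5e4                    # gradient wavevector (1/m), toy value
        E = np.abs(np.mean(np.exp(1j * q * (z - z0))))
        print(E)                               # attenuated signal amplitude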

  18. Power-efficient computer architectures recent advances

    CERN Document Server

    Själander, Magnus; Kaxiras, Stefanos

    2014-01-01

    As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies, toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture.Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp

  19. A primer on the energy efficiency of computing

    Energy Technology Data Exchange (ETDEWEB)

    Koomey, Jonathan G. [Research Fellow, Steyer-Taylor Center for Energy Policy and Finance, Stanford University (United States)

    2015-03-30

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.

  20. GATE: Improving the computational efficiency

    International Nuclear Information System (INIS)

    Staelens, S.; De Beenhouwer, J.; Kruecker, D.; Maigne, L.; Rannou, F.; Ferrer, L.; D'Asseler, Y.; Buvat, I.; Lemahieu, I.

    2006-01-01

    GATE is a software package dedicated to Monte Carlo simulations in Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). An important disadvantage of those simulations is the fundamental burden of computation time. This manuscript describes three different techniques to improve the efficiency of those simulations. Firstly, the implementation of variance reduction techniques (VRTs), more specifically the incorporation of geometrical importance sampling, is discussed. After this, the newly designed cluster version of the GATE software is described. The experiments have shown that GATE simulations scale very well on a cluster of homogeneous computers. Finally, an elaboration on the deployment of GATE on the Enabling Grids for E-Science in Europe (EGEE) grid concludes the description of efficiency enhancement efforts. The three aforementioned methods improve the efficiency of GATE to a large extent and make realistic patient-specific overnight Monte Carlo simulations achievable

  1. On the Equivalence of Formal Grammars and Machines.

    Science.gov (United States)

    Lund, Bruce

    1991-01-01

    Explores concepts of formal language and automata theory underlying computational linguistics. A computational formalism is described known as a "logic grammar," with which computational systems process linguistic data, with examples in declarative and procedural semantics and definite clause grammars. (13 references) (CB)

  2. A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.

    Science.gov (United States)

    Grando, M Adela; Glasspool, David; Fox, John

    2012-01-01

    To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages based on a standard set of workflow patterns expressed as Petri nets (PNs) and notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN-based pattern P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results from a comparable, previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Picture languages formal models for picture recognition

    CERN Document Server

    Rosenfeld, Azriel

    1979-01-01

    Computer Science and Applied Mathematics: Picture Languages: Formal Models for Picture Recognition treats pictorial pattern recognition from the formal standpoint of automata theory. This book emphasizes the capabilities and relative efficiencies of two types of automata (array automata and cellular array automata) with respect to various array recognition tasks. The array automata are simple processors that perform sequences of operations on arrays, while the cellular array automata are arrays of processors that operate on pictures in a highly parallel fashion, one processor per picture element. This compilation also reviews a collection of results on two-dimensional sequential and parallel array acceptors. Some of the analogous one-dimensional results and array grammars and their relation to acceptors are likewise covered in this text. This publication is suitable for researchers, professionals, and specialists interested in pattern recognition and automata theory.

  4. An approach of requirements tracing in formal refinement

    DEFF Research Database (Denmark)

    Jastram, Michael; Hallerstede, Stefan; Leuschel, Michael

    2010-01-01

    Formal modeling of computing systems yields models that are intended to be correct with respect to the requirements that have been formalized. The complexity of typical computing systems can be addressed by formal refinement introducing all the necessary details piecemeal. We report on preliminar...... changes, making use of corresponding techniques already built into the Event-B method....

  5. Balancing creativity and time efficiency in multi-team R&D projects: The alignment of formal and informal networks

    DEFF Research Database (Denmark)

    Kratzer, Jan; Gemuenden, Hans Georg; Lettl, Christopher

    2008-01-01

    and their effect on the challenge to balance project creativity and time efficiency. In order to analyse this issue, data from two multi-team R&D projects in the space industry were collected. There are two intriguing findings that partly contradict state-of-the-art knowledge. First, formally ascribed design... with the team's creativity, whereas it negatively impacts the team's time efficiency.

  6. Efficient Resource Management in Cloud Computing

    OpenAIRE

    Rushikesh Shingade; Amit Patil; Shivam Suryawanshi; M. Venkatesan

    2015-01-01

    Cloud computing is one of the widely used technologies to provide cloud services for users, who are charged for the services they receive. Given the large number of resources involved, evaluating the performance of Cloud resource management policies is difficult to do efficiently. There are different simulation toolkits available for simulating and modelling the Cloud computing environment, such as GridSim, CloudAnalyst, CloudSim, GreenCloud, CloudAuction, etc. In the proposed Efficient Resource Manage...

  7. Maintaining formal models of living guidelines efficiently

    NARCIS (Netherlands)

    Seyfang, Andreas; Martínez-Salvador, Begoña; Serban, Radu; Wittenberg, Jolanda; Miksch, Silvia; Marcos, Mar; Ten Teije, Annette; Rosenbrand, Kitty C J G M

    2007-01-01

    Translating clinical guidelines into formal models is beneficial in many ways, but expensive. The progress in medical knowledge requires clinical guidelines to be updated at relatively short intervals, leading to the term living guideline. This causes potentially expensive, frequent updates of the

  8. A Formalization of Linkage Analysis

    DEFF Research Database (Denmark)

    Ingolfsdottir, Anna; Christensen, A.I.; Hansen, Jens A.

    In this report a formalization of genetic linkage analysis is introduced. Linkage analysis is a computationally hard biomathematical method, whose purpose is to locate genes on the human genome. It is rooted in the new area of bioinformatics and no formalization of the method has previously been ...

  9. Computation of the efficiency distribution of a multichannel focusing collimator

    International Nuclear Information System (INIS)

    Balasubramanian, A.; Venkateswaran, T.V.

    1977-01-01

    This article describes two computer methods of calculating the point source efficiency distribution functions of a focusing collimator with round tapered holes. The first method, which computes only the geometric efficiency distribution, is adequate for low energy collimators, while the second method, which computes both geometric and penetration efficiencies, can be made use of for medium and high energy collimators. The scatter contribution to the efficiency is not taken into account. In the first method the efficiency distribution of a single cone of the collimator is obtained and the data are used for computing the distribution of the whole collimator. For high energy collimators the entire detector region is imagined to be divided into elemental areas. The efficiency of an elemental area is computed after suitably weighting for the penetration within the collimator septa, which is determined by three dimensional geometric techniques. The method of computing the line source efficiency distribution from the point source distribution is also explained. The formulations have been tested by computing the efficiency distribution of several commercial collimators and collimators fabricated by us. (Auth.)

  10. Efficient Multi-Party Computation over Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Fehr, Serge; Ishai, Yuval

    2003-01-01

    Secure multi-party computation (MPC) is an active research area, and a wide range of literature can be found nowadays suggesting improvements and generalizations of existing protocols in various directions. However, all current techniques for secure MPC apply to functions that are represented by (boolean or arithmetic) circuits over finite fields. We are motivated by two limitations of these techniques: – Generality. Existing protocols do not apply to computation over more general algebraic structures (except via a brute-force simulation of computation in these structures). – Efficiency. The best ... the usefulness of the above results by presenting a novel application of MPC over (non-field) rings to the round-efficient secure computation of the maximum function. Basic Research in Computer Science (www.brics.dk), funded by the Danish National Research Foundation.
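
    The simplest primitive that already works over a non-field ring such as Z_{2^32} is additive secret sharing, under which addition of shared values is local to each party. A minimal sketch of that primitive alone (not the protocols of the paper):

        import secrets

        # Additive secret sharing over the ring Z_{2^32}, a non-field structure.
        # Addition of shared secrets needs no interaction: each party adds locally.
        MOD = 2**32

        def share(x, n=3):
            parts = [secrets.randbelow(MOD) for _ in range(n - 1)]
            parts.append((x - sum(parts)) % MOD)   # shares sum to x mod 2^32
            return parts

        def reconstruct(parts):
            return sum(parts) % MOD

        a, b = 1234567, 7654321
        sa, sb = share(a), share(b)
        sc = [(x + y) % MOD for x, y in zip(sa, sb)]   # local addition per party
        assert reconstruct(sc) == (a + b) % MOD
        print(reconstruct(sc))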

  11. Leibniz' First Formalization of Syllogistics

    DEFF Research Database (Denmark)

    Robering, Klaus

    2014-01-01

    of letters just those which belong to the useful, i.e., valid, modes. The set of codes of valid modes turns out to be a so-called "regular" language (in the sense of formal language theory). Leibniz' formalization of syllogistics in his Dissertatio thus contains an estimation of the computational complexity...

  12. Efficient quantum computing with weak measurements

    International Nuclear Information System (INIS)

    Lund, A P

    2011-01-01

    Projective measurements with high quantum efficiency are often assumed to be required for efficient circuit-based quantum computing. We argue that this is not the case and show that the fact that they are not required was actually known previously but was not deeply explored. We examine this issue by giving an example of how to perform the quantum order-finding algorithm efficiently using non-local weak measurements, considering that the measurements used are of bounded weakness and that some fixed but arbitrary probability of success less than unity is required. We also show that it is possible to perform the same computation with only local weak measurements, but this must necessarily introduce an exponential overhead.

  13. Computational efficiency for the surface renewal method

    Science.gov (United States)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications that demonstrate simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. Increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
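
    The flavor of such speedups is visible already in the structure functions on which SR calibration rests: computing all differences at a lag with one array slice replaces a sample-by-sample loop (a hedged sketch of the vectorization idea only, not the paper's exact algorithms; the data below are synthetic):

        import numpy as np

        # n-th order structure function S_n(r) of a high-frequency scalar series,
        # computed with array slicing instead of an explicit loop over samples.
        def structure_function(x, lag, order):
            d = x[lag:] - x[:-lag]          # all differences at this lag at once
            return np.mean(d**order)

        rng = np.random.default_rng(2)
        x = np.cumsum(rng.normal(size=36000))   # toy 1-hour record at 10 Hz
        for r in (1, 5, 10):                    # lags in samples (0.1 s each)
            print(r, structure_function(x, r, 2), structure_function(x, r, 3))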

  14. Y-formalism and b ghost in the non-minimal pure spinor formalism of superstrings

    International Nuclear Information System (INIS)

    Oda, Ichiro; Tonin, Mario

    2007-01-01

    We present the Y-formalism for the non-minimal pure spinor quantization of superstrings. In the framework of this formalism we compute, at the quantum level, the explicit form of the compound operators involved in the construction of the b ghost, their normal-ordering contributions and the relevant relations among them. We use these results to construct the quantum-mechanical b ghost in the non-minimal pure spinor formalism. Moreover we show that this non-minimal b ghost is cohomologically equivalent to the non-covariant b ghost

  15. Fundamentals of the Pure Spinor Formalism

    CERN Document Server

    Hoogeveen, Joost

    2010-01-01

    This thesis presents recent developments within the pure spinor formalism, which has simplified amplitude computations in perturbative string theory, especially when spacetime fermions are involved. Firstly, the worldsheet action of both the minimal and the non-minimal pure spinor formalism is derived from first principles, i.e. from an action with two-dimensional diffeomorphism and Weyl invariance. Secondly, the decoupling of unphysical states in the minimal pure spinor formalism is proved.

  16. Formal Methods for Life-Critical Software

    Science.gov (United States)

    Butler, Ricky W.; Johnson, Sally C.

    1993-01-01

    The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.

  17. The Efficiency of Requesting Process for Formal Business-Documents in Indonesia: An Implementation of Web Application Base on Secure and Encrypted Sharing Process

    Directory of Open Access Journals (Sweden)

    Aris Budi Setyawan

    2014-12-01

    In recent business practice, formal documents for business, such as business license documents, business domicile letters, halal certificates, and other formal documents, are desperately needed, and obtaining them poses its own problems for businesses, especially for small and medium enterprises. The one-stop service unit that was conceived and implemented by the government has not yet been fully integrated. Not all permits (related to formal documents for business) can be completed and finished in one place; businesses still have to move from one government department to another to get a formal document for their business. With these practices, not only are a lot of time and cost sacrificed, but businesses must also repeatedly fill out forms with the same fields. This study aims to assess and identify the problem, especially in applying for formal business documents, and to use it as input for the development of a web application based on a secure and encrypted sharing process. The study starts with a survey of 200 businesses that have applied for formal documents for their business, to map the initial conditions of applying for formal business documents in Indonesia. With an application built on these needs, it is expected that not only will the time, cost, and physical effort of both parties become more efficient, but the negative practices of bureaucratic and economic obstacles in business activities can also be minimized, so that the competitiveness of businesses and their contribution to the national economy will increase. Keywords: Formal documents, Efficiencies, Web application, Secure and encrypted sharing process, SMEs

  18. Y-formalism and curved β-γ systems

    Energy Technology Data Exchange (ETDEWEB)

    Grassi, Pietro Antonio [DISTA, Universita del Piemonte Orientale, via Bellini 25/g, 15100 Alessandria (Italy); INFN - Sezione di Torino (Italy)], E-mail: antonio.pietro.grassi@cern.ch; Oda, Ichiro [Department of Physics, Faculty of Science, University of the Ryukyus, Nishihara, Okinawa 903-0213 (Japan); Tonin, Mario [Dipartimento di Fisica, Universita degli Studi di Padova, INFN, Sezione di Padova, Via F. Marzolo 8, 35131 Padova (Italy)

    2009-01-01

    We adopt the Y-formalism to study β-γ systems on hypersurfaces. We compute the operator product expansions of gauge-invariant currents and we discuss some applications of the Y-formalism to models on Calabi-Yau spaces.

  19. Y-formalism and curved β-γ systems

    International Nuclear Information System (INIS)

    Grassi, Pietro Antonio; Oda, Ichiro; Tonin, Mario

    2009-01-01

    We adopt the Y-formalism to study β-γ systems on hypersurfaces. We compute the operator product expansions of gauge-invariant currents and we discuss some applications of the Y-formalism to models on Calabi-Yau spaces

  20. Software Components and Formal Methods from a Computational Viewpoint

    OpenAIRE

    Lambertz, Christian

    2012-01-01

    Software components and the methodology of component-based development offer a promising approach to master the design complexity of huge software products because they separate the concerns of software architecture from individual component behavior and allow for reusability of components. In combination with formal methods, the specification of a formal component model of the later software product or system allows for establishing and verifying important system properties in an automatic a...

  1. Formal Modeling and Verification of the EE-OLSR Protocol with UPPAAL CORA

    Directory of Open Access Journals (Sweden)

    Rachmat Wahid Saleh Insani

    2016-01-01

    Protocol verification is generally done by simulation and testing. However, these processes are unable to verify that there are no subtle errors or design flaws in a protocol. Model checking is an algorithmic method that runs fully automatically to verify a system. UPPAAL is a model checker tool to model, verify, and simulate a system as Timed Automata. UPPAAL CORA is a model checker tool to verify the EE-OLSR protocol, modelled as Linearly Priced Timed Automata, against the energy-efficiency property formulated in a formal specification language with Weighted Computation Tree Logic syntax. Model checking shows that the protocol satisfies the energy-efficiency property only when packet transmission traffic occurs.

  2. Formalization of common power and efficiency definitions for energy-converting intracellular biochemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Santillan, M.; Angulo-Brown, F.; Chavoya-Aceves, O. [Instituto Politecnico Nacional, Mexico, D. F. (Mexico)

    2001-04-01

    The definitions of power and efficiency for energy-converting intracellular biochemical processes introduced by Caplan and Essig are studied. These definitions are recovered in the present work with the formalism of De Groot and Mazur for First-Order Irreversible Thermodynamics, rather than the formalism of Prigogine, as done by Caplan and Essig. The approach employed here permits keeping track of all the assumptions in a clearer manner, and gets rid of a very strong restriction in the approach of Caplan and Essig, which assumes that the chemical potentials are homogeneous inside the cell.

  3. A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    JongBeom Lim

    2018-01-01

    Full Text Available Many artificial intelligence applications often require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee the service level agreement, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized in the context of artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm to run interconnected multiple nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications, in which a large number of computing nodes are running. We reveal that our distributed snapshot protocol guarantees the correctness, safety, and liveness conditions.
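
    The consistent-snapshot idea that such protocols build on is the classic Chandy-Lamport marker rule; a minimal Python sketch of that rule follows (class and message names are illustrative, not taken from the paper, and the paper's scalability machinery is omitted):

      class Node:
          def __init__(self, name, state, in_channels):
              self.name = name
              self.state = state                  # running local state
              self.recorded_state = None          # local state at snapshot time
              self.recording = {}                 # channel -> recorded in-transit messages
              self.in_channels = list(in_channels)

          def start_snapshot(self, send_marker):
              # Initiator: record own state, record on every incoming channel,
              # and send a marker on every outgoing channel.
              self.recorded_state = self.state
              self.recording = {c: [] for c in self.in_channels}
              send_marker(self.name)

          def on_message(self, channel, msg, send_marker):
              if msg == "MARKER":
                  if self.recorded_state is None:
                      # First marker seen: record state, start recording the
                      # other channels, and forward markers downstream.
                      self.recorded_state = self.state
                      self.recording = {c: [] for c in self.in_channels if c != channel}
                      send_marker(self.name)
                  else:
                      # A later marker closes the snapshot of this channel.
                      self.recording.pop(channel, None)
              elif channel in self.recording:
                  # Ordinary message caught in transit: it belongs to the
                  # channel's snapshot.
                  self.recording[channel].append(msg)

    The global snapshot is then the union of all recorded node states plus the recorded in-transit channel messages.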

  4. Efficient computation of Laguerre polynomials

    NARCIS (Netherlands)

    A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)

    2017-01-01

    An efficient algorithm and a Fortran 90 module (LaguerrePol) for computing Laguerre polynomials L_n^(α)(z) are presented. The standard three-term recurrence relation satisfied by the polynomials and different types of asymptotic expansions, valid for n large and α small, are used.
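
    The three-term recurrence mentioned above is standard; a minimal Python sketch (the cited Fortran 90 module additionally switches to asymptotic expansions in the appropriate parameter regimes, which is omitted here):

      def laguerre(n, alpha, z):
          """Evaluate the generalized Laguerre polynomial L_n^(alpha)(z) by the
          standard three-term recurrence (no asymptotic expansions)."""
          if n == 0:
              return 1.0
          prev, curr = 1.0, 1.0 + alpha - z   # L_0 and L_1
          for k in range(1, n):
              # (k+1) L_{k+1} = (2k+1+alpha-z) L_k - (k+alpha) L_{k-1}
              prev, curr = curr, ((2*k + 1 + alpha - z) * curr - (k + alpha) * prev) / (k + 1)
          return curr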

  5. Efficient GPU-based skyline computation

    DEFF Research Database (Denmark)

    Bøgh, Kenneth Sejdenfaden; Assent, Ira; Magnani, Matteo

    2013-01-01

    The skyline operator for multi-criteria search returns the most interesting points of a data set with respect to any monotone preference function. Existing work has almost exclusively focused on efficiently computing skylines on one or more CPUs, ignoring the high parallelism possible in GPUs. In...

  6. Efficient and anonymous two-factor user authentication in wireless sensor networks: achieving user anonymity with lightweight sensor computation.

    Science.gov (United States)

    Nam, Junghyun; Choo, Kim-Kwang Raymond; Han, Sangchul; Kim, Moonseong; Paik, Juryon; Won, Dongho

    2015-01-01

    A smart-card-based user authentication scheme for wireless sensor networks (hereafter referred to as a SCA-WSN scheme) is designed to ensure that only users who possess both a smart card and the corresponding password are allowed to gain access to sensor data and their transmissions. Despite many research efforts in recent years, it remains a challenging task to design an efficient SCA-WSN scheme that achieves user anonymity. The majority of published SCA-WSN schemes use only lightweight cryptographic techniques (rather than public-key cryptographic techniques) for the sake of efficiency, and have been demonstrated to suffer from the inability to provide user anonymity. Some schemes employ elliptic curve cryptography for better security but require sensors with strict resource constraints to perform computationally expensive scalar-point multiplications; despite the increased computational requirements, these schemes do not provide user anonymity. In this paper, we present a new SCA-WSN scheme that not only achieves user anonymity but also is efficient in terms of the computation loads for sensors. Our scheme employs elliptic curve cryptography but restricts its use only to anonymous user-to-gateway authentication, thereby allowing sensors to perform only lightweight cryptographic operations. Our scheme also enjoys provable security in a formal model extended from the widely accepted Bellare-Pointcheval-Rogaway (2000) model to capture the user anonymity property and various SCA-WSN specific attacks (e.g., stolen smart card attacks, node capture attacks, privileged insider attacks, and stolen verifier attacks).

  7. Efficient and anonymous two-factor user authentication in wireless sensor networks: achieving user anonymity with lightweight sensor computation.

    Directory of Open Access Journals (Sweden)

    Junghyun Nam

    Full Text Available A smart-card-based user authentication scheme for wireless sensor networks (hereafter referred to as a SCA-WSN scheme) is designed to ensure that only users who possess both a smart card and the corresponding password are allowed to gain access to sensor data and their transmissions. Despite many research efforts in recent years, it remains a challenging task to design an efficient SCA-WSN scheme that achieves user anonymity. The majority of published SCA-WSN schemes use only lightweight cryptographic techniques (rather than public-key cryptographic techniques) for the sake of efficiency, and have been demonstrated to suffer from the inability to provide user anonymity. Some schemes employ elliptic curve cryptography for better security but require sensors with strict resource constraints to perform computationally expensive scalar-point multiplications; despite the increased computational requirements, these schemes do not provide user anonymity. In this paper, we present a new SCA-WSN scheme that not only achieves user anonymity but also is efficient in terms of the computation loads for sensors. Our scheme employs elliptic curve cryptography but restricts its use only to anonymous user-to-gateway authentication, thereby allowing sensors to perform only lightweight cryptographic operations. Our scheme also enjoys provable security in a formal model extended from the widely accepted Bellare-Pointcheval-Rogaway (2000) model to capture the user anonymity property and various SCA-WSN specific attacks (e.g., stolen smart card attacks, node capture attacks, privileged insider attacks, and stolen verifier attacks).

  8. A brief overview of NASA Langley's research program in formal methods

    Science.gov (United States)

    1992-01-01

    An overview of NASA Langley's research program in formal methods is presented. The major goal of this work is to bring formal methods technology to a sufficiently mature level for use by the United States aerospace industry. Towards this goal, work is underway to design and formally verify a fault-tolerant computing platform suitable for advanced flight control applications. Also, several direct technology transfer efforts have been initiated that apply formal methods to critical subsystems of real aerospace computer systems. The research team consists of six NASA civil servants and contractors from Boeing Military Aircraft Company, Computational Logic Inc., Odyssey Research Associates, SRI International, University of California at Davis, and Vigyan Inc.

  9. Efficient Secure Multiparty Subset Computation

    Directory of Open Access Journals (Sweden)

    Sufang Zhou

    2017-01-01

    Full Text Available The secure subset problem is important in secure multiparty computation, which is a vital field in cryptography. Most of the existing protocols for this problem can only keep the elements of one set private, while leaking the elements of the other set. In other words, they cannot solve the secure subset problem perfectly. While a few studies have addressed actual secure subsets, these protocols were mainly based on oblivious polynomial evaluations with inefficient computation. In this study, we first design an efficient secure subset protocol for sets whose elements are drawn from a known set, based on a new encoding method and a homomorphic encryption scheme. If the elements of the sets are taken from a large domain, this first protocol is inefficient. Using a Bloom filter and a homomorphic encryption scheme, we further present an efficient protocol with computational complexity linear in the cardinality of the large set, which is considered practical for inputs consisting of a large number of data items. However, the second protocol that we design may yield a false positive. This probability can be rapidly decreased by re-executing the protocol with different hash functions. Furthermore, we present experimental performance analyses of these protocols.
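
    To illustrate the Bloom-filter component of the second protocol, here is a plain (unencrypted) Bloom filter in Python; the homomorphic-encryption layer that provides privacy is omitted, and the parameters are illustrative:

      import hashlib

      class BloomFilter:
          """Plain Bloom filter: one-sided error (false positives only)."""
          def __init__(self, m, k):
              self.m, self.k = m, k       # m bits, k hash functions
              self.bits = [0] * m

          def _positions(self, item):
              # Derive k hash positions by prefixing a per-hash index.
              for i in range(self.k):
                  h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                  yield int(h, 16) % self.m

          def add(self, item):
              for p in self._positions(item):
                  self.bits[p] = 1

          def might_contain(self, item):
              return all(self.bits[p] for p in self._positions(item))

      bf = BloomFilter(m=1024, k=5)
      for x in {3, 14, 159}:
          bf.add(x)
      # Subset test: may wrongly answer True, never wrongly False.
      print(all(bf.might_contain(x) for x in {3, 14}))  # True

    As the abstract notes, the false-positive probability shrinks when the test is re-run with independently chosen hash functions (here, different index prefixes).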

  10. Formalization of Many-Valued Logics

    DEFF Research Database (Denmark)

    Villadsen, Jørgen; Schlichtkrull, Anders

    2017-01-01

    Partiality is a key challenge for computational approaches to artificial intelligence in general and natural language in particular. Various extensions of classical two-valued logic to many-valued logics have been investigated in order to meet this challenge. We use the proof assistant Isabelle to formalize the syntax and semantics of many-valued logics with determinate as well as indeterminate truth values. The formalization allows for a concise presentation and makes automated verification possible.

  11. Helicity formalism and spin effects

    International Nuclear Information System (INIS)

    Anselmino, M.; Caruso, F.; Piovano, U.

    1990-01-01

    The helicity formalism and the technique to compute amplitudes for interaction processes involving leptons, quarks, photons and gluons are reviewed. Explicit calculations and examples of the exploitation of symmetry properties are shown. The formalism is then applied to the discussion of several hadronic processes and spin effects: when related to the properties of the elementary constituent interactions, the experimental data show many features that are not yet understood. The nucleon spin problem is also briefly reviewed. (author)

  12. A Formal Model of Trust Chain based on Multi-level Security Policy

    OpenAIRE

    Kong Xiangying

    2013-01-01

    Trust chain is the core technology of trusted computing. A formal model of the trust chain based on finite state automata theory is proposed. We use communicating sequential processes to describe the system state transitions in the trust chain and, combining this with a multi-level security strategy, give a definition of the trust system and a trust decision theorem for trust chain transfer, which is also proved. Finally, a prototype system is given to show the efficiency of the model.

  13. Generalized Bondi-Sachs equations for characteristic formalism of numerical relativity

    Science.gov (United States)

    Cao, Zhoujian; He, Xiaokai

    2013-11-01

    The Cauchy formalism of numerical relativity has been successfully applied to simulate various dynamical spacetimes without any symmetry assumption. But how to set a mathematically consistent and physically realistic boundary condition is still an open problem for the Cauchy formalism. In addition, numerical truncation error and the finite-region ambiguity affect the accuracy of gravitational waveform calculation. As to the finite-region ambiguity, the characteristic extraction method helps much, but it does not solve all of the above issues. Besides these problems of the Cauchy formalism, computational efficiency is another concern. Although the characteristic formalism of numerical relativity suffers from the difficulty of caustics in the inner near zone, it has advantages with respect to all of the issues listed above. Cauchy-characteristic matching (CCM) is a possible way to take advantage of the characteristic formalism regarding these issues and to treat the inner caustics at the same time. However, CCM has difficulty treating the gauge difference between the Cauchy part and the characteristic part. We propose generalized Bondi-Sachs equations for the characteristic formalism to serve the Cauchy-characteristic matching end. Our proposal admits the same numerical evolution scheme for both the Cauchy part and the characteristic part. Moreover, our generalized Bondi-Sachs equations have one adjustable gauge freedom, which can be used to match the gauge used in the Cauchy part, so that the Cauchy part and the characteristic part share a consistent gauge condition. Our proposal thus gives a possible new starting point for Cauchy-characteristic matching.

  14. Weyl-van-der-Waerden formalism for helicity amplitudes of massive particles

    CERN Document Server

    Dittmaier, Stefan

    1999-01-01

    The Weyl-van-der-Waerden spinor technique for calculating helicity amplitudes of massive and massless particles is presented in a form that is particularly well suited to a direct implementation in computer algebra. Moreover, we explain how to exploit discrete symmetries and how to avoid unphysical poles in amplitudes in practice. The efficiency of the formalism is demonstrated by giving explicit compact results for the helicity amplitudes of the processes gamma gamma -> f fbar, f fbar -> gamma gamma gamma, mu^- mu^+ -> f fbar gamma.

  15. A computationally efficient approach for template matching-based ...

    Indian Academy of Sciences (India)

    In this paper, a new computationally efficient image registration method is ... the proposed method requires less computational time as compared to traditional methods. ... Zitová B and Flusser J 2003 Image registration methods: A survey.

  16. Formalization of Medical Guidelines

    Czech Academy of Sciences Publication Activity Database

    Peleška, Jan; Anger, Z.; Buchtela, David; Šebesta, K.; Tomečková, Marie; Veselý, Arnošt; Zvára, K.; Zvárová, Jana

    2005-01-01

    Roč. 1, - (2005), s. 133-141 ISSN 1801-5603 R&D Projects: GA AV ČR 1ET200300413 Institutional research plan: CEZ:AV0Z10300504 Keywords : GLIF model * formalization of guidelines * prevention of cardiovascular diseases Subject RIV: IN - Informatics, Computer Science

  17. A formalization of the Berlekamp-Zassenhaus factorization algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2017-01-01

    We formalize the Berlekamp–Zassenhaus algorithm for factoring square-free integer polynomials in Isabelle/HOL. We further adapt an existing formalization of Yun’s square-free factorization algorithm to integer polynomials, and thus provide an efficient and certified factorization algorithm for integer polynomials.

  18. Formal Methods and Safety Certification: Challenges in the Railways Domain

    DEFF Research Database (Denmark)

    Fantechi, Alessandro; Ferrari, Alessio; Gnesi, Stefania

    2016-01-01

    The railway signalling sector has historically been a source of success stories about the adoption of formal methods in the certification of software safety of computer-based control equipment.

  19. Machine Learning-based Intelligent Formal Reasoning and Proving System

    Science.gov (United States)

    Chen, Shengqing; Huang, Xiaojian; Fang, Jiaze; Liang, Jia

    2018-03-01

    Reasoning systems can be used in many fields, and improving reasoning efficiency is the core of system design. Through a formal description of formal proofs and a regular matching algorithm, and with the introduction of a machine learning algorithm, the proposed intelligent formal reasoning and verification system achieves high efficiency. The experimental results show that the system can verify the correctness of propositional logic reasoning and reuse propositional reasoning results, so as to obtain the implicit knowledge in the knowledge base and provide a basic reasoning model for the construction of intelligent systems.

  20. Convolutional networks for fast, energy-efficient neuromorphic computing.

    Science.gov (United States)

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  1. Pure spinor formalism as an N = 2 topological string

    International Nuclear Information System (INIS)

    Berkovits, Nathan

    2005-01-01

    Following suggestions of Nekrasov and Siegel, a non-minimal set of fields is added to the pure spinor formalism for the superstring. Twisted ĉ = 3, N = 2 generators are then constructed, where the pure spinor BRST operator is the fermionic spin-one generator, and the formalism is interpreted as a critical topological string. Three applications of this topological string theory include the super-Poincaré covariant computation of multiloop superstring amplitudes without picture-changing operators, the construction of a cubic open superstring field theory without contact-term problems, and a new four-dimensional version of the pure spinor formalism which computes F-terms in the spacetime action.

  2. Modeling of requirement specification for safety critical real time computer system using formal mathematical specifications

    International Nuclear Information System (INIS)

    Sankar, Bindu; Sasidhar Rao, B.; Ilango Sambasivam, S.; Swaminathan, P.

    2002-01-01

    Full text: Real time computer systems are increasingly used for safety critical supervision and control of nuclear reactors. Typical application areas are supervision of the reactor core against coolant flow blockage, supervision of clad hot spots, supervision of undesirable power excursions, power control, and control logic for fuel handling systems. The most frequent cause of faults in safety critical real time computer systems is traced to fuzziness in the requirement specification. To ensure the specified safety, it is necessary to model the requirement specification of safety critical real time computer systems using formal mathematical methods. Modeling eliminates the fuzziness in the requirement specification and also helps to prepare the verification and validation schemes. Test data can be easily designed from the model of the requirement specification. Z and B are popular languages used for modeling requirement specifications. A typical safety critical real time computer system for supervising the reactor core of the prototype fast breeder reactor (PFBR) against flow blockage is taken as a case study. Modeling techniques and the actual model are explained in detail. The advantages of modeling for ensuring safety are summarized.

  3. Formal Analysis Of Use Case Diagrams

    Directory of Open Access Journals (Sweden)

    Radosław Klimek

    2010-01-01

    Full Text Available Use case diagrams play an important role in modeling with UML. Careful modeling is crucial in obtaining a correct and efficient system architecture. The paper refers to the formal analysis of use case diagrams. A formal model of use cases is proposed and its construction for typical relationships between use cases is described. Two methods of formal analysis and verification are presented. The first one, based on states' exploration, represents a model checking approach. The second one refers to symbolic reasoning using formal methods of temporal logic. A simple but representative example of use case scenario verification is discussed.

  4. Arx: a toolset for the efficient simulation and direct synthesis of high-performance signal processing algorithms

    NARCIS (Netherlands)

    Hofstra, K.L.; Gerez, Sabih H.

    2007-01-01

    This paper addresses the efficient implementation of highperformance signal-processing algorithms. In early stages of such designs many computation-intensive simulations may be necessary. This calls for hardware description formalisms targeted for efficient simulation (such as the programming

  5. Formalizing physical security procedures

    NARCIS (Netherlands)

    Meadows, C.; Pavlovic, Dusko

    Although the problems of physical security emerged more than 10,000 years before the problems of computer security, no formal methods have been developed for them, and the solutions have been evolving slowly, mostly through social procedures. But as the traffic on physical and social networks is now

  6. Turchin's Relation for Call-by-Name Computations: A Formal Approach

    OpenAIRE

    Antonina Nepeivoda

    2016-01-01

    Supercompilation is a program transformation technique that was first described by V. F. Turchin in the 1970s. In supercompilation, Turchin's relation as a similarity relation on call-stack configurations is used both for call-by-value and call-by-name semantics to terminate unfolding of the program being transformed. In this paper, we give a formal grammar model of call-by-name stack behaviour. We classify the model in terms of the Chomsky hierarchy and then formally prove that Turchin's rel...

  7. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.

  8. Convolutional networks for fast, energy-efficient neuromorphic computing

    Science.gov (United States)

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  9. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos

    2008-01-01

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these

  10. Traditional and formal education: Means of improving grasscutter ...

    African Journals Online (AJOL)

    The study concludes that both traditional and non-formal education are important for the development and efficiency of grasscutter farming in Ogun Waterside Local Government Area of Ogun State. The following are the recommendations of the study: revision of the curriculum of formal schools to include items that inculcate ...

  11. Conceptual graph grammar--a simple formalism for sublanguage.

    Science.gov (United States)

    Johnson, S B

    1998-11-01

    There are a wide variety of computer applications that deal with various aspects of medical language: concept representation, controlled vocabulary, natural language processing, and information retrieval. While technical and theoretical methods appear to differ, all approaches investigate different aspects of the same phenomenon: medical sublanguage. This paper surveys the properties of medical sublanguage from a formal perspective, based on detailed analyses cited in the literature. A review of several computer systems based on sublanguage approaches shows some of the difficulties in addressing the interaction between the syntactic and semantic aspects of sublanguage. A formalism called Conceptual Graph Grammar is presented that attempts to combine both syntax and semantics into a single notation by extending standard Conceptual Graph notation. Examples from the domain of pathology diagnoses are provided to illustrate the use of this formalism in medical language analysis. The strengths and weaknesses of the approach are then considered. Conceptual Graph Grammar is an attempt to synthesize the common properties of different approaches to sublanguage into a single formalism, and to begin to define a common foundation for language-related research in medical informatics.

  12. Computing with memory for energy-efficient robust systems

    CERN Document Server

    Paul, Somnath

    2013-01-01

    This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime. The authors describe the existing solutions to address these challenges and then reveal a new reconfigurable computing platform, which leverages high-density nanoscale memory for both data storage and computation to maximize the energy-efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges are discussed. Various hardware and software aspects of this exciting computing paradigm are described.

  13. Formality of the Chinese collective leadership.

    Science.gov (United States)

    Li, Haiying; Graesser, Arthur C

    2016-09-01

    We investigated the linguistic patterns in the discourse of four generations of the collective leadership of the Communist Party of China (CPC) from 1921 to 2012. The texts of Mao Zedong, Deng Xiaoping, Jiang Zemin, and Hu Jintao were analyzed using computational linguistic techniques (a Chinese formality score) to explore the persuasive linguistic features of the leaders in the contexts of power phase, the nation's education level, power duration, and age. The study was guided by the elaboration likelihood model of persuasion, which includes a central route (represented by formal discourse) versus a peripheral route (represented by informal discourse) to persuasion. The results revealed that these leaders adopted the formal, central route more when they were in power than before they came into power. The nation's education level was a significant factor in the leaders' adoption of the persuasion strategy. The leaders' formality also decreased with their increasing age and in-power times. However, the predictability of these factors for formality had subtle differences among the different types of leaders. These results enhance our understanding of the Chinese collective leadership and the role of formality in politically persuasive messages.
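
    A common formality measure of this kind is the Heylighen-Dewaele F-score over part-of-speech frequencies; the Python sketch below assumes that style of score as a stand-in, since the paper's Chinese formality score is not spelled out in the abstract:

      def f_score(pos_freq):
          """Heylighen-Dewaele style formality score from POS frequencies
          (percentages of all tagged words). Higher = more formal.
          Assumed stand-in for the paper's Chinese formality score."""
          formal = sum(pos_freq.get(t, 0.0) for t in
                       ("noun", "adjective", "preposition", "article"))
          deictic = sum(pos_freq.get(t, 0.0) for t in
                        ("pronoun", "verb", "adverb", "interjection"))
          return (formal - deictic + 100.0) / 2.0

      print(f_score({"noun": 30, "adjective": 10, "preposition": 12, "article": 8,
                     "pronoun": 6, "verb": 18, "adverb": 5, "interjection": 1}))  # 65.0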

  14. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    Science.gov (United States)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-Dimensional shape/texture optimal synthetic representation, description and learning, was presented in previous conferences elsewhere recently. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space and a demo application is presented. Progressive model automatic generation is discussed. GEOGINE can be used as an efficient computational kernel for fast reliable application development and delivery in advanced biomedical engineering, biometric, intelligent computing, target recognition, content image retrieval, data mining technological areas mainly. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world object. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-Dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. As a matter of fact, any highly sophisticated application needing effective, robust object geometric/colour invariant attribute capture and parameterization features, for reliable automated object learning and discrimination can deeply benefit from GEOGINE progressive automated model generation computational kernel performance. Main operational advantages over previous

  15. A new approach for formal behavioral modeling of protection services in antivirus systems

    OpenAIRE

    Norouzi, Monire; Parsa, Saeed; Mahjur, Ali

    2014-01-01

    Formal method techniques provide a suitable platform for software development in software systems. Formal methods and formal verification are also necessary to prove correctness and improve the performance of software systems at various levels of design and implementation. Security is an important issue in computer systems. Since antivirus applications play a very important role in computer system security, verifying these applications is essential and necessary. In thi...

  16. Efficiency using computer simulation of Reverse Threshold Model Theory on assessing a “One Laptop Per Child” computer versus desktop computer

    Directory of Open Access Journals (Sweden)

    Supat Faarungsang

    2017-04-01

    Full Text Available The Reverse Threshold Model Theory (RTMT) model was introduced based on limiting factor concepts, but its efficiency compared to the Conventional Model (CM) has not been published. This investigation assessed the efficiency of RTMT compared to CM using computer simulation on the “One Laptop Per Child” computer and a desktop computer. Based on probability values, it was found that RTMT was more efficient than CM among eight treatment combinations, and an earlier study verified that RTMT gives complete elimination of random error. Furthermore, RTMT has several advantages over CM and is therefore proposed to be applied to most research data.

  17. Efficient MATLAB computations with sparse and factored tensors.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
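
    A minimal Python sketch of the coordinate storage scheme described above; the class and its two operations are illustrative (the Tensor Toolbox implements far more):

      import numpy as np

      class CooTensor:
          """Sparse N-way tensor in coordinate format: 'subs' holds the index
          tuple of each nonzero entry, 'vals' its value."""
          def __init__(self, subs, vals, shape):
              self.subs = np.asarray(subs)   # nnz x N array of indices
              self.vals = np.asarray(vals)   # nnz values
              self.shape = shape

          def norm(self):
              # Frobenius norm touches only the nonzeros: O(nnz), not O(prod(shape)).
              return float(np.sqrt((self.vals ** 2).sum()))

          def scale(self, alpha):
              # Elementwise scaling preserves the sparsity pattern.
              return CooTensor(self.subs, alpha * self.vals, self.shape)

      T = CooTensor(subs=[(0, 1, 2), (3, 0, 1)], vals=[2.0, -1.0], shape=(4, 2, 3))
      print(T.norm())  # sqrt(5)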

  18. Formal specification is an experimental science

    Energy Technology Data Exchange (ETDEWEB)

    Bjorner, D. [Technical Univ., Lyngby (Denmark)

    1992-09-01

    Traditionally, abstract models of large, complex systems have been given in free-form mathematics, combining - often in ad-hoc, not formally supported ways - notions from the disciplines of partial differential equations, functional analysis, mathematical statistics, etc. Such models have been very useful for assimilation of information, analysis (investigation), and prediction (simulation). These models have, however, usually not been helpful in deriving computer representations of the modelled systems for the purposes of computerized monitoring and control. Computing science, concerned with how to construct objects that can exist within the computer, offers ways of complementing, and in some cases replacing or combining, traditional mathematical models. Formal, model- as well as property-oriented, specifications in the styles of denotational (respectively, algebraic) semantics represent major approaches to such modelling. In this expository, discursive paper we illustrate what we mean by model-oriented specifications of large, complex technological computing systems. The three modelling examples cover the introvert programming-methodological subject of SDEs: software development environments; the distributed computing system subject of wfs's: (transaction) work flow systems; and the extrovert subject of robots: robotics! The thesis is, just as for mathematical modelling, that we can derive much understanding, etc., from experimentally creating such formally specified models - on paper - and that we gain little in additionally building ad-hoc prototypes. Our models are expressed in a model-oriented style using the VDM specification language Meta-IV. In this paper the models only reflect the "data modelling" aspects. We observe that such data models are more easily captured in the model-oriented style than in the algebraic-semantics property-oriented style, which originally was built on the abstraction of operations. 101 refs., 4 figs.

  19. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    Science.gov (United States)

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  20. Generative Graph Grammar of Neo-Vaiśeṣika Formal Ontology (NVFO)

    Science.gov (United States)

    Tavva, Rajesh; Singh, Navjyoti

    NLP applications for Sanskrit so far work within the computational paradigm of string grammars. However, to compute 'meanings', as in traditional śābdabodha prakriyā-s, there is a need to develop suitable graph grammars. Ontological structures are fundamentally graphs. We work within the formal framework of Neo-Vaiśeṣika Formal Ontology (NVFO) to propose a generative graph grammar. The proposed formal grammar only produces well-formed graphs that can be readily interpreted in accordance with Vaiśeṣika ontology. We show that graphs not permitted by Vaiśeṣika ontology are not generated by the proposed grammar. Further, we write an interpreter of these graphical structures. This creates a computational environment which can be deployed for writing computational applications of Vaiśeṣika ontology. We illustrate how this environment can be used to create applications like computing śābdabodha of sentences.

  1. Towards a Formal Model of Social Data

    DEFF Research Database (Denmark)

    Mukkamala, Raghava Rao; Vatrapu, Ravi; Hussain, Abid

    ... transform, analyse, and report social data from social media platforms such as Facebook and Twitter. Formal methods, models and tools for social data are largely limited to graph theoretical approaches informing conceptual developments in relational sociology and methodological developments in social network analysis. As far as we know, there are no integrated modeling approaches to social data across the conceptual, formal and software realms. Social media analytics can be undertaken in two main ways - "Social Graph Analytics" and "Social Text Analytics" (Vatrapu, in press/2013). Social graph ..., we exemplify the semantics of the formal model with real-world social data examples. Third, we briefly present and discuss the Social Data Analytics Tool (SODATO) that realizes the conceptual model in software and provisions social data for computational social science analysis based on the formal...

  2. Formal Analysis of Graphical Security Models

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi

    ... software components and human actors interacting with each other to form so-called socio-technical systems. The importance of socio-technical systems to modern societies requires verifying their security properties formally, while their inherent complexity makes manual analyses impracticable. Graphical models for security offer an unrivalled opportunity to describe socio-technical systems, for they allow one to represent different aspects like human behaviour, computation and physical phenomena in an abstract yet uniform manner. Moreover, these models can be assigned a formal semantics, thereby allowing formal verification of their properties. Finally, their appealing graphical notations enable one to communicate security concerns in an understandable way also to non-experts, often in charge of the decision making. This dissertation argues that automated techniques can be developed on graphical security...

  3. Computationally Efficient Clustering of Audio-Visual Meeting Data

    Science.gov (United States)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.

  4. Formalized Linear Algebra over Elementary Divisor Rings in Coq

    OpenAIRE

    Cano, Guillaume; Cohen, Cyril; Dénès, Maxime; Mörtberg, Anders; Siles, Vincent

    2016-01-01

    This paper presents a Coq formalization of linear algebra over elementary divisor rings, that is, rings where every matrix is equivalent to a matrix in Smith normal form. The main results are the formalization that these rings support essential operations of linear algebra, the classification theorem of finitely presented modules over such rings and the uniqueness of the Smith normal form up to multiplication by units. We present formally verified algorithms computing...

  5. Symbolic computation of exact solutions expressible in rational formal hyperbolic and elliptic functions for nonlinear partial differential equations

    International Nuclear Information System (INIS)

    Wang Qi; Chen Yong

    2007-01-01

    With the aid of symbolic computation, some algorithms are presented for rational expansion methods, which lead to closed-form solutions of nonlinear partial differential equations (PDEs). New algorithms are given to find exact rational formal polynomial solutions of PDEs in terms of Jacobi elliptic functions, solutions of the Riccati equation, and solutions of the generalized Riccati equation. They can be implemented in the symbolic computation system Maple. As applications of the methods, we choose some nonlinear PDEs to illustrate them. As a result, we not only successfully obtain the solutions found by most existing Jacobi elliptic function methods and tanh-methods, but also find other new and more general solutions at the same time.
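
    As a small taste of the symbolic checks such algorithms automate, the following SymPy snippet (SymPy standing in for Maple) verifies a sech-squared closed-form solution of the KdV equation; the equation and solution are textbook examples, not taken from the paper:

      import sympy as sp

      # Verify that u = 2 k^2 sech^2(k (x - 4 k^2 t)) solves the KdV equation
      #   u_t + 6 u u_x + u_xxx = 0.
      x, t, k = sp.symbols("x t k", real=True)
      u = 2 * k**2 * sp.sech(k * (x - 4 * k**2 * t))**2

      residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
      print(sp.simplify(residual))  # expected: 0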

  6. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    Science.gov (United States)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used for testing the efficiency of selected strategies for allocating the computational resources of the cluster using a greater number of computational cores. Simulation results indicate that if the number of cores used is not a multiple of the total number of cores per cluster node, there are allocation strategies which provide more efficient calculations.

  7. A computationally efficient fuzzy control scheme

    Directory of Open Access Journals (Sweden)

    Abdel Badie Sharkawy

    2013-12-01

    Full Text Available This paper develops a decentralized fuzzy control scheme for MIMO nonlinear second order systems, with application to robot manipulators, via a combination of genetic algorithms (GAs) and fuzzy systems. The controller for each degree of freedom (DOF) consists of a feedforward fuzzy torque-computing system and a feedback fuzzy PD system. The feedforward fuzzy system is trained and optimized off-line using GAs, whereby not only the parameters but also the structure of the fuzzy system is optimized. The feedback fuzzy PD system, on the other hand, is used to keep the closed loop stable. The rule base consists of only four rules per DOF. Furthermore, the fuzzy feedback system is decentralized and simplified, leading to a computationally efficient control scheme. The proposed control scheme has the following advantages: (1) it needs no exact dynamics of the system and the computation is time-saving because of the simple structure of the fuzzy systems, and (2) the controller is robust against various parameter and payload uncertainties. The computational complexity of the proposed control scheme has been analyzed and compared with previous works. Computer simulations show that this controller is effective in achieving the control goals.
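
    A minimal four-rule fuzzy PD controller for a single DOF, in the spirit described above; the membership shapes, rule consequents, and gains below are invented for illustration and are not the paper's tuned design:

      import math

      def mu_pos(x, s=1.0):
          """Sigmoid membership degree of 'positive'; 'negative' is its complement."""
          return 1.0 / (1.0 + math.exp(-x / s))

      def fuzzy_pd(e, de, u_max=10.0):
          """Four-rule, Takagi-Sugeno style fuzzy PD for one DOF.
          Rules: (e pos, de pos) -> +u_max, (pos, neg) -> 0,
                 (neg, pos) -> 0,           (neg, neg) -> -u_max."""
          pe, pde = mu_pos(e), mu_pos(de)
          w = {(+1, +1): pe * pde,         (+1, -1): pe * (1 - pde),
               (-1, +1): (1 - pe) * pde,   (-1, -1): (1 - pe) * (1 - pde)}
          out = {(+1, +1): +u_max, (+1, -1): 0.0, (-1, +1): 0.0, (-1, -1): -u_max}
          # Weighted average of rule consequents (weights sum to 1 by construction).
          return sum(w[r] * out[r] for r in w) / sum(w.values())

      print(fuzzy_pd(e=2.0, de=-0.5))  # moderate positive control action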

  8. Representation, testing and assessment of the 'Estelle' formal description technique from a computer-controlled neutron scatter experiment

    International Nuclear Information System (INIS)

    Wolschke, U.

    1986-08-01

    Estelle is a formal description technique, developed on the basis of an extended state transition model, for the specification of communication protocols and services. Regardless of the field of application, there are problems common to all distributed systems, i.e. to communication systems as well as to process computer systems, which are to be specified. These include real time problems such as waiting for events, reacting to expected events occurring at the correct time, reacting to unexpected events or to events not occurring at the correct time, transmitting and receiving data, and synchronising processes that run concurrently. This work examines, using the example of a process computer-controlled neutron scattering experiment, whether Estelle is suitable for the specification of distributed real time systems in this field of application. (orig.)

  9. A formalism for the calculus of variations with spinors

    Energy Technology Data Exchange (ETDEWEB)

    Bäckdahl, Thomas, E-mail: thobac@chalmers.se [The School of Mathematics, University of Edinburgh, JCMB 6228, Peter Guthrie Tait Road, Edinburgh EH9 3FD, United Kingdom and Mathematical Sciences - Chalmers University of Technology and University of Gothenburg - SE-412 96 Gothenburg (Sweden); Valiente Kroon, Juan A., E-mail: j.a.valiente-kroon@qmul.ac.uk [School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London E1 4NS (United Kingdom)

    2016-02-15

    We develop a frame and dyad gauge-independent formalism for the calculus of variations of functionals involving spinorial objects. As a part of this formalism, we define a modified variation operator which absorbs frame and spin dyad gauge terms. This formalism is applicable to both the standard spacetime (i.e., SL(2, ℂ)) 2-spinors as well as to space (i.e., SU(2, ℂ)) 2-spinors. We compute expressions for the variations of the connection and the curvature spinors.

  10. A formalism for the calculus of variations with spinors

    International Nuclear Information System (INIS)

    Bäckdahl, Thomas; Valiente Kroon, Juan A.

    2016-01-01

    We develop a frame and dyad gauge-independent formalism for the calculus of variations of functionals involving spinorial objects. As a part of this formalism, we define a modified variation operator which absorbs frame and spin dyad gauge terms. This formalism is applicable to both the standard spacetime (i.e., SL(2, ℂ)) 2-spinors as well as to space (i.e., SU(2, ℂ)) 2-spinors. We compute expressions for the variations of the connection and the curvature spinors.

  11. Power-Efficient Computing: Experiences from the COSA Project

    Directory of Open Access Journals (Sweden)

    Daniele Cesini

    2017-01-01

    Full Text Available Energy consumption is today one of the most relevant issues in operating HPC systems for scientific applications. The use of unconventional computing systems is therefore of great interest for several scientific communities looking for a better tradeoff between time-to-solution and energy-to-solution. In this context, the performance assessment of processors with a high ratio of performance per watt is necessary to understand how to realize energy-efficient computing systems for scientific applications using this class of processors. Computing On SOC Architecture (COSA) is a three-year project (2015–2017) funded by the Scientific Commission V of the Italian Institute for Nuclear Physics (INFN), which aims to investigate the performance and the total cost of ownership offered by computing systems based on commodity low-power Systems on Chip (SoCs) and high energy-efficient systems based on GP-GPUs. In this work, we present the results of the project, analyzing the performance of several scientific applications on several GPU- and SoC-based systems. We also describe the methodology we have used to measure energy performance and the tools we have implemented to monitor the power drained by applications while running.

  12. A Formal Valuation Framework for Emotions and Their Control.

    Science.gov (United States)

    Huys, Quentin J M; Renz, Daniel

    2017-09-15

    Computational psychiatry aims to apply mathematical and computational techniques to help improve psychiatric care. To achieve this, the phenomena under scrutiny should be within the scope of formal methods. As emotions play an important role across many psychiatric disorders, such computational methods must encompass emotions. Here, we consider formal valuation accounts of emotions. We focus on the fact that the flexibility of emotional responses and the nature of appraisals suggest the need for a model-based valuation framework for emotions. However, resource limitations make plain model-based valuation impossible and require metareasoning strategies to apportion cognitive resources adaptively. We argue that emotions may implement such metareasoning approximations by restricting the range of behaviors and states considered. We consider the processes that guide the deployment of the approximations, discerning between innate, model-free, heuristic, and model-based controllers. A formal valuation and metareasoning framework may thus provide a principled approach to examining emotions. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  13. Towards formalization of inspection using petrinets

    International Nuclear Information System (INIS)

    Javed, M.; Naeem, M.; Bahadur, F.; Wahab, A.

    2014-01-01

    Achieving better quality software has always been a challenge for software developers. Inspection is one of the most efficient techniques for ensuring the quality of software during its development. To the best of our knowledge, current inspection techniques are not supported by any formal approach. In this paper, we propose an inspection technique which is not only backed by the formal mathematical semantics of Petri nets, but also supports inspecting concurrent processes. We also use a case study of an agent based distributed processing system to demonstrate the inspection of concurrent processes. (author)
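
    For reference, the Petri net semantics underlying such a technique amounts to a marking and a firing rule; a minimal Python sketch with an invented inspection-flavoured example:

      class PetriNet:
          def __init__(self, marking):
              self.marking = dict(marking)   # place -> token count
              self.transitions = {}          # name -> (consume, produce)

          def add_transition(self, name, consume, produce):
              self.transitions[name] = (consume, produce)

          def enabled(self, name):
              consume, _ = self.transitions[name]
              return all(self.marking.get(p, 0) >= n for p, n in consume.items())

          def fire(self, name):
              # Firing rule: consume input tokens, then produce output tokens.
              consume, produce = self.transitions[name]
              assert self.enabled(name), f"{name} not enabled"
              for p, n in consume.items():
                  self.marking[p] -= n
              for p, n in produce.items():
                  self.marking[p] = self.marking.get(p, 0) + n

      # Two reviewers, one artifact: firing models a reviewer starting work.
      net = PetriNet({"artifact_ready": 1, "reviewer_idle": 2})
      net.add_transition("start_review",
                         {"artifact_ready": 1, "reviewer_idle": 1},
                         {"under_review": 1})
      net.fire("start_review")
      print(net.marking)  # {'artifact_ready': 0, 'reviewer_idle': 1, 'under_review': 1}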

  14. Efficient quantum walk on a quantum processor

    Science.gov (United States)

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.

    2016-01-01

    The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor. PMID:27146471
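
    The classical counterpart of the sampling task is direct: build the circulant adjacency matrix, exponentiate, and read off the output probabilities. A small NumPy/SciPy sketch follows; this is the classically expensive route that the paper's quantum circuits are designed to beat, and the graph and evolution time are illustrative:

      import numpy as np
      from scipy.linalg import expm

      # Continuous-time quantum walk on a circulant graph: U(t) = exp(-i A t),
      # and the output distribution from start vertex j is |U(t)[:, j]|^2.
      n = 8
      first_row = np.zeros(n)
      first_row[1] = first_row[-1] = 1                    # cycle graph C_8
      A = np.array([np.roll(first_row, k) for k in range(n)])  # circulant adjacency

      U = expm(-1j * A * 1.0)             # evolution for time t = 1
      probs = np.abs(U[:, 0]) ** 2        # walk started at vertex 0
      print(probs.round(4), probs.sum())  # distribution sums to 1 (U is unitary)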

  15. Some tree-level string amplitudes in the NSR formalism

    International Nuclear Information System (INIS)

    Becker, Katrin; Becker, Melanie; Melnikov, Ilarion V.; Robbins, Daniel; Royston, Andrew B.

    2015-01-01

    We calculate tree level scattering amplitudes for open strings using the NSR formalism. We present a streamlined symmetry-based and pedagogical approach to the computations, which we first develop by checking two-, three-, and four-point functions involving bosons and fermions. We calculate the five-point amplitude for massless gluons and find agreement with an earlier result by Brandt, Machado and Medina. We then compute the five-point amplitudes involving two and four fermions respectively, the general form of which has not been previously obtained in the NSR formalism. The results nicely confirm expectations from the supersymmetric F^4 effective action. Finally we use the prescription of Kawai, Lewellen and Tye (KLT) to compute the amplitudes for the closed string sector.

  16. Towards a Formal Treatment of Implicit Invocation

    National Research Council Canada - National Science Library

    Dingel, J

    1997-01-01

    .... A formal computational model for implicit invocation is presented. We develop a verification framework for implicit invocation that is based on Jones' rely/guarantee reasoning for concurrent systems [Jon83, Stø91]...

  17. A Formal Model for Context-Awareness

    DEFF Research Database (Denmark)

    Kjærgaard, Mikkel Baun; Bunde-Pedersen, Jonathan

    There is a definite lack of formal support for modeling realistic context-awareness in pervasive computing applications. The Conawa calculus presented in this paper provides mechanisms for modeling complex and interwoven sets of context-information by extending ambient calculus with new construc...

  18. On efficiently computing multigroup multi-layer neutron reflection and transmission conditions

    International Nuclear Information System (INIS)

    Abreu, Marcos P. de

    2007-01-01

    In this article, we present an algorithm for efficient computation of multigroup discrete ordinates neutron reflection and transmission conditions, which replace a multi-layered boundary region in neutron multiplication eigenvalue computations with no spatial truncation error. In contrast to the independent layer-by-layer algorithm considered thus far in our computations, the algorithm here is based on an inductive approach developed by the present author for deriving neutron reflection and transmission conditions for a nonactive boundary region with an arbitrary number of arbitrarily thick layers. With this new algorithm, we were able to increase significantly the computational efficiency of our spectral diamond-spectral Green's function method for solving multigroup neutron multiplication eigenvalue problems with multi-layered boundary regions. We provide comparative results for a two-group reactor core model to illustrate the increased efficiency of our spectral method, and we conclude this article with a number of general remarks. (author)

  19. Efficient Minimum-Phase Prefilter Computation Using Fast QL-Factorization

    DEFF Research Database (Denmark)

    Hansen, Morten; Christensen, Lars P.B.

    2009-01-01

    This paper presents a novel approach for computing both the minimum-phase filter and the associated all-pass filter in a computationally efficient way using the fast QL-factorization. A desirable property of this approach is that the complexity is independent of the size of the matrix which is QL...
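
    For readers unfamiliar with QL factorizations: one textbook route obtains QL from an ordinary QR factorization of the row- and column-reversed matrix, as the NumPy sketch below verifies. This is only to fix ideas; the paper's contribution is a fast structured algorithm, not this generic reduction:

      import numpy as np

      # If J A J = Q R with J the anti-identity (exchange) matrix, then
      # A = (J Q J)(J R J), where J R J is lower triangular: a QL factorization.
      rng = np.random.default_rng(0)
      A = rng.standard_normal((5, 5))
      J = np.fliplr(np.eye(5))

      Q, R = np.linalg.qr(J @ A @ J)
      Q_ql, L = J @ Q @ J, J @ R @ J

      assert np.allclose(Q_ql @ L, A)               # A = Q_ql L
      assert np.allclose(np.triu(L, 1), 0)          # L is lower triangular
      assert np.allclose(Q_ql.T @ Q_ql, np.eye(5))  # Q_ql is orthogonal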

  20. A Quantitative Approach to the Formal Verification of Real-Time Systems.

    Science.gov (United States)

    1996-09-01

A Quantitative Approach to the Formal Verification of Real-Time Systems. Sergio Vale Aguiar Campos, September 1996, CMU-CS-96-199. Keywords: real-time systems, formal verification, symbolic

  1. Energy Efficiency in Computing (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    As manufacturers improve the silicon process, truly low energy computing is becoming a reality - both in servers and in the consumer space. This series of lectures covers a broad spectrum of aspects related to energy efficient computing - from circuits to datacentres. We will discuss common trade-offs and basic components, such as processors, memory and accelerators. We will also touch on the fundamentals of modern datacenter design and operation. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up. NB! All Academic L...

  2. Formal specifications for safety grade systems

    International Nuclear Information System (INIS)

    Chisholm, G.H.; Smith, B.T.; Wojcik, A.S.

    1992-01-01

The authors describe the findings of a study into the application of formal methods to the specification of a safety system for an operating nuclear reactor. They developed a formal specification that is used to verify and validate that no unsafe condition will result from action or inaction of the system. For this reason, the specification must facilitate thinking about, talking about, and implementing the system. In fact, the specification must provide a bridge between people (designers, engineers, policy makers) and diverse implementations (hardware, software, sensors, power supplies) at all levels. For a specification to serve as an effective linkage, it must have the following properties: (1) completeness, (2) conciseness, (3) unambiguity, and (4) communicativeness. In this paper they describe the development of a specification that has these properties. This development is based on the use of formal methods, i.e., methods that add mathematical rigor to the development, analysis and operation of computer systems and to applications based thereon (Neumann). They demonstrate that a specification derived from a formal basis facilitates development of the design and its subsequent verification.

  3. Improved pion pion scattering amplitude from dispersion relation formalism

    International Nuclear Information System (INIS)

    Cavalcante, I.P.; Coutinho, Y.A.; Borges, J. Sa

    2005-01-01

Pion-pion scattering amplitude is obtained from Chiral Perturbation Theory at one- and two-loop approximations. The dispersion relation formalism provides a more economical method, which was proved to reproduce the analytical structure of that amplitude at both approximation levels. This work extends the use of the formalism in order to compute further unitarity corrections to partial waves, including the D-wave amplitude. (author)

  4. On the Formal Integrability Problem for Planar Differential Systems

    Directory of Open Access Journals (Sweden)

    Antonio Algaba

    2013-01-01

We study the analytic integrability problem through the formal integrability problem and we show its connection, in some cases, with the existence of invariant analytic (sometimes algebraic) curves. From the results obtained, we consider some families of analytic differential systems in ℂ², and, imposing the formal integrability, we find resonant centers obviating the computation of some necessary conditions.

  5. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
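
    As a toy illustration of the two-stage idea (harmonic grouping into a pitch energy spectrum, then peak picking), here is a minimal numpy sketch; the RTFI front end is replaced by a plain FFT, and all signal parameters and thresholds are made up. As the abstract notes, the preliminary stage also produces spurious (e.g. subharmonic) candidates that a second pruning stage must remove.

        import numpy as np

        def pitch_energy_spectrum(power_spec, freqs, candidates, n_harmonics=4):
            # Harmonic grouping: score each candidate F0 by summing spectral
            # energy at its first few harmonics.
            pe = np.zeros(len(candidates))
            for i, f0 in enumerate(candidates):
                for h in range(1, n_harmonics + 1):
                    k = np.argmin(np.abs(freqs - h * f0))  # nearest bin
                    pe[i] += power_spec[k]
            return pe

        def pick_peaks(pe, candidates, thresh_ratio=0.3):
            # Simple peak picking: local maxima above a fraction of the max.
            return [candidates[i] for i in range(1, len(pe) - 1)
                    if pe[i] > pe[i - 1] and pe[i] >= pe[i + 1]
                    and pe[i] > thresh_ratio * pe.max()]

        # synthetic two-note mixture: 220 Hz + 330 Hz, a few harmonics each
        fs, n = 16000, 8192
        t = np.arange(n) / fs
        x = sum(np.sin(2 * np.pi * h * 220 * t) / h for h in range(1, 5))
        x += sum(np.sin(2 * np.pi * h * 330 * t) / h for h in range(1, 5))
        spec = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
        freqs = np.fft.rfftfreq(n, 1 / fs)
        cands = np.arange(100.0, 500.0, 1.0)
        pe = pitch_energy_spectrum(spec, freqs, cands)
        print(pick_peaks(pe, cands))  # expect 220 and 330 among the candidates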

  6. Positive Wigner functions render classical simulation of quantum computation efficient.

    Science.gov (United States)

    Mari, A; Eisert, J

    2012-12-07

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.

  7. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    Energy Technology Data Exchange (ETDEWEB)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik [School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)

    2013-12-21

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that “spin-neurons” (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.

  8. Multiparty Computations

    DEFF Research Database (Denmark)

    Dziembowski, Stefan

In this thesis we study a problem of doing Verifiable Secret Sharing (VSS) and Multiparty Computations in a model where private channels between the players and a broadcast channel are available. The adversary is active, adaptive and has an unbounded computing power. The thesis is based on two... Up to a polynomial time black-box reduction, the complexity of adaptively secure VSS is the same as that of ordinary secret sharing (SS), where security is only required against a passive, static adversary. Previously, such a connection was only known for linear secret sharing and VSS schemes. We then show... and discuss other problems caused by the adaptiveness. All protocols in the thesis are formally specified and the proofs of their security are given. [1] Ronald Cramer, Ivan Damgård, Stefan Dziembowski, Martin Hirt, and Tal Rabin. Efficient multiparty computations with dishonest minority...

  9. Book review of Yoad Winter’s Elements of formal semantics (2016

    Directory of Open Access Journals (Sweden)

    Jessica Rett

    2016-10-01

Yoad Winter’s (2016) new textbook, 'Elements of formal semantics', is a formally sophisticated introduction to semantic theory. It treats standard beginner topics (e.g. transitivity, quantifiers, relative clauses) carefully and efficiently, using a directly compositional lambda calculus.

  10. Discrete computational mechanics for stiff phenomena

    KAUST Repository

    Michels, Dominik L.

    2016-11-28

    Many natural phenomena which occur in the realm of visual computing and computational physics, like the dynamics of cloth, fibers, fluids, and solids as well as collision scenarios are described by stiff Hamiltonian equations of motion, i.e. differential equations whose solution spectra simultaneously contain extremely high and low frequencies. This usually impedes the development of physically accurate and at the same time efficient integration algorithms. We present a straightforward computationally oriented introduction to advanced concepts from classical mechanics. We provide an easy to understand step-by-step introduction from variational principles over the Euler-Lagrange formalism and the Legendre transformation to Hamiltonian mechanics. Based on such solid theoretical foundations, we study the underlying geometric structure of Hamiltonian systems as well as their discrete counterparts in order to develop sophisticated structure preserving integration algorithms to efficiently perform high fidelity simulations.
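
    To make the structure-preservation point concrete, here is a minimal sketch (not the authors' integrators): the symplectic Störmer-Verlet scheme applied to a stiff harmonic oscillator keeps the energy error bounded over long runs, whereas a non-symplectic explicit scheme would drift or blow up.

        import numpy as np

        def stormer_verlet(q, p, force, dt, steps, m=1.0):
            # Symplectic Stormer-Verlet (leapfrog): a structure-preserving
            # integrator for Hamilton's equations; energy errors stay bounded.
            traj = []
            for _ in range(steps):
                p_half = p + 0.5 * dt * force(q)
                q = q + dt * p_half / m
                p = p_half + 0.5 * dt * force(q)
                traj.append((q, p))
            return np.array(traj)

        # stiff harmonic oscillator: H = p^2/2 + k q^2/2 with large k
        k = 1.0e4
        force = lambda q: -k * q
        dt = 1.0e-3  # resolves the fast period 2*pi/100 ~ 0.063 (dt*omega < 2)
        traj = stormer_verlet(1.0, 0.0, force, dt, 5000)
        E = 0.5 * traj[:, 1] ** 2 + 0.5 * k * traj[:, 0] ** 2
        print("max relative energy error:", np.max(np.abs(E - E[0])) / E[0])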

  11. The thermodynamic efficiency of computations made in cells across the range of life

    Science.gov (United States)

    Kempes, Christopher P.; Wolpert, David; Cohen, Zachary; Pérez-Mercader, Juan

    2017-11-01

Biological organisms must perform computation as they grow, reproduce and evolve. Moreover, ever since Landauer's bound was proposed, it has been known that all computation has some thermodynamic cost - and that the same computation can be achieved with greater or smaller thermodynamic cost depending on how it is implemented. Accordingly an important issue concerning the evolution of life is assessing the thermodynamic efficiency of the computations performed by organisms. This issue is interesting both from the perspective of how close life has come to maximally efficient computation (presumably under the pressure of natural selection), and from the practical perspective of what efficiencies we might hope that engineered biological computers might achieve, especially in comparison with current computational systems. Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound. However, this efficiency depends strongly on the size and architecture of the cell in question. In particular, we show that the useful efficiency of an amino acid operation, defined as the bulk energy per amino acid polymerization, decreases for increasing bacterial size and converges to the polymerization cost of the ribosome. This cost of the largest bacteria does not change in cells as we progress through the major evolutionary shifts to both single- and multicellular eukaryotes. However, the rates of total computation per unit mass are non-monotonic in bacteria with increasing cell size, and also change across different biological architectures, including the shift from unicellular to multicellular eukaryotes. This article is part of the themed issue 'Reconceptualizing the origins of life'.
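
    The kind of comparison this record makes can be reproduced at back-of-the-envelope level. The numbers below are illustrative assumptions, not taken from the paper: Landauer's bound at T = 300 K versus an assumed cost of a few ATP hydrolyses per amino-acid operation.

        import numpy as np

        kB = 1.380649e-23               # Boltzmann constant, J/K
        T = 300.0                       # assumed temperature, K
        landauer = kB * T * np.log(2)   # minimum cost per bit erased

        # assumed bulk cost per amino-acid polymerization step: ~4 ATP
        # hydrolyses at ~5e-20 J each (order-of-magnitude, illustrative only)
        per_aa = 4 * 5.0e-20

        print(f"Landauer bound: {landauer:.2e} J/bit")
        print(f"assumed cost per amino-acid operation: {per_aa:.2e} J")
        print(f"ratio: {per_aa / landauer:.0f}x the Landauer bound")

    With these assumed figures the ratio comes out near 70x, i.e. within one to two orders of magnitude of the bound, consistent with the qualitative claim in the abstract.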

  12. Formal languages, automata and numeration systems introduction to combinatorics on words

    CERN Document Server

    Rigo, Michel

    2014-01-01

Formal Languages, Automata and Numeration Systems presents readers with a review of research related to formal language theory, combinatorics on words or numeration systems, such as Words, DLT (Developments in Language Theory), ICALP, MFCS (Mathematical Foundations of Computer Science), Mons Theoretical Computer Science Days, Numeration, CANT (Combinatorics, Automata and Number Theory). Combinatorics on words deals with problems that can be stated in a non-commutative monoid, such as subword complexity of finite or infinite words, construction and properties of infinite words, unavoidabl

  13. Formal verification of algorithms for critical systems

    Science.gov (United States)

    Rushby, John M.; Von Henke, Friedrich

    1993-01-01

    We describe our experience with formal, machine-checked verification of algorithms for critical applications, concentrating on a Byzantine fault-tolerant algorithm for synchronizing the clocks in the replicated computers of a digital flight control system. First, we explain the problems encountered in unsynchronized systems and the necessity, and criticality, of fault-tolerant synchronization. We give an overview of one such algorithm, and of the arguments for its correctness. Next, we describe a verification of the algorithm that we performed using our EHDM system for formal specification and verification. We indicate the errors we found in the published analysis of the algorithm, and other benefits that we derived from the verification. Based on our experience, we derive some key requirements for a formal specification and verification system adequate to the task of verifying algorithms of the type considered. Finally, we summarize our conclusions regarding the benefits of formal verification in this domain, and the capabilities required of verification systems in order to realize those benefits.

  14. Wireless-Uplinks-Based Energy-Efficient Scheduling in Mobile Cloud Computing

    OpenAIRE

    Xing Liu; Chaowei Yuan; Zhen Yang; Enda Peng

    2015-01-01

Mobile cloud computing (MCC) combines cloud computing and mobile internet to improve the computational capabilities of resource-constrained mobile devices (MDs). In MCC, mobile users can not only improve the computational capability of MDs but also reduce energy consumption by offloading the mobile applications to the cloud. However, MCC faces the problem of energy efficiency because of time-varying channels when the offloading is being executed. In this paper, we address the issue of ener...

  15. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  16. Closed-time path formalism of quantum scattering

    International Nuclear Information System (INIS)

    Manoukian, E.B.

    1988-01-01

The closed-time path formalism of quantum mechanics, first introduced by Schwinger, is developed starting from a second-quantized formalism by using a functional calculus. An exact functional expression for the closed-time amplitude for a particle state (not just of the vacuum state) is derived, from which time-dependent expectation values of observables may be written in closed functional form. In particular, this leads directly to the expression for transition probabilities in scattering theory without first computing the corresponding amplitudes. Finally, a comparison is made with the standard approach.

  17. Computer Architecture for Energy Efficient SFQ

    Science.gov (United States)

    2014-08-27

Summary of work accomplished during this ARO-sponsored project at IBM Research (T.J. Watson Research Laboratory, Yorktown Heights, NY) to identify and model an energy-efficient SFQ-based computer architecture, IBM Windsor Blue (WB). The basic building block of WB is a "tile" comprised of a 64-bit arithmetic logic unit

  18. Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics

    Science.gov (United States)

    Fijany, Amir; Scheid, Robert E.

    1989-01-01

    The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
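
    A minimal sketch of the diagonal (Jacobi) preconditioned conjugate gradient iteration the abstract describes, applied to a stand-in symmetric positive definite matrix playing the role of the manipulator inertia matrix; the synergistic computation of the diagonal elements is not reproduced here.

        import numpy as np

        def pcg(A, b, inv_diag, tol=1e-10, max_iter=200):
            # Conjugate gradient with a diagonal (Jacobi) preconditioner:
            # z = M^{-1} r with M = diag(A), as suggested in the abstract.
            x = np.zeros_like(b)
            r = b - A @ x
            z = inv_diag * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = inv_diag * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # SPD test matrix standing in for a 7-joint inertia matrix
        rng = np.random.default_rng(0)
        B = rng.standard_normal((7, 7))
        A = B @ B.T + 7 * np.eye(7)
        b = rng.standard_normal(7)
        x = pcg(A, b, 1.0 / np.diag(A))
        print("residual:", np.linalg.norm(A @ x - b))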

  19. Developing an eLearning tool formalizing in YAWL the guidelines used in a transfusion medicine service.

    Science.gov (United States)

    Russo, Paola; Piazza, Miriam; Leonardi, Giorgio; Roncoroni, Layla; Russo, Carlo; Spadaro, Salvatore; Quaglini, Silvana

    2012-01-01

    The blood transfusion is a complex activity subject to a high risk of eventually fatal errors. The development and application of computer-based systems could help reducing the error rate, playing a fundamental role in the improvement of the quality of care. This poster presents an under development eLearning tool formalizing the guidelines of the transfusion process. This system, implemented in YAWL (Yet Another Workflow Language), will be used to train the personnel in order to improve the efficiency of care and to reduce errors.

  20. On the efficient parallel computation of Legendre transforms

    NARCIS (Netherlands)

    Inda, M.A.; Bisseling, R.H.; Maslen, D.K.

    2001-01-01

    In this article, we discuss a parallel implementation of efficient algorithms for computation of Legendre polynomial transforms and other orthogonal polynomial transforms. We develop an approach to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the

  1. On the efficient parallel computation of Legendre transforms

    NARCIS (Netherlands)

    Inda, M.A.; Bisseling, R.H.; Maslen, D.K.

    1999-01-01

    In this article we discuss a parallel implementation of efficient algorithms for computation of Legendre polynomial transforms and other orthogonal polynomial transforms. We develop an approach to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the
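
    The serial kernel underlying such transforms is easy to state. Here is a minimal (non-parallel) numpy sketch of a forward Legendre polynomial transform built from the three-term recurrence and Gauss-Legendre quadrature; the Driscoll-Healy acceleration and the parallelization discussed in these records are beyond this sketch.

        import numpy as np

        def legendre_matrix(N, x):
            # P[n, j] = P_n(x_j) via the three-term recurrence
            # (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}.
            P = np.zeros((N, len(x)))
            P[0] = 1.0
            if N > 1:
                P[1] = x
            for n in range(1, N - 1):
                P[n + 1] = ((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1)
            return P

        N = 8
        x, w = np.polynomial.legendre.leggauss(N)  # quadrature nodes/weights
        P = legendre_matrix(N, x)
        f = np.exp(x)                              # sample function at the nodes
        # forward transform: c_n = (2n+1)/2 * sum_j w_j f(x_j) P_n(x_j)
        c = (2 * np.arange(N) + 1) / 2 * ((P * w) @ f)
        # discrete orthogonality makes reconstruction at the nodes exact
        print("reconstruction error:", np.max(np.abs(c @ P - f)))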

  2. Computationally efficient clustering of audio-visual meeting data

    NARCIS (Netherlands)

    Hung, H.; Friedland, G.; Yeo, C.; Shao, L.; Shan, C.; Luo, J.; Etoh, M.

    2010-01-01

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors,

  3. Efficient computation method of Jacobian matrix

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1995-05-01

As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows that these difficulties are overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow us to derive the analytical formulation with no human intervention. Particularly, it is to be noted that, compared to previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, it is shown that the present approach is computationally most efficient. Due to such advantages, the proposed method is useful in studying kinematically peculiar properties such as singularity problems. (author)
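
    As a small illustration of the velocity-representation viewpoint (a sketch, not the author's recursive computer-algebra derivation), the geometric Jacobian of a planar nR arm can be assembled column by column from the joint positions; singular configurations then show up as rank deficiency.

        import numpy as np

        def planar_jacobian(thetas, lengths):
            # Geometric Jacobian of a planar nR arm: column i is the end-effector
            # velocity due to unit rate of joint i, i.e. z_i x (p_end - p_i).
            pts, phi = [np.zeros(2)], 0.0
            for th, L in zip(thetas, lengths):
                phi += th
                pts.append(pts[-1] + L * np.array([np.cos(phi), np.sin(phi)]))
            p_end = pts[-1]
            # z_i points out of the plane, so z x r = (-r_y, r_x)
            cols = [[-(p_end - p)[1], (p_end - p)[0]] for p in pts[:-1]]
            return np.array(cols).T  # 2 x n

        print(planar_jacobian([0.3, -0.5, 0.9], [1.0, 0.8, 0.5]))
        # fully stretched arm: a singular configuration, rank drops to 1
        J_sing = planar_jacobian([0.0, 0.0, 0.0], [1.0, 0.8, 0.5])
        print("rank at singularity:", np.linalg.matrix_rank(J_sing))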

  4. Evaluation of light extraction efficiency for the light-emitting diodes based on the transfer matrix formalism and ray-tracing method

    Science.gov (United States)

    Pingbo, An; Li, Wang; Hongxi, Lu; Zhiguo, Yu; Lei, Liu; Xin, Xi; Lixia, Zhao; Junxi, Wang; Jinmin, Li

    2016-06-01

The internal quantum efficiency (IQE) of light-emitting diodes can be calculated as the ratio of the external quantum efficiency (EQE) to the light extraction efficiency (LEE). The EQE can be measured experimentally, but the LEE is difficult to calculate due to the complicated LED structures. In this work, a model was established to calculate the LEE by combining the transfer matrix formalism and an in-plane ray tracing method. With the calculated LEE, the IQE was determined and agreed well with that obtained by the ABC model and the temperature-dependent photoluminescence method. The proposed method makes the determination of the IQE more practical and convenient. Project supported by the National Natural Science Foundation of China (Nos. 11574306, 61334009), the China International Science and Technology Cooperation Program (No. 2014DFG62280), and the National High Technology Program of China (No. 2015AA03A101).
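
    The transfer-matrix half of such a calculation is compact at normal incidence. Below is a minimal sketch (real refractive indices, normal incidence, no ray tracing), verified on a quarter-wave antireflection coating; the coupling to in-plane ray tracing used in the paper is omitted.

        import numpy as np

        def tmm_transmittance(n_list, d_list, lam):
            # Characteristic-matrix method at normal incidence.
            # n_list = [ambient, layer1, ..., substrate]; d_list = inner thicknesses.
            def interface(n1, n2):
                r = (n1 - n2) / (n1 + n2)
                t = 2 * n1 / (n1 + n2)
                return np.array([[1, r], [r, 1]], dtype=complex) / t
            M = interface(n_list[0], n_list[1])
            for i in range(1, len(n_list) - 1):
                delta = 2 * np.pi * n_list[i] * d_list[i - 1] / lam
                P = np.array([[np.exp(-1j * delta), 0], [0, np.exp(1j * delta)]])
                M = M @ P @ interface(n_list[i], n_list[i + 1])
            t = 1 / M[0, 0]
            return (n_list[-1] / n_list[0]) * abs(t) ** 2

        lam = 550e-9
        n_coat = np.sqrt(1.5)  # ideal quarter-wave AR index for glass
        # coated glass: transmittance approaches 1 at the design wavelength
        print(tmm_transmittance([1.0, n_coat, 1.5], [lam / (4 * n_coat)], lam))
        # bare air-glass interface for comparison (~0.96)
        print(tmm_transmittance([1.0, 1.5], [], lam))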

  5. Improving robustness and computational efficiency using modern C++

    International Nuclear Information System (INIS)

    Paterno, M; Kowalkowski, J; Green, C

    2014-01-01

For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  6. Efficient quantum circuits for one-way quantum computing.

    Science.gov (United States)

    Tanamoto, Tetsufumi; Liu, Yu-Xi; Hu, Xuedong; Nori, Franco

    2009-03-13

    While Ising-type interactions are ideal for implementing controlled phase flip gates in one-way quantum computing, natural interactions between solid-state qubits are most often described by either the XY or the Heisenberg models. We show an efficient way of generating cluster states directly using either the imaginary SWAP (iSWAP) gate for the XY model, or the sqrt[SWAP] gate for the Heisenberg model. Our approach thus makes one-way quantum computing more feasible for solid-state devices.
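
    A quick numpy check of the entangling power of a single iSWAP (this is not the paper's full cluster-state construction, just its basic ingredient): applied to the product state |+>|+>, the gate yields a maximally entangled state, as the Schmidt coefficients confirm.

        import numpy as np

        # iSWAP on two qubits in the basis {|00>, |01>, |10>, |11>}
        iswap = np.array([[1, 0,  0,  0],
                          [0, 0,  1j, 0],
                          [0, 1j, 0,  0],
                          [0, 0,  0,  1]], dtype=complex)

        plus = np.array([1, 1]) / np.sqrt(2)
        psi = iswap @ np.kron(plus, plus)  # act on the product state |+>|+>

        # Schmidt coefficients via SVD of the 2x2 amplitude matrix:
        # two equal coefficients 1/sqrt(2) means maximal entanglement
        s = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
        print("Schmidt coefficients:", np.round(s, 6))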

  7. Type I supergravity effective action from pure spinor formalism

    International Nuclear Information System (INIS)

    Alencar, Geova

    2009-01-01

    Using the pure spinor formalism, we compute the tree-level correlation functions for three strings, one closed and two open, in N = 1 D = 10 superspace. Expanding the superfields in components, the respective terms of the effective action for the type I supergravity are obtained. All terms found agree with the effective action known in the literature. This result gives one more consistency test for the pure spinor formalism.

  8. Energy-efficient computing and networking. Revised selected papers

    Energy Technology Data Exchange (ETDEWEB)

    Hatziargyriou, Nikos; Dimeas, Aris [Ethnikon Metsovion Polytechneion, Athens (Greece); Weidlich, Anke (eds.) [SAP Research Center, Karlsruhe (Germany); Tomtsi, Thomai

    2011-07-01

    This book constitutes the postproceedings of the First International Conference on Energy-Efficient Computing and Networking, E-Energy, held in Passau, Germany in April 2010. The 23 revised papers presented were carefully reviewed and selected for inclusion in the post-proceedings. The papers are organized in topical sections on energy market and algorithms, ICT technology for the energy market, implementation of smart grid and smart home technology, microgrids and energy management, and energy efficiency through distributed energy management and buildings. (orig.)

  9. Wireless-Uplinks-Based Energy-Efficient Scheduling in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xing Liu

    2015-01-01

Mobile cloud computing (MCC) combines cloud computing and mobile internet to improve the computational capabilities of resource-constrained mobile devices (MDs). In MCC, mobile users can not only improve the computational capability of their MDs but also reduce energy consumption by offloading mobile applications to the cloud. However, MCC faces the problem of energy efficiency because of time-varying channels when the offloading is being executed. In this paper, we address the issue of energy-efficient scheduling for the wireless uplink in MCC. By introducing Lyapunov optimization, we first propose a scheduling algorithm that can dynamically choose a channel to transmit data based on queue backlog and channel statistics. Then, we show that the proposed scheduling algorithm can make a tradeoff between queue backlog and energy consumption in a channel-aware MCC system. Simulation results show that the proposed scheduling algorithm can reduce the time-average energy consumption for offloading compared to the existing algorithm.
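
    A toy drift-plus-penalty scheduler in the spirit of Lyapunov optimization, with made-up arrival, rate, and power numbers (a sketch of the general technique, not the paper's algorithm): in each slot the transmitter weighs energy cost against backlog relief, and may stay idle when no channel is worthwhile.

        import numpy as np

        rng = np.random.default_rng(1)
        V = 50.0  # tradeoff knob: larger V favors energy savings over delay
        Q, energy, q_sum = 0.0, 0.0, 0.0
        slots = 10000
        for t in range(slots):
            Q += rng.poisson(3)                 # bits arriving this slot
            rates = rng.uniform(0, 10, size=2)  # time-varying channel rates
            powers = np.array([1.0, 1.5])       # per-slot energy on each channel
            scores = V * powers - Q * rates     # drift-plus-penalty objective
            best = int(np.argmin(scores))
            if scores[best] < 0:                # transmit only if it helps
                Q = max(0.0, Q - rates[best])
                energy += powers[best]
            q_sum += Q
        print(f"avg backlog: {q_sum/slots:.1f}, avg energy/slot: {energy/slots:.3f}")

    Raising V trades a larger average backlog (delay) for lower average energy, which is the tradeoff the abstract refers to.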

  10. The allocative efficiency of the formal versus the informal financial sector

    NARCIS (Netherlands)

    Lensink, B.W.

    An important reason for the disappointing effects of the financial reform programmes, as part of the structural adjustment programmes in sub-Saharan Africa, stems from the strong focus of the adjustment programmes on the formal banking sector. A financial reform programme which does not explicitly

  11. State or nature? Endogenous formal versus informal sanctions in the voluntary provision of public goods

    DEFF Research Database (Denmark)

    Kamei, Kenju; Putterman, Louis; Tyran, Jean-Robert Karl

    2015-01-01

We investigate the endogenous formation of sanctioning institutions supposed to improve efficiency in the voluntary provision of public goods. Our paper parallels Markussen et al. (Rev Econ Stud 81:301–324, 2014) in that our experimental subjects vote over formal versus informal sanctions, but it goes beyond that paper by endogenizing the formal sanction scheme. We find that self-determined formal sanctions schemes are popular and efficient when they carry no up-front cost, but as in Markussen et al. informal sanctions are more popular and efficient than formal sanctions when adopting the latter entails such a cost. Practice improves the performance of sanction schemes: they become more targeted and deterrent with learning. Voters’ characteristics, including their tendency to engage in perverse informal sanctioning, help to predict individual voting....

  12. $\\delta N$ formalism from superpotential and holography

    CERN Document Server

    Garriga, Jaume; Vernizzi, Filippo

    2016-02-16

    We consider the superpotential formalism to describe the evolution of scalar fields during inflation, generalizing it to include the case with non-canonical kinetic terms. We provide a characterization of the attractor behaviour of the background evolution in terms of first and second slow-roll parameters (which need not be small). We find that the superpotential is useful in justifying the separate universe approximation from the gradient expansion, and also in computing the spectra of primordial perturbations around attractor solutions in the $\\delta N$ formalism. As an application, we consider a class of models where the background trajectories for the inflaton fields are derived from a product separable superpotential. In the perspective of the holographic inflation scenario, such models are dual to a deformed CFT boundary theory, with $D$ mutually uncorrelated deformation operators. We compute the bulk power spectra of primordial adiabatic and entropy cosmological perturbations, and show that the results...

  13. Perspective: Memcomputing: Leveraging memory and physics to compute efficiently

    Science.gov (United States)

    Di Ventra, Massimiliano; Traversa, Fabio L.

    2018-05-01

    It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization that may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to compute efficiently a wide range of hard problems. In this perspective, we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory. The latter property allows the realization of a new type of Boolean logic, one that is self-organizing. Self-organizing logic gates are "terminal-agnostic," namely, they do not distinguish between the input and output terminals. When appropriately assembled to represent a given combinatorial/optimization problem, the corresponding self-organizing circuit converges to the equilibrium points that express the solutions of the problem at hand. In doing so, DMMs take advantage of the long-range order that develops during the transient dynamics. This collective dynamical behavior, reminiscent of a phase transition, or even the "edge of chaos," is mediated by families of classical trajectories (instantons) that connect critical points of increasing stability in the system's phase space. The topological character of the solution search renders DMMs robust against noise and structural disorder. Since DMMs are non-quantum systems described by ordinary differential equations, not only can they be built in hardware with the available technology, they can also be simulated efficiently on modern classical computers. As an example, we will show the polynomial-time solution of the subset-sum problem for the worst cases, and point to other types of hard problems where simulations of DMMs

  14. Semiclassical scalar propagators in curved backgrounds: Formalism and ambiguities

    International Nuclear Information System (INIS)

    Grain, J.; Barrau, A.

    2007-01-01

    The phenomenology of quantum systems in curved space-times is among the most fascinating fields of physics, allowing--often at the gedankenexperiment level--constraints on tentative theories of quantum gravity. Determining the dynamics of fields in curved backgrounds remains, however, a complicated task because of the highly intricate partial differential equations involved, especially when the space metric exhibits no symmetry. In this article, we provide--in a pedagogical way--a general formalism to determine this dynamics at the semiclassical order. To this purpose, a generic expression for the semiclassical propagator is computed and the equation of motion for the probability four-current is derived. Those results underline a direct analogy between the computation of the propagator in general relativistic quantum mechanics and the computation of the propagator for stationary systems in nonrelativistic quantum mechanics. A possible application of this formalism to curvature-induced quantum interferences is also discussed

  15. The formal path integral and quantum mechanics

    International Nuclear Information System (INIS)

    Johnson-Freyd, Theo

    2010-01-01

    Given an arbitrary Lagrangian function on R d and a choice of classical path, one can try to define Feynman's path integral supported near the classical path as a formal power series parameterized by 'Feynman diagrams', although these diagrams may diverge. We compute this expansion and show that it is (formally, if there are ultraviolet divergences) invariant under volume-preserving changes of coordinates. We prove that if the ultraviolet divergences cancel at each order, then our formal path integral satisfies a 'Fubini theorem' expressing the standard composition law for the time evolution operator in quantum mechanics. Moreover, we show that when the Lagrangian is inhomogeneous quadratic in velocity such that its homogeneous-quadratic part is given by a matrix with constant determinant, then the divergences cancel at each order. Thus, by 'cutting and pasting' and choosing volume-compatible local coordinates, our construction defines a Feynman-diagrammatic 'formal path integral' for the nonrelativistic quantum mechanics of a charged particle moving in a Riemannian manifold with an external electromagnetic field.

  16. A formalization of the flutter shutter

    Science.gov (United States)

    Tendero, Yohann; Rougé, Bernard; Morel, Jean-Michel

    2012-09-01

Acquiring good-quality images of moving objects with a digital camera remains a challenging problem. If the velocity of the photographed object is not known, it is virtually impossible to tune an optimal exposure time. For this reason the recent Agrawal et al. flutter shutter apparatus has generated much interest. In this communication, we propose a mathematical formalization of a general flutter shutter method, also permitting non-binary shutter sequences. Thanks to this formalization, the question of the optimal flutter shutter code can be defined and solved. The method gives analytic formulas for the best attainable SNR of the restored image. It also gives a way to compute optimal flutter shutter codes.
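
    The core frequency-domain argument can be checked numerically: a conventional box exposure has near-zeros in its Fourier transform (motion information is irrecoverably lost), while a fluttered binary code of the same length typically keeps all frequencies usable. The random code below is illustrative, not the optimized code of the paper.

        import numpy as np

        n = 52
        box = np.ones(n)                            # conventional open shutter
        rng = np.random.default_rng(7)
        code = rng.integers(0, 2, n).astype(float)  # illustrative random flutter code
        H_box = np.abs(np.fft.rfft(box, 512))
        H_code = np.abs(np.fft.rfft(code, 512))
        print("min |H| box :", H_box.min())   # ~0: deconvolution is ill-posed
        print("min |H| code:", H_code.min())  # typically bounded away from zero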

  17. Public goods and voting on formal sanction schemes

    DEFF Research Database (Denmark)

    Putterman, Louis; Tyran, Jean-Robert Karl; Kamei, Kenju

    2011-01-01

The burgeoning literature on the use of sanctions to support the provision of public goods has largely neglected the use of formal or centralized sanctions. We let subjects playing a linear public goods game vote on the parameters of a formal sanction scheme capable of either resolving or exacerbating the free-rider problem, depending on parameter settings. Most groups quickly learned to choose parameters inducing efficient outcomes. We find that cooperative orientation, political attitude, gender and intelligence have a small but sometimes significant influence on voting....

  18. Public Goods and Voting on Formal Sanction Schemes

    DEFF Research Database (Denmark)

    Putterman, Louis; Tyran, Jean-Robert; Kamei, Kenju

The burgeoning literature on the use of sanctions to support public goods provision has largely neglected the use of formal or centralized sanctions. We let subjects playing a linear public goods game vote on the parameters of a formal sanction scheme capable both of resolving and of exacerbating the free-rider problem, depending on parameter settings. Most groups quickly learned to choose parameters inducing efficient outcomes. But despite uniform money payoffs implying common interest in those parameters, voting patterns suggest significant influence of cooperative orientation, political...

  19. Application of Formal Methods in Software Engineering

    Directory of Open Access Journals (Sweden)

    Adriana Morales

    2011-12-01

The purpose of this research work is to examine: (1) why formal methods are necessary for software systems today; (2) high-integrity systems through the C-by-C (Correctness-by-Construction) methodology; and (3) an affordable methodology for applying formal methods in software engineering. The research process included reviews of the literature through the Internet, in publications, and in presentations at events. Among the research results it was found that: (1) the dependence of nations, companies and people on software systems is increasing; (2) there is growing demand for software engineering to increase social trust in software systems; (3) methodologies exist, such as C-by-C, that can provide that level of trust; (4) formal methods constitute a principle of computer science that can be applied in software engineering to achieve reliable processes in software development; (5) software users have the responsibility to demand reliable software products; and (6) software engineers have the responsibility to develop reliable software products. Furthermore, it is concluded that: (1) more research is needed to identify and analyze other methodologies and tools that provide processes for applying formal software engineering methods; (2) formal methods provide an unprecedented ability to increase trust in the correctness of software products; and (3) through the development of new methodologies and tools, cost is no longer a disadvantage for the application of formal methods.

  20. Formal Verification of Annotated Textual Use-Cases

    Czech Academy of Sciences Publication Activity Database

    Šimko, V.; Hauzar, D.; Hnětynka, P.; Bureš, Tomáš; Plášil, F.

    2015-01-01

    Roč. 58, č. 7 (2015), s. 1495-1529 ISSN 0010-4620 Grant - others:GA AV ČR(CZ) GAP103/11/1489 Institutional support: RVO:67985807 Keywords : specification * use-cases * behavior modeling * verification * temporal logic * formalization Subject RIV: JC - Computer Hardware ; Software Impact factor: 1.000, year: 2015

  1. Revisiting the formal foundation of Probabilistic Databases

    NARCIS (Netherlands)

    Wanders, B.; van Keulen, Maurice

    2015-01-01

    One of the core problems in soft computing is dealing with uncertainty in data. In this paper, we revisit the formal foundation of a class of probabilistic databases with the purpose to (1) obtain data model independence, (2) separate metadata on uncertainty and probabilities from the raw data, (3)

  2. Discrete Wigner functions and quantum computation

    International Nuclear Information System (INIS)

    Galvao, E.

    2005-01-01

Gibbons et al. have recently defined a class of discrete Wigner functions W to represent quantum states in a finite Hilbert space dimension d. I characterize the set C_d of states having non-negative W simultaneously in all definitions of W in this class. I then argue that states in this set behave classically in a well-defined computational sense. I show that one-qubit states in C_2 do not provide for universal computation in a recent model proposed by Bravyi and Kitaev [quant-ph/0403025]. More generally, I show that the only pure states in C_d are stabilizer states, which have an efficient description using the stabilizer formalism. This result shows that two different notions of 'classical' states coincide: states with non-negative Wigner functions are those which have an efficient description. This suggests that negativity of W may be necessary for exponential speed-up in pure-state quantum computation. (author)

  3. Agent-based analysis of organizations : formalization and simulation

    NARCIS (Netherlands)

    Dignum, M.V.; Tick, C.

    2007-01-01

    Organizational effectiveness depends on many factors, including individual excellence, efficient structures, effective planning and capability to understand and match context requirements. We propose a way to model organizational performance based on a combination of formal models and

  4. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    Science.gov (United States)

    Sun, Xian-He

    1997-01-01

Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is: 1) develop highly accurate parallel numerical algorithms; 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms; 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm

  5. Formal characterizations of FA-based string processors

    CSIR Research Space (South Africa)

    Ngassam, EK

    2010-08-01

Formal Characterizations of FA-based String Processors. Ernest Ketcha Ngassam (SAP Meraka UTD, Pretoria, and School of Computing, University of South Africa, Pretoria; ernest.ngassam@sap.com), Bruce W. Watson and Derrick G. Kourie (Department of Computer Science...

  6. Orality and literacy, formality and informality in email communication

    Directory of Open Access Journals (Sweden)

    Carmen Pérez Sabater

    2008-04-01

Approaches to the linguistic characteristics of computer-mediated communication (CMC) have highlighted the frequent oral traits involved in electronic mail along with features of written language. But email is today a new communication exchange medium in social, professional and academic settings, frequently used as a substitute for the traditional formal letter. The oral characterizations and linguistic formality involved in this use of emails are still in need of research. This paper explores the formal and informal features in emails based on a corpus of messages exchanged by academic institutions, and studies the similarities and differences on the basis of their mode of communication (one-to-one or one-to-many) and the sender’s mother tongue (native or nonnative). The language samples collected were systematically analyzed for formality of greetings and farewells, use of contractions, politeness indicators and non-standard linguistic features. The findings provide new insights into traits of orality and formality in email communication and demonstrate the emergence of a new style in writing for even the most important, confidential and formal purposes which seems to be forming a new sub-genre of letter-writing.

  7. Time-dependent configurations in the perturbative formalism of string theory

    International Nuclear Information System (INIS)

    Durin, B.

    2006-01-01

In this thesis three time-dependent configurations are studied in the formalism of first-quantized strings. These configurations are interesting because perturbative computation of correlation functions is possible, thus providing a tool to understand the interplay between the time-dependent geometry and the quantized string. In the first chapter, we explain the reasons for studying these configurations. In the second chapter we describe the perturbative formalism and explain how we solved the technical problems we encountered. The third chapter is devoted to the physical description of the phenomena involved in these configurations, to the specific computations we made, and to the insights we gained. Finally, we conclude and give some perspectives. (author)

  8. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2012-01-01

Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...

  9. Dimensioning storage and computing clusters for efficient high throughput computing

    International Nuclear Information System (INIS)

    Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E

    2012-01-01

Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continues increasing. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.

  10. Efficient Backprojection-Based Synthetic Aperture Radar Computation with Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Jongsoo Park

    2013-01-01

Tackling computationally challenging problems with high efficiency often requires the combination of algorithmic innovation, advanced architecture, and thorough exploitation of parallelism. We demonstrate this synergy through synthetic aperture radar (SAR) via backprojection, an image reconstruction method that can require hundreds of TFLOPS. Computation cost is significantly reduced by our new algorithm of approximate strength reduction; data movement cost is economized by software locality optimizations facilitated by advanced architecture support; parallelism is fully harnessed in various patterns and granularities. We deliver over 35 billion backprojections per second throughput per compute node on an Intel® Xeon® processor E5-2670-based cluster, equipped with Intel® Xeon Phi™ coprocessors. This corresponds to processing a 3K×3K image within a second using a single node. Our study can be extended to other settings: backprojection is applicable elsewhere including medical imaging, approximate strength reduction is a general code transformation technique, and many-core processors are emerging as a solution to energy-efficient computing.
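
    A naive (non-optimized) backprojection kernel is a useful reference point for what the paper accelerates. The sketch below focuses a single point target from synthetic echoes; all geometry and signal parameters are invented for illustration.

        import numpy as np

        c, fc = 3e8, 1.0e9
        ts = np.linspace(0, 1e-6, 600)  # fast-time sample instants
        dt = ts[1] - ts[0]
        target = (0.0, 50.0)
        positions = [(x, 0.0) for x in np.linspace(-20, 20, 41)]  # platform track

        # synthetic range-compressed echoes: a pulse at the round-trip delay,
        # carrying the carrier phase exp(-j 2 pi fc tau)
        echoes = []
        for px, py in positions:
            tau = 2 * np.hypot(target[0] - px, target[1] - py) / c
            echoes.append(np.exp(-((ts - tau) / 5e-9) ** 2)
                          * np.exp(-2j * np.pi * fc * tau))

        # backprojection: for every pixel, coherently sum each pulse's echo
        # sampled at that pixel's round-trip delay
        grid_x = np.linspace(-10, 10, 81)
        grid_y = np.linspace(40, 60, 81)
        X, Y = np.meshgrid(grid_x, grid_y)
        img = np.zeros_like(X, dtype=complex)
        for echo, (px, py) in zip(echoes, positions):
            tau = 2 * np.hypot(X - px, Y - py) / c
            idx = np.clip(np.rint((tau - ts[0]) / dt).astype(int), 0, len(ts) - 1)
            img += echo[idx] * np.exp(2j * np.pi * fc * tau)  # re-align carrier phase
        iy, ix = np.unravel_index(np.abs(img).argmax(), img.shape)
        print("peak at x =", grid_x[ix], " y =", grid_y[iy])  # ~ (0, 50)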

  11. Formal modeling and verification of systems with self-x properties

    OpenAIRE

    Reif, Wolfgang

    2006-01-01

    Formal modeling and verification of systems with self-x properties / Matthias Güdemann, Frank Ortmeier and Wolfgang Reif. - In: Autonomic and trusted computing : third international conference, ATC 2006, Wuhan, China, September 3-6, 2006 ; proceedings / Laurence T. Yang ... (eds.). - Berlin [u.a.] : Springer, 2006. - S. 38-47. - (Lecture notes in computer science ; 4158)

  12. The Efficient Use of Vector Computers with Emphasis on Computational Fluid Dynamics : a GAMM-Workshop

    CERN Document Server

    Gentzsch, Wolfgang

    1986-01-01

The GAMM Committee for Numerical Methods in Fluid Mechanics organizes workshops which bring together experts in a narrow field of computational fluid dynamics (CFD) to exchange ideas and experiences in order to speed up development in this field. In this sense it was suggested that a workshop should treat the solution of CFD problems on vector computers. Thus we organized a workshop with the title "The efficient use of vector computers with emphasis on computational fluid dynamics". The workshop took place at the Computing Centre of the University of Karlsruhe, March 13-15, 1985. Participation was restricted to 22 people from 7 countries. 18 papers were presented. In the announcement of the workshop we wrote: "Fluid mechanics has actively stimulated the development of superfast vector computers like the CRAY's or CYBER 205. Now these computers in turn stimulate the development of new algorithms which result in a high degree of vectorization (scalar/vectorized execution-time). But w...

13. Uncovering the triple Pomeron vertex from Wilson line formalism

    International Nuclear Information System (INIS)

    Chirilli, G. A.; Szymanowski, L.; Wallon, S.

    2011-01-01

We compute the triple Pomeron vertex from the Wilson line formalism, including both planar and nonplanar contributions, and get perfect agreement with the result obtained in the Extended Generalized Logarithmic Approximation based on Reggeon calculus.

  14. Formal matrices

    CERN Document Server

    Krylov, Piotr

    2017-01-01

    This monograph is a comprehensive account of formal matrices, examining homological properties of modules over formal matrix rings and summarising the interplay between Morita contexts and K theory. While various special types of formal matrix rings have been studied for a long time from several points of view and appear in various textbooks, for instance to examine equivalences of module categories and to illustrate rings with one-sided non-symmetric properties, this particular class of rings has, so far, not been treated systematically. Exploring formal matrix rings of order 2 and introducing the notion of the determinant of a formal matrix over a commutative ring, this monograph further covers the Grothendieck and Whitehead groups of rings. Graduate students and researchers interested in ring theory, module theory and operator algebras will find this book particularly valuable. Containing numerous examples, Formal Matrices is a largely self-contained and accessible introduction to the topic, assuming a sol...

  15. Senior Surfing: Computer Use, Aging, and Formal Training

    Science.gov (United States)

    Warren-Peace, Paula; Parrish, Elaine; Peace, C. Brian; Xu, Jianzhong

    2008-01-01

    In this article, we describe data from two case studies of seniors (one younger senior and one older senior) in learning to use computers. The study combined interviews, observations, and documents to take a close look at their experiences with computers, as well as the influences of aging and computer training on their experiences. The study…

  16. Unified commutation-pruning technique for efficient computation of composite DFTs

    Science.gov (United States)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in time or space (DIT) in the data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypotheses-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem that always requires fewer or, at most, the same number of arithmetic operations as other feasible modalities. The DFTCOMM method thereby outperforms the existing competing pruning techniques reported in the literature in the attainable savings in the number of required arithmetic operations. Finally, we provide the comparison of the DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with
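
    The pruned-DFT idea of computing only the needed outputs can be illustrated with the classic Goertzel recursion (one of the standard pruning building blocks, not the DFTCOMM algorithm itself): each requested bin costs O(N), so computing a handful of bins beats a full FFT.

        import numpy as np

        def goertzel(x, k):
            # Compute the single DFT bin X[k] of x with the Goertzel recursion:
            # s[n] = x[n] + 2 cos(w) s[n-1] - s[n-2], then X[k] = e^{jw} s1 - s2.
            N = len(x)
            w = 2 * np.pi * k / N
            coeff = 2 * np.cos(w)
            s1 = s2 = 0.0
            for sample in x:
                s0 = sample + coeff * s1 - s2
                s2, s1 = s1, s0
            return s1 * np.exp(1j * w) - s2

        x = np.random.default_rng(3).standard_normal(1024)
        ref = np.fft.fft(x)
        for k in [5, 100, 317]:  # only the bins we actually need
            print(k, np.allclose(goertzel(x, k), ref[k]))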

  17. Consistency relation and inflaton field redefinition in the δN formalism

    Energy Technology Data Exchange (ETDEWEB)

    Domènech, Guillem [Center for Gravitational Physics, Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan); Gong, Jinn-Ouk, E-mail: jinn-ouk.gong@apctp.org [Asia Pacific Center for Theoretical Physics, Pohang 37673 (Korea, Republic of); Department of Physics, Postech, Pohang 37673 (Korea, Republic of); Sasaki, Misao [Center for Gravitational Physics, Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan)

    2017-06-10

    We compute for general single-field inflation the intrinsic non-Gaussianity due to the self-interactions of the inflaton field in the squeezed limit. We recover the consistency relation in the context of the δN formalism, and argue that there is a particular field redefinition that makes the intrinsic non-Gaussianity vanish, thus improving the estimate of the local non-Gaussianity using the δN formalism.

  18. Contrasting lexical similarity and formal definitions in SNOMED CT: consistency and implications.

    Science.gov (United States)

    Agrawal, Ankur; Elhanan, Gai

    2014-02-01

    To quantify the presence of, and evaluate an approach for detection of, inconsistencies in the formal definitions of SNOMED CT (SCT) concepts utilizing a lexical method. Utilizing SCT's Procedure hierarchy, we algorithmically formulated similarity sets: groups of concepts with similar lexical structure of their fully specified name. We formulated five random samples, each with 50 similarity sets, based on the same parameter: number of parents, attributes, groups, all of the former, as well as a randomly selected control sample. All samples' sets were reviewed for types of formal definition inconsistencies: hierarchical, attribute assignment, attribute target values, groups, and definitional. For the Procedure hierarchy, 2111 similarity sets were formulated, covering 18.1% of eligible concepts. The evaluation revealed that 38% (Control) to 70% (Different relationships) of similarity sets within the samples exhibited significant inconsistencies. The rate of inconsistencies for the sample with different relationships was highly significant compared to Control, as were the numbers of attribute assignment and hierarchical inconsistencies within their respective samples. While the formal definitions of SCT are only a minor consideration at this time of the HITECH initiative, they are essential to the sophisticated, meaningful use of captured clinical data. However, a significant portion of the concepts in the most semantically complex hierarchy of SCT, the Procedure hierarchy, is modeled inconsistently in a manner that affects their computability. Lexical methods can efficiently identify such inconsistencies and possibly allow for their algorithmic resolution. Copyright © 2013 Elsevier Inc. All rights reserved.
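
    As a rough illustration of similarity-set construction (a simplified stand-in for the paper's method; the concept names below are invented), the following Python sketch groups fully specified names that become identical after masking out a single token:

        from collections import defaultdict

        def similarity_sets(names):
            # Group names that agree on all tokens except one masked slot.
            groups = defaultdict(set)
            for name in names:
                tokens = name.lower().split()
                for i in range(len(tokens)):
                    key = tuple(tokens[:i] + ['*'] + tokens[i + 1:])
                    groups[key].add(name)
            return [sorted(g) for g in groups.values() if len(g) > 1]

        print(similarity_sets(['biopsy of lung', 'biopsy of liver',
                               'excision of liver']))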

  19. A Formal Definition of VDM-SL

    DEFF Research Database (Denmark)

    Bruun, Hans; Damm, F.; Dawes, J.

    1998-01-01

    This joint report from the Danish Institute for Applied Computer Science (IFAD), the Technical Universities of Delft and Denmark and the University of Leicester contains the background and technical material used in the production of the ISO Standard that defines the specification language part...... and reviewers of the project - these changes have improved the style and technical correctness of the formal definitions used to define VDM-SL....

  20. Proceedings of the First NASA Formal Methods Symposium

    Science.gov (United States)

    Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)

    2009-01-01

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

  1. Efficient O(N) recursive computation of the operational space inertial matrix

    International Nuclear Information System (INIS)

    Lilly, K.W.; Orin, D.E.

    1993-01-01

    The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N³) for an N-degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base.

  2. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    International Nuclear Information System (INIS)

    Woodruff, S.B.

    1994-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, a fixed, uniform assignment of nodes to parallel processors will result in degraded computational efficiency due to the poor load balancing. A standard method for treating data-dependent models on vector architectures has been to use gather operations (or indirect addressing) to sort the nodes into subsets that (temporarily) share a common computational model. However, this method is not effective on distributed-memory data-parallel architectures, where indirect addressing involves expensive communication overhead. Another serious problem with this method involves software engineering challenges in the areas of maintainability and extensibility. For example, an implementation that was hand-tuned to achieve good computational efficiency would have to be rewritten whenever the decision tree governing the sorting was modified. Using an example based on the calculation of the wall-to-liquid and wall-to-vapor heat-transfer coefficients for three nonboiling flow regimes, we describe how the Fortran 90 WHERE construct and automatic inlining of functions can ameliorate this problem while improving both efficiency and software engineering. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. We discuss why developers should either wait for such solutions or consider alternative numerical algorithms, such as a neural network
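
    The masked-update pattern the abstract attributes to the Fortran 90 WHERE construct carries over to other array languages; a NumPy sketch (the regime boundaries and coefficient formulas below are invented, not TRAC's) selects a per-node model with boolean masks instead of gather/scatter:

        import numpy as np

        # Data-dependent coefficient update without indirect addressing:
        # each node's flow-regime model is selected by a mask.
        rng = np.random.default_rng(1)
        void_fraction = rng.uniform(0.0, 1.0, size=10_000)

        h_wall = np.empty_like(void_fraction)
        bubbly = void_fraction < 0.3
        annular = void_fraction > 0.8
        slug = ~(bubbly | annular)

        h_wall[bubbly] = 1000.0 * (1.0 - void_fraction[bubbly])
        h_wall[slug] = 500.0 + 200.0 * void_fraction[slug]
        h_wall[annular] = 50.0 * void_fraction[annular] ** 2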

  3. A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI

    International Nuclear Information System (INIS)

    Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G; Odille, F; Atkinson, D

    2011-01-01

    Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field of view composition of the imaging data, compared to the standard OMP method, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved. (note)
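
    For reference, the generic OMP loop (without the paper's static/dynamic y-f masking; the test data are synthetic) greedily selects the dictionary column most correlated with the residual and re-fits by least squares:

        import numpy as np

        def omp(A, y, k):
            # Orthogonal matching pursuit for y ~ A @ x with k-sparse x.
            residual, support = y.astype(float), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(A.T @ residual))))
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x

        rng = np.random.default_rng(2)
        A = rng.standard_normal((64, 256))
        x_true = np.zeros(256)
        x_true[[5, 99, 180]] = [1.0, -2.0, 0.5]
        x_hat = omp(A, A @ x_true, k=3)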

  4. Efficient Processing of Continuous Skyline Query over Smarter Traffic Data Stream for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wang Hanning

    2013-01-01

    Full Text Available The analyzing and processing of multisource real-time transportation data streams lay a foundation for smart transportation's sensibility, interconnection, integration, and real-time decision making. The strong computing ability and mass data management provided by cloud computing make it feasible to handle continuous Skyline queries over massive distributed uncertain transportation data streams. In this paper, we give an architecture for layered smart-transportation data processing, and we formalize the description of continuous Skyline queries over smart transportation data. We also propose the mMR-SUDS algorithm (a Skyline query algorithm for uncertain transportation stream data based on micro-batch MapReduce), built on sliding-window division and this architecture.

  5. On the Computation of the Efficient Frontier of the Portfolio Selection Problem

    Directory of Open Access Journals (Sweden)

    Clara Calvo

    2012-01-01

    Full Text Available An easy-to-use procedure is presented for improving the ε-constraint method for computing the efficient frontier of the portfolio selection problem endowed with additional cardinality and semicontinuous variable constraints. The proposed method provides not only a numerical plotting of the frontier but also an analytical description of it, including the explicit equations of the arcs of parabola it comprises and the change points between them. This information is useful for performing a sensitivity analysis as well as for providing additional criteria to the investor in order to select an efficient portfolio. Computational results are provided to test the efficiency of the algorithm and to illustrate its applications. The procedure has been implemented in Mathematica.
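
    Dropping the paper's cardinality and semicontinuous-variable constraints leaves the classical equality-constrained mean-variance problem, whose frontier points follow from a KKT linear system; a sketch with invented data:

        import numpy as np

        def frontier_point(mu, cov, target):
            # Minimise w' cov w subject to w'mu = target and sum(w) = 1
            # (no sign or cardinality constraints), via the KKT system.
            n = len(mu)
            A = np.vstack([mu, np.ones(n)])          # 2 x n constraints
            kkt = np.block([[2 * cov, A.T], [A, np.zeros((2, 2))]])
            rhs = np.concatenate([np.zeros(n), [target, 1.0]])
            w = np.linalg.solve(kkt, rhs)[:n]
            return w, np.sqrt(w @ cov @ w)

        mu = np.array([0.08, 0.12, 0.15])            # illustrative data
        cov = np.array([[0.10, 0.02, 0.01],
                        [0.02, 0.12, 0.03],
                        [0.01, 0.03, 0.20]])
        for t in np.linspace(mu.min(), mu.max(), 5):
            w, risk = frontier_point(mu, cov, t)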

  6. A Formalization of Kant's Second Formulation of the Categorical Imperative

    OpenAIRE

    Lindner, Felix; Bentzen, Martin Mose

    2018-01-01

    We present a formalization and computational implementation of the second formulation of Kant's categorical imperative. This ethical principle requires an agent to never treat someone merely as a means but always also as an end. Here we interpret this principle in terms of how persons are causally affected by actions. We introduce Kantian causal agency models in which moral patients, actions, goals, and causal influence are represented, and we show how to formalize several readings of Kant's ...
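
    One crude reading of the means-end test can be sketched as follows (a deliberately simplified toy, not the authors' Kantian causal agency semantics; the goal sets are invented):

        def merely_as_means(agent_goals, patient_ends, patient_affected):
            # A patient is treated merely as a means if the action causally
            # affects them in service of the agent's goals while none of
            # those goals is also an end of the patient.
            return patient_affected and not (agent_goals & patient_ends)

        print(merely_as_means({'profit'}, {'health'}, True))    # True
        print(merely_as_means({'health'}, {'health'}, True))    # False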

  7. Formal Solutions for Polarized Radiative Transfer. II. High-order Methods

    Energy Technology Data Exchange (ETDEWEB)

    Janett, Gioele; Steiner, Oskar; Belluzzi, Luca, E-mail: gioele.janett@irsol.ch [Istituto Ricerche Solari Locarno (IRSOL), 6605 Locarno-Monti (Switzerland)

    2017-08-20

    When integrating the radiative transfer equation for polarized light, the necessity of high-order numerical methods is well known. In fact, well-performing high-order formal solvers enable higher accuracy and the use of coarser spatial grids. Aiming to provide a clear comparison between formal solvers, this work presents different high-order numerical schemes and applies the systematic analysis proposed by Janett et al., emphasizing their advantages and drawbacks in terms of order of accuracy, stability, and computational cost.
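
    As a baseline for the high-order schemes the paper compares, a classical fourth-order Runge-Kutta step for the transfer equation dI/ds = -K(s) I + ε(s) can be sketched as follows (the constant K and ε used in the test are illustrative; explicit steps like this demand small step sizes in stiff regimes):

        import numpy as np

        def rk4_step(I, s, h, K, eps):
            # One RK4 step for dI/ds = -K(s) I + eps(s), with I the
            # 4-component Stokes vector and K the 4x4 propagation matrix.
            f = lambda s_, I_: -K(s_) @ I_ + eps(s_)
            k1 = f(s, I)
            k2 = f(s + h / 2, I + h / 2 * k1)
            k3 = f(s + h / 2, I + h / 2 * k2)
            k4 = f(s + h, I + h * k3)
            return I + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        K = lambda s: 0.5 * np.eye(4)                # toy absorption matrix
        eps = lambda s: np.array([1.0, 0.0, 0.0, 0.0])
        I = np.zeros(4)
        for step in range(100):
            I = rk4_step(I, step * 0.1, 0.1, K, eps)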

  8. On the Application of Formal Methods to Clinical Guidelines, an Artificial Intelligence Perspective

    NARCIS (Netherlands)

    Hommersom, A.J.

    2008-01-01

    In computer science, all kinds of methods and techniques have been developed to study systems, such as simulation of the behaviour of a system. Furthermore, it is possible to study these systems by proving formal properties or by searching through all the possible states that a system may be in

  9. Theory of automata, formal languages and computation

    CERN Document Server

    Xavier, SPE

    2004-01-01

    This book is aimed at providing an introduction to the basic models of computability for undergraduate students. The book is devoted to Finite Automata and their properties. Pushdown Automata provide a class of models that enables the analysis of context-free languages. Turing Machines are introduced, and the book discusses computability and decidability. A number of problems with solutions are provided for each chapter, and many exercises are given with hints or answers to most of these tutorial problems.

  10. Spinor formalism and complex-vector formalism of general relativity

    International Nuclear Information System (INIS)

    Han-ying, G.; Yong-shi, W.; Gendao, L.

    1974-01-01

    In this paper, using E. Cartan's exterior calculus, we give the spinor form of the structure equations, which leads naturally to the Newman-Penrose equations. Furthermore, starting from the spinor spaces and the sl(2,C) algebra, we construct the general complex-vector formalism of general relativity. We find that both the Cahen-Debever-Defrise complex-vector formalism and that of Brans are its special cases. Thus, the spinor formalism and the complex-vector formalism of general relativity are unified on the basis of the unimodular group SL(2,C) and its Lie algebra

  11. An efficient algorithm for nucleolus and prekernel computation in some classes of TU-games

    NARCIS (Netherlands)

    Faigle, U.; Kern, Walter; Kuipers, J.

    1998-01-01

    We consider classes of TU-games. We show that we can efficiently compute an allocation in the intersection of the prekernel and the least core of the game if we can efficiently compute the minimum excess for any given allocation. In the case where the prekernel of the game contains exactly one core

  12. Ontology Assisted Formal Specification Extraction from Text

    Directory of Open Access Journals (Sweden)

    Andreea Mihis

    2010-12-01

    Full Text Available In the field of knowledge processing, ontologies are the most important means. They make it possible for the computer to better understand natural language and to make judgments. In this paper, a method which uses ontologies in the semi-automatic extraction of formal specifications from a natural language text is proposed.

  13. Octopus: embracing the energy efficiency of handheld multimedia computers

    NARCIS (Netherlands)

    Havinga, Paul J.M.; Smit, Gerardus Johannes Maria

    1999-01-01

    In the MOBY DICK project we develop and define the architecture of a new generation of mobile hand-held computers called Mobile Digital Companions. The Companions must meet several major requirements: high performance, energy efficiency, a notion of Quality of Service (QoS), small size, and low

  14. Towards the Automatic Detection of Efficient Computing Assets in a Heterogeneous Cloud Environment

    OpenAIRE

    Iglesias, Jesus Omana; Stokes, Nicola; Ventresque, Anthony; Murphy, Liam, B.E.; Thorburn, James

    2013-01-01

    peer-reviewed In a heterogeneous cloud environment, the manual grading of computing assets is the first step in the process of configuring IT infrastructures to ensure optimal utilization of resources. Grading the efficiency of computing assets is, however, a difficult, subjective and time-consuming manual task. Thus, an automatic efficiency grading algorithm is highly desirable. In this paper, we compare the effectiveness of the different criteria used in the manual gr...

  15. Formal Specification and Analysis of Cloud Computing Management

    Science.gov (United States)

    2012-01-24

    The report opens with a chapter, "Cloud Computing in a Nutshell," beginning with a famous quote by Larry Ellison, Oracle CEO [106]; in view of that statement, it summarizes the essential aspects of Cloud Computing. Only fragments of this record's abstract and bibliography survive, including a citation of M. Abadi, M. Burrows, M. Manasse, and T. Wobber, "Moderately hard, memory-bound functions," ACM Transactions on Internet Technology, 5(2):299-327.

  16. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  17. Energy Efficiency in Computing (2/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    We will start the second day of our energy efficient computing series with a brief discussion of software and the impact it has on energy consumption. A second major point of this lecture will be the current state of research and a few future technologies, ranging from mainstream (e.g. the Internet of Things) to exotic. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up. NB! All Academic Lectures are recorded. No webcast! Because of a problem of the recording equipment, this lecture will be repeated for recording pu...

  18. SBME : Exploring boundaries between formal, non-formal, and informal learning

    OpenAIRE

    Shahoumian, Armineh; Parchoma, Gale; Saunders, Murray; Hanson, Jacky; Dickinson, Mike; Pimblett, Mark

    2013-01-01

    In medical education learning extends beyond university settings into practice. Non-formal and informal learning support learners’ efforts to meet externally set and learner-identified objectives. In SBME research, boundaries between formal, non-formal, and informal learning have not been widely explored. Whether SBME fits within or challenges these categories can make a contribution. Formal learning is described in relation to educational settings, planning, assessment, and accreditation. In...

  19. Formalization of Database Systems -- and a Formal Definition of {IMS}

    DEFF Research Database (Denmark)

    Bjørner, Dines; Løvengreen, Hans Henrik

    1982-01-01

    Drawing upon an analogy between Programming Language Systems and Database Systems we outline the requirements that architectural specifications of database systems must fulfill, and argue that only formal, mathematical definitions may satisfy these. Then we illustrate some aspects and touch upon...... some uses of formal definitions of data models and database management systems. A formal model of IMS will carry this discussion. Finally we survey some of the existing literature on formal definitions of database systems. The emphasis will be on constructive definitions in the denotational semantics...... style of the VDM: Vienna Development Method. The role of formal definitions in international standardization efforts is briefly mentioned....

  20. Great Computational Intelligence in the Formal Sciences via Analogical Reasoning

    Science.gov (United States)

    2017-05-08

    Only fragments of this record's abstract survive: specified Turing-solvable but timewise-demanding games of perfect information yield human-level performance; the ω-rule admits a finite form which still preserves negation-completeness for PA, with the disadvantage that proof-verification and discovery both fail ...; and the work reports the first such formal, machine-verified proofs, to the authors' knowledge.

  1. Efficient computation of clipped Voronoi diagram for mesh generation

    KAUST Repository

    Yan, Dongming

    2013-04-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.

  2. Efficient computation of clipped Voronoi diagram for mesh generation

    KAUST Repository

    Yan, Dongming; Wang, Wen Ping; Lé vy, Bruno L.; Liu, Yang

    2013-01-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.
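
    A brute-force approximation of the clipped Voronoi diagram (not the authors' exact algorithm; resolution and sites are arbitrary) simply labels each sample point of the compact domain by its nearest site:

        import numpy as np

        def rasterized_clipped_voronoi(sites, xlim, ylim, res=256):
            # Label every grid sample of the rectangle with its nearest
            # site; the label regions approximate the clipped Voronoi
            # cells.  Cost is O(res^2 * len(sites)).
            xs = np.linspace(*xlim, res)
            ys = np.linspace(*ylim, res)
            X, Y = np.meshgrid(xs, ys)
            d2 = ((X[..., None] - sites[:, 0]) ** 2
                  + (Y[..., None] - sites[:, 1]) ** 2)
            return np.argmin(d2, axis=-1)

        sites = np.random.default_rng(3).uniform(0.0, 1.0, size=(20, 2))
        labels = rasterized_clipped_voronoi(sites, (0.0, 1.0), (0.0, 1.0))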

  3. Acceleration of FDTD mode solver by high-performance computing techniques.

    Science.gov (United States)

    Han, Lin; Xi, Yanping; Huang, Wei-Ping

    2010-06-21

    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than 30 times improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver, yet it requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides, even when conventional eigenvalue mode solvers are no longer applicable due to memory limitations.
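
    The data-parallel stencil structure that GPUs exploit is visible even in a minimal 1D Yee update loop (a generic sketch in normalised units, far simpler than the paper's 2D compact scheme with matrix pencil post-processing):

        import numpy as np

        # 1D FDTD with Courant number 0.5: two staggered, embarrassingly
        # data-parallel sweeps per time step.
        n, steps = 400, 1000
        ez, hy = np.zeros(n), np.zeros(n - 1)
        for t in range(steps):
            hy += 0.5 * (ez[1:] - ez[:-1])
            ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])
            ez[n // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft source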

  4. SELF-EFFICACY OF FORMALLY AND NON-FORMALLY TRAINED PUBLIC SECTOR TEACHERS

    Directory of Open Access Journals (Sweden)

    Muhammad Nadeem ANWAR

    2009-07-01

    Full Text Available The main objective of the study was to compare the self-efficacy of formally and non-formally trained in-service public sector teachers. Five hypotheses were developed, each stating no difference between formally and non-formally trained teachers in self-efficacy to influence decision making, influence school resources, instructional self-efficacy, disciplinary self-efficacy, and self-efficacy to create a positive school climate. The Teacher Efficacy Instrument (TSES) developed by Bandura (2001), consisting of thirty 9-point items, was used in the study. Questionnaires were received from 342 formally trained and 255 non-formally trained respondents out of 1500 mailed. The analysis of the data revealed that the formally trained public sector teachers rate higher in self-efficacy on all five categories: to influence decision making, to influence school resources, instructional self-efficacy, disciplinary self-efficacy and self-efficacy to create a positive school climate.

  5. Masses of Formal Philosophy

    DEFF Research Database (Denmark)

    Masses of Formal Philosophy is an outgrowth of Formal Philosophy. That book gathered the responses of some of the most prominent formal philosophers to five relatively open and broad questions initiating a discussion of metaphilosophical themes and problems surrounding the use of formal methods in philosophy. Including contributions from a wide range of philosophers, Masses of Formal Philosophy contains important new responses to the original five questions.

  6. A formal proof of the expressiveness of deep learning

    NARCIS (Netherlands)

    Bentkamp, A.; Blanchette, J.C.; Klakow, Dietrich

    2017-01-01

    Deep learning has had a profound impact on computer science in recent years, with applications to image recognition, language processing, bioinformatics, and more. Recently, Cohen et al. provided theoretical evidence for the superiority of deep learning over shallow learning. We formalized their

  7. Formal Solutions for Polarized Radiative Transfer. III. Stiffness and Instability

    Science.gov (United States)

    Janett, Gioele; Paganini, Alberto

    2018-04-01

    Efficient numerical approximation of the polarized radiative transfer equation is challenging because this system of ordinary differential equations exhibits stiff behavior, which potentially results in numerical instability. This negatively impacts the accuracy of formal solvers, and small step-sizes are often necessary to retrieve physical solutions. This work presents stability analyses of formal solvers for the radiative transfer equation of polarized light, identifies instability issues, and suggests practical remedies. In particular, the assumptions and the limitations of the stability analysis of Runge–Kutta methods play a crucial role. On this basis, a suitable and pragmatic formal solver is outlined and tested. An insightful comparison to the scalar radiative transfer equation is also presented.

  8. DDF construction and D-brane boundary states in pure spinor formalism

    International Nuclear Information System (INIS)

    Mukhopadhyay, Partha

    2006-01-01

    Open string boundary conditions for non-BPS D-branes in type II string theories discussed in hep-th/0505157 give rise to two sectors with integer (R sector) and half-integer (NS sector) modes for the combined fermionic matter and bosonic ghost variables in pure spinor formalism. Exploiting the manifest supersymmetry of the formalism we explicitly construct the DDF (Del Giudice, Di Vecchia, Fubini) states in both the sectors which are in one-to-one correspondence with the states in light-cone Green-Schwarz formalism. We also give a proof of validity of this construction. A similar construction in the closed string sector enables us to define a physical Hilbert space in pure spinor formalism which is used to project the covariant boundary states of both the BPS and non-BPS instantonic D-branes. These projected boundary states take exactly the same form as those found in light-cone Green-Schwarz formalism and are suitable for computing the cylinder diagram with manifest open-closed duality

  9. A matrix formalism to solve interface condition equations in a reactor system

    Energy Technology Data Exchange (ETDEWEB)

    Matausek, M V [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)

    1970-05-15

    When a nuclear reactor or a reactor lattice cell is treated by an approximate procedure to solve the neutron transport equation, the last computational step is often a problem of solving systems of algebraic equations stating the interface and boundary conditions for the neutron flux moments. These systems usually have coefficient matrices of the block-bidiagonal type, thus containing a large number of zero elements. In the present report it is shown how such a system can be solved efficiently, accounting for all the zero elements both in the coefficient matrix and in the free term vector. The procedure is presented here for the case of a multigroup P3 calculation of the neutron flux distribution in a cylindrical reactor lattice cell. Compared with the standard Gaussian elimination method, this procedure is more advantageous both in the number of operations needed to solve a given problem and in the computer memory storage requirements. A similar formalism can also be applied to other approximate methods, for instance to a multigroup diffusion treatment of a multizone reactor. (author)
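
    The saving over dense elimination comes from touching only the nonzero blocks; a generic forward-substitution sketch for a block lower-bidiagonal system (the block sizes and data are illustrative, not the report's multigroup layout):

        import numpy as np

        def solve_block_lower_bidiagonal(D, L, b):
            # D[0] x[0] = b[0];  L[i-1] x[i-1] + D[i] x[i] = b[i], i >= 1.
            x = [np.linalg.solve(D[0], b[0])]
            for i in range(1, len(D)):
                x.append(np.linalg.solve(D[i], b[i] - L[i - 1] @ x[-1]))
            return x

        rng = np.random.default_rng(8)
        D = [rng.standard_normal((4, 4)) + 4.0 * np.eye(4) for _ in range(3)]
        L = [rng.standard_normal((4, 4)) for _ in range(2)]
        b = [rng.standard_normal(4) for _ in range(3)]
        x = solve_block_lower_bidiagonal(D, L, b)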

  10. The peak efficiency calibration of volume source using 152Eu point source in computer

    International Nuclear Information System (INIS)

    Shen Tingyun; Qian Jianfu; Nan Qinliang; Zhou Yanguo

    1997-01-01

    The author describes the method of peak efficiency calibration of a volume source by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results agree with the experimental results within ±3.8%, with one exception of about ±7.4%.
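
    The geometric part of such a calibration is easy to sketch in a few lines of Monte Carlo (an on-axis point source over a disc detector face; full peak-efficiency codes additionally transport photons through the detector volume):

        import numpy as np

        def geometric_efficiency(h, R, n=1_000_000, seed=4):
            # Fraction of isotropically emitted rays from a point source
            # at height h on the detector axis that hit a disc of radius
            # R in the z = 0 plane; analytic value is
            # (1 - h / sqrt(h**2 + R**2)) / 2.
            rng = np.random.default_rng(seed)
            w = rng.uniform(-1.0, 1.0, n)     # z-component of direction
            down = w < 0
            r_hit = h * np.sqrt(1.0 - w[down] ** 2) / -w[down]
            return np.count_nonzero(r_hit <= R) / n

        print(geometric_efficiency(h=5.0, R=3.0))   # close to 0.0713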

  11. Computing handbook computer science and software engineering

    CERN Document Server

    Gonzalez, Teofilo; Tucker, Allen

    2014-01-01

    Overview of Computer Science: Structure and Organization of Computing (Peter J. Denning); Computational Thinking (Valerie Barr). Algorithms and Complexity: Data Structures (Mark Weiss); Basic Techniques for Design and Analysis of Algorithms (Edward Reingold); Graph and Network Algorithms (Samir Khuller and Balaji Raghavachari); Computational Geometry (Marc van Kreveld); Complexity Theory (Eric Allender, Michael Loui, and Kenneth Regan); Formal Models and Computability (Tao Jiang, Ming Li, and Bala

  12. Computationally Efficient Prediction of Ionic Liquid Properties

    DEFF Research Database (Denmark)

    Chaban, V. V.; Prezhdo, O. V.

    2014-01-01

    Due to fundamental differences, room-temperature ionic liquids (RTIL) are significantly more viscous than conventional molecular liquids and require long simulation times. At the same time, RTILs remain in the liquid state over a much broader temperature range than the ordinary liquids. We exploit...... to ambient temperatures. We numerically prove the validity of the proposed concept for density and ionic diffusion of four different RTILs. This simple method enhances the computational efficiency of the existing simulation approaches as applied to RTILs by more than an order of magnitude....
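
    The extrapolation idea can be sketched with an Arrhenius-type fit (all numbers below are invented placeholders; the paper applies this kind of reasoning to RTIL density and ionic diffusion):

        import numpy as np

        # Fit ln D = c0 + c1 / T to diffusion coefficients simulated at
        # elevated temperatures, then evaluate the fit at ambient T.
        T = np.array([400.0, 450.0, 500.0, 550.0])           # K
        D = np.array([2.1e-11, 8.0e-11, 2.3e-10, 5.5e-10])   # m^2/s
        c0, c1 = np.polynomial.polynomial.polyfit(1.0 / T, np.log(D), 1)
        print(np.exp(c0 + c1 / 300.0))    # extrapolated D at 300 K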

  13. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  14. A matricial approach for the Dirac-Kahler formalism

    International Nuclear Information System (INIS)

    Goto, M.

    1987-01-01

    A matricial approach for the Dirac-Kahler formalism is considered. It is shown that the matricial approach (i) brings a great computational simplification compared to the common use of differential forms and (ii) can, by an appropriate choice of notation, be extended to the lattice, including a matrix Dirac-Kahler equation. (author) [pt

  15. Viscous warm inflation: Hamilton-Jacobi formalism

    Science.gov (United States)

    Akhtari, L.; Mohammadi, A.; Sayar, K.; Saaidi, Kh.

    2017-04-01

    Using the Hamilton-Jacobi formalism, the scenario of warm inflation with viscous pressure is considered. The formalism gives a way of computing the slow-roll parameters without extra approximation, and it is well known as a powerful method in cold inflation. The model is studied in detail for three different cases of the dissipation and bulk viscous pressure coefficients. In the first case, where both coefficients are taken as constant, it is shown that the model cannot portray a warm inflationary scenario compatible with observational data, even though it is possible to restrict the model parameters. For the other cases, the results show that the model properly predicts the perturbation parameters, which stay in perfect agreement with Planck data. As a further argument, the r-ns and αs-ns planes are drawn, showing that the acquired results lie in the acceptable region, in agreement with observational data.
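
    In the Hamilton-Jacobi route the Hubble function H(φ) is treated as fundamental; in standard cold inflation the first slow-roll parameter is ε = 2 M_p² (H'(φ)/H(φ))², which can be computed symbolically (the trial H(φ) below is purely illustrative, and the warm, viscous case studied in the paper modifies these expressions):

        import sympy as sp

        phi, Mp, H0 = sp.symbols('phi M_p H_0', positive=True)

        # First Hamilton-Jacobi slow-roll parameter for a trial H(phi).
        H = H0 * (1 + phi**2 / 2)
        epsilon = 2 * Mp**2 * (sp.diff(H, phi) / H)**2
        print(sp.simplify(epsilon))   # 8*M_p**2*phi**2/(phi**2 + 2)**2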

  16. Cyto-Sim: a formal language model and stochastic simulator of membrane-enclosed biochemical processes.

    Science.gov (United States)

    Sedwards, Sean; Mazza, Tommaso

    2007-10-15

    Compartments and membranes are the basis of cell topology and more than 30% of the human genome codes for membrane proteins. While it is possible to represent compartments and membrane proteins in a nominal way with many mathematical formalisms used in systems biology, few, if any, explicitly model the topology of the membranes themselves. Discrete stochastic simulation potentially offers the most accurate representation of cell dynamics. Since the details of every molecular interaction in a pathway are often not known, the relationship between chemical species is not necessarily best described at the lowest level, i.e. by mass action. Simulation is a form of computer-aided analysis, relying on human interpretation to derive meaning. To improve efficiency and gain meaning in an automatic way, it is necessary to have a formalism based on a model which has decidable properties. We present Cyto-Sim, a stochastic simulator of membrane-enclosed hierarchies of biochemical processes, where the membranes comprise an inner, outer and integral layer. The underlying model is based on formal language theory and has been shown to have decidable properties (Cavaliere and Sedwards, 2006), allowing formal analysis in addition to simulation. The simulator provides variable levels of abstraction via arbitrary chemical kinetics which link to ordinary differential equations. In addition to its compact native syntax, Cyto-Sim currently supports models described as Petri nets, can import all versions of SBML and can export SBML and MATLAB m-files. Cyto-Sim is available free, either as an applet or a stand-alone Java program via the web page (http://www.cosbi.eu/Rpty_Soft_CytoSim.php). Other versions can be made available upon request.
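
    The discrete stochastic core of such simulators is typically a Gillespie-style kernel; a minimal sketch (without Cyto-Sim's membrane hierarchy; the two-reaction system is invented):

        import numpy as np

        def gillespie(x0, stoich, rates, t_max, seed=5):
            # Stochastic simulation algorithm: exponential waiting times,
            # reactions chosen in proportion to their propensities.
            rng = np.random.default_rng(seed)
            t, x, traj = 0.0, np.array(x0, float), [(0.0, list(x0))]
            while t < t_max:
                a = np.array([r(x) for r in rates])
                a0 = a.sum()
                if a0 == 0:
                    break
                t += rng.exponential(1.0 / a0)
                x = x + stoich[rng.choice(len(a), p=a / a0)]
                traj.append((t, x.tolist()))
            return traj

        # A -> B at rate 1.0*A;  B -> A at rate 0.5*B.
        traj = gillespie([100, 0], np.array([[-1, 1], [1, -1]]),
                         [lambda x: 1.0 * x[0], lambda x: 0.5 * x[1]],
                         t_max=10.0)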

  17. A Formal Model and Verification Problems for Software Defined Networks

    Directory of Open Access Journals (Sweden)

    V. A. Zakharov

    2013-01-01

    Full Text Available Software-defined networking (SDN) is an approach to building computer networks that separates and abstracts the data planes and control planes of these systems. In an SDN a centralized controller manages a distributed set of switches. A set of open commands for packet forwarding and flow-table updating was defined in the form of a protocol known as OpenFlow. In this paper we describe an abstract formal model of SDN, introduce a tentative language for specification of SDN forwarding policies, and formally set up model-checking problems for SDN.

  18. Energy-Efficient Abundant-Data Computing: The N3XT 1,000X

    OpenAIRE

    Aly Mohamed M. Sabry; Gao Mingyu; Hills Gage; Lee Chi-Shuen; Pinter Greg; Shulaker Max M.; Wu Tony F.; Asheghi Mehdi; Bokor Jeff; Franchetti Franz; Goodson Kenneth E.; Kozyrakis Christos; Markov Igor; Olukotun Kunle; Pileggi Larry

    2015-01-01

    Next-generation information technologies will process unprecedented amounts of loosely structured data that overwhelm existing computing systems. N3XT improves the energy efficiency of abundant-data applications 1,000-fold by using new logic and memory technologies, 3D integration with fine-grained connectivity, and new architectures for computation immersed in memory.

  19. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    Turner, A.; Davis, A.

    2013-01-01

    CCFE performs Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency: if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance, but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown that the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  20. Qualitative simulation in formal process modelling

    International Nuclear Information System (INIS)

    Sivertsen, Elin R.

    1999-01-01

    In relation to several different research activities at the OECD Halden Reactor Project, the usefulness of formal process models has been identified. Represented in some appropriate representation language, the purpose of these models is to model process plants and plant automatics in a unified way to allow verification and computer-aided design of control strategies. The present report discusses qualitative simulation and the tool QSIM as one approach to formal process models. In particular, the report aims at investigating how recent improvements of the tool facilitate the use of the approach in areas like process system analysis, procedure verification, and control software safety analysis. An important long-term goal is to provide a basis for using qualitative reasoning in combination with other techniques to facilitate the treatment of embedded programmable systems in Probabilistic Safety Analysis (PSA). This is motivated by the potential of such a combination in safety analysis based on models comprising software, hardware, and the operator. It is anticipated that the research results from this activity will benefit V and V in a wide variety of applications where formal process models can be utilized. Examples are operator procedures, intelligent decision support systems, and common model repositories. (author) (ml)

  1. Combining Formal Logic and Machine Learning for Sentiment Analysis

    DEFF Research Database (Denmark)

    Petersen, Niklas Christoffer; Villadsen, Jørgen

    2014-01-01

    This paper presents a formal logical method for deep structural analysis of the syntactical properties of texts using machine learning techniques for efficient syntactical tagging. To evaluate the method it is used for entity level sentiment analysis as an alternative to pure machine learning...

  2. An energy-efficient failure detector for vehicular cloud computing.

    Science.gov (United States)

    Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin

    2018-01-01

    Failure detectors are one of the fundamental components for maintaining high availability in vehicular cloud computing. In vehicular cloud computing, many RSUs are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excessive expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, the existing failure detection algorithms are not designed to save the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, has been proposed specifically for vehicular cloud computing. 2E-FD not only provides acceptable failure detection service, but also saves the battery consumption of RSUs. Through comparative experiments, the results show that our failure detector has better performance in terms of speed, accuracy and battery consumption.
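
    The baseline mechanism underneath such detectors is a heartbeat timeout; a minimal sketch (2E-FD's energy-saving adaptations are not reproduced here):

        import time

        class HeartbeatDetector:
            # A node is suspected when no heartbeat has arrived within
            # `timeout` seconds.
            def __init__(self, timeout):
                self.timeout = timeout
                self.last_seen = {}

            def heartbeat(self, node):
                self.last_seen[node] = time.monotonic()

            def suspected(self, node):
                last = self.last_seen.get(node)
                return last is None or time.monotonic() - last > self.timeout

        fd = HeartbeatDetector(timeout=2.0)
        fd.heartbeat('rsu-17')
        print(fd.suspected('rsu-17'), fd.suspected('rsu-42'))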

  3. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  4. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping; Huang, Jianhua Z.; Zhang, Nan

    2015-01-01

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
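
    The cost saving comes from using far fewer basis functions than observations; a sketch with randomly placed Gaussian bases (the paper's adaptive scheme instead uses the response values to choose the centres; the kernel width and data are invented):

        import numpy as np

        def basis_sampled_smoother(x, y, n_basis=50, lam=1e-3,
                                   width=0.1, seed=6):
            # Penalised least squares on a sampled basis: the solve costs
            # O(n * n_basis^2) instead of O(n^3).
            rng = np.random.default_rng(seed)
            centers = rng.choice(x, size=n_basis, replace=False)
            design = lambda t: np.exp(
                -0.5 * ((t[:, None] - centers[None, :]) / width) ** 2)
            B = design(x)
            coef = np.linalg.solve(B.T @ B + lam * np.eye(n_basis), B.T @ y)
            return lambda t: design(t) @ coef

        x = np.linspace(0.0, 1.0, 5000)
        y = np.sin(8 * x) + np.random.default_rng(0).normal(0.0, 0.2, x.size)
        fhat = basis_sampled_smoother(x, y)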

  5. Efficient Ab-Initio Electron Transport Calculations for Heterostructures by the Nonequilibrium Green’s Function Method

    Directory of Open Access Journals (Sweden)

    Hirokazu Takaki

    2014-01-01

    Full Text Available We present an efficient computation technique for ab-initio electron transport calculations based on density functional theory and the nonequilibrium Green's function formalism for application to heterostructures with two-dimensional (2D) interfaces. The computational load for constructing the Green's functions, which depends not only on the energy but also on the 2D Bloch wave vector along the interfaces and is thus catastrophically heavy, is circumvented by parallel computational techniques with the Message Passing Interface (MPI), which divide the calculations of the Green's functions with respect to energies and wave vectors. To demonstrate the computational efficiency of the present code, we perform ab-initio electron transport calculations of Al(100)-Si(100)-Al(100) heterostructures, one of the most typical metal-semiconductor-metal systems, and show their transmission spectra, density of states (DOS), and dependence on the thickness of the Si layers.

  6. Quantified Event Automata: Towards Expressive and Efficient Runtime Monitors

    Science.gov (United States)

    Barringer, Howard; Falcone, Ylies; Havelund, Klaus; Reger, Giles; Rydeheard, David

    2012-01-01

    Runtime verification is the process of checking a property on a trace of events produced by the execution of a computational system. Runtime verification techniques have recently focused on parametric specifications where events take data values as parameters. These techniques exist on a spectrum inhabited by both efficient and expressive techniques. These characteristics are usually shown to be conflicting: in state-of-the-art solutions, efficiency is obtained at the cost of loss of expressiveness and vice versa. To seek a solution to this conflict we explore a new point on the spectrum by defining an alternative runtime verification approach. We introduce a new formalism for concisely capturing expressive specifications with parameters. Our technique is more expressive than the currently most efficient techniques while at the same time allowing for optimizations.

  7. Efficient computation of the joint sample frequency spectra for multiple populations.

    Science.gov (United States)

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.

  8. Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets

    KAUST Repository

    Sun, Ying; Stein, Michael L.

    2014-01-01

    For Gaussian process models, likelihood based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n3) operations and O(n2) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations are evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.

  9. Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets

    KAUST Repository

    Sun, Ying

    2014-11-07

    For Gaussian process models, likelihood based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n3) operations and O(n2) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations are evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.

  10. The importance of training in formal methods in Software Engineering

    Directory of Open Access Journals (Sweden)

    John Polansky

    2014-12-01

    Full Text Available The paradigm of formal methods provides systematic and rigorous techniques for software development and, due to the growing complexity and quality requirements of current products, it is necessary to introduce them into the software engineering curriculum. This article analyzes the importance of training in formal methods and describes specific techniques for doing so efficiently. These techniques are the result of more than fifteen years of classroom experience in undergraduate and graduate programs, as well as company training. Also presented are a proposed curriculum for the systematic introduction of this paradigm and a description of a training program that has been successful in industry. Results show that students gain confidence in formal methods only once they discover their benefits in the context of software engineering.

  11. Efficient scatter model for simulation of ultrasound images from computed tomography data

    Science.gov (United States)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering-map generation was revised, with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results, with a performance of up to 55 fps achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
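
    A minimal version of such a scatter model applies multiplicative noise followed by convolution with a point-spread function; sketched here with a Rayleigh noise field and a Gaussian PSF (the paper tailors the PSF to the transducer position):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def add_speckle(echo_map, sigma_psf=1.5, seed=7):
            # Multiplicative noise, then PSF convolution.
            rng = np.random.default_rng(seed)
            noise = rng.rayleigh(scale=1.0, size=echo_map.shape)
            return gaussian_filter(echo_map * noise, sigma=sigma_psf)

        frame = add_speckle(np.ones((128, 128)))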

  12. A synthetic visual plane algorithm for visibility computation in consideration of accuracy and efficiency

    Science.gov (United States)

    Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang

    2017-12-01

    Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called the synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of the line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We used discretization of the horizon to gain good performance in efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width). The method is more accurate at smaller zone widths, but this requires a longer operating time. Users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast as or faster than R2. Although SVP performs worse than reference plane and depth map with respect to efficiency, it is superior in accuracy to these other two algorithms.
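
    The core of the horizon idea can be shown on a single profile line; SVP generalizes the running horizon to two dimensions and discretizes it into zones. A sketch with illustrative names:

        import math

        def visible_along_profile(heights, viewer_height):
            """heights[i] is the terrain elevation at distance i+1 from the viewer."""
            visible, horizon = [], -math.inf
            for d, h in enumerate(heights, start=1):
                angle = math.atan2(h - viewer_height, d)  # line-of-sight elevation angle
                visible.append(angle > horizon)           # visible iff above the horizon so far
                horizon = max(horizon, angle)             # nearer points raise the horizon
            return visible

        print(visible_along_profile([1.0, 3.0, 2.0, 5.0, 4.0], viewer_height=2.0))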

  13. Energy efficient hybrid computing systems using spin devices

    Science.gov (United States)

    Sharad, Mrigank

    Emerging spin-devices like magnetic tunnel junctions (MTJs), spin-valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy-efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin-devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. Analog characteristics of spin currents facilitate non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in low computation power. Moreover, since nano-magnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin based neurons can be integrated with CMOS and other emerging devices leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications based on a device-circuit co-simulation framework predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS design, for optimal spin-device parameters.

  14. Why formal learning theory matters for cognitive science.

    Science.gov (United States)

    Fulop, Sean; Chater, Nick

    2013-01-01

    This article reviews a number of different areas in the foundations of formal learning theory. After outlining the general framework for formal models of learning, the Bayesian approach to learning is summarized. This leads to a discussion of Solomonoff's Universal Prior Distribution for Bayesian learning. Gold's model of identification in the limit is also outlined. We next discuss a number of aspects of learning theory raised in contributed papers, related to both computational and representational complexity. The article concludes with a description of how semi-supervised learning can be applied to the study of cognitive learning models. Throughout this overview, the specific points raised by our contributing authors are connected to the models and methods under review. Copyright © 2013 Cognitive Science Society, Inc.

  15. A new computationally-efficient two-dimensional model for boron implantation into single-crystal silicon

    International Nuclear Information System (INIS)

    Klein, K.M.; Park, C.; Yang, S.; Morris, S.; Do, V.; Tasch, F.

    1992-01-01

    We have developed a new computationally-efficient two-dimensional model for boron implantation into single-crystal silicon. This new model is based on the dual Pearson semi-empirical implant depth profile model and the UT-MARLOWE Monte Carlo boron ion implantation model. It can predict, with very high computational efficiency, two-dimensional as-implanted boron profiles as a function of energy, dose, tilt angle, rotation angle, masking edge orientation, and masking edge thickness.

  16. Adding computationally efficient realism to Monte Carlo turbulence simulation

    Science.gov (United States)

    Campbell, C. W.

    1985-01-01

    Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
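
    The essence of the approach is that a rational spectrum corresponds to a stable, explicit difference equation driven by white noise. A first-order (Markov) shaping filter is the simplest instance; the sketch below uses illustrative parameters, not those of the paper:

        import numpy as np

        def colored_noise(n_steps, dt, tau, sigma, rng=np.random.default_rng(0)):
            a = np.exp(-dt / tau)             # pole of the rational spectrum approximation
            b = sigma * np.sqrt(1.0 - a * a)  # scales the output variance to sigma^2
            x = np.zeros(n_steps)
            w = rng.standard_normal(n_steps)  # white-noise input
            for k in range(1, n_steps):
                x[k] = a * x[k - 1] + b * w[k]  # stable, explicit update
            return x

        gust = colored_noise(n_steps=10_000, dt=0.01, tau=0.5, sigma=1.2)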

  17. SEDRX: A computer program for the simulation of Si(Li) and Ge(HP) x-ray detector efficiency

    International Nuclear Information System (INIS)

    Benamar, M.A.; Benouali, A.; Tchantchane, A.; Azbouche, A.; Tobbeche, S. (Centre de Developpement des Techniques Nucleaires, Algiers; Labo. des Techniques Nucleaires)

    1992-12-01

    The difficulty of measuring x-ray detector efficiency has motivated the development of a computer program to simulate this parameter. The program computes the efficiency of detectors as a function of energy. The computation is based on fitted absorption coefficients for the photoelectric, coherent, and incoherent interactions. These coefficients are taken from the McMaster library or may be determined by interpolation based on cubic splines.
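
    A minimal sketch of such an efficiency calculation, with made-up window/crystal parameters and a made-up coefficient table standing in for the McMaster library data:

        import numpy as np
        from scipy.interpolate import CubicSpline

        energies_kev = np.array([5.0, 10.0, 20.0, 30.0, 60.0])               # tabulated grid
        mu_window = CubicSpline(energies_kev, [20.0, 3.0, 0.5, 0.2, 0.05])   # cm^-1, window
        mu_crystal = CubicSpline(energies_kev, [300.0, 50.0, 8.0, 3.0, 0.6]) # cm^-1, detector

        def efficiency(e_kev, t_window=0.002, t_crystal=0.3):  # thicknesses in cm
            transmitted = np.exp(-mu_window(e_kev) * t_window)       # survives the window
            absorbed = 1.0 - np.exp(-mu_crystal(e_kev) * t_crystal)  # stops in the crystal
            return transmitted * absorbed

        print(efficiency(15.0))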

  18. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    Science.gov (United States)

    McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.

    2017-11-01

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
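
    For reference, the rank-1 Sherman-Morrison step that the delayed scheme batches into rank-K blocks works as follows: when electron k moves, row k of the Slater matrix becomes u, and the inverse is patched in O(N²) operations (a NumPy sketch, not the production kernel):

        import numpy as np

        def sm_row_update(a_inv, u, k):
            """Update A^{-1} after replacing row k of A with the vector u."""
            ratio = u @ a_inv[:, k]  # determinant ratio det(A') / det(A)
            correction = (u @ a_inv - np.eye(len(u))[k]) / ratio
            a_inv -= np.outer(a_inv[:, k], correction)
            return a_inv, ratio

        rng = np.random.default_rng(0)
        a = rng.standard_normal((6, 6))
        a_inv, u = np.linalg.inv(a), rng.standard_normal(6)
        a_inv, ratio = sm_row_update(a_inv, u, k=2)
        a[2] = u
        assert np.allclose(a_inv, np.linalg.inv(a))  # patched inverse matches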

  19. Defining nuclear medical file format based on DICOM standard

    International Nuclear Information System (INIS)

    He Bin; Jin Yongjie; Li Yulan

    2001-01-01

    With the wide application of computer technology in the medical area, DICOM is becoming the standard for digital imaging and communication. The author discusses how to define a medical imaging file format based on the DICOM standard. Also introduced are the format the authors defined for the ANMIS system and the validity and integrity of this format.

  20. Building ontologies with basic formal ontology

    CERN Document Server

    Arp, Robert; Spear, Andrew D.

    2015-01-01

    In the era of "big data," science is increasingly information driven, and the potential for computers to store, manage, and integrate massive amounts of data has given rise to such new disciplinary fields as biomedical informatics. Applied ontology offers a strategy for the organization of scientific information in computer-tractable form, drawing on concepts not only from computer and information science but also from linguistics, logic, and philosophy. This book provides an introduction to the field of applied ontology that is of particular relevance to biomedicine, covering theoretical components of ontologies, best practices for ontology design, and examples of biomedical ontologies in use. After defining an ontology as a representation of the types of entities in a given domain, the book distinguishes between different kinds of ontologies and taxonomies, and shows how applied ontology draws on more traditional ideas from metaphysics. It presents the core features of the Basic Formal Ontology (BFO), now u...

  1. Application of the Convolution Formalism to the Ocean Tide Potential: Results from the Gravity Recovery and Climate Experiment (GRACE)

    Science.gov (United States)

    Desai, S. D.; Yuan, D. -N.

    2006-01-01

    A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
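
    In sketch form (our notation), the Munk and Cartwright representation expresses the tide height ζ at a location x as a discrete convolution of the tide-generating potential V with a small set of lag weights,

        \[
        \zeta(\mathbf{x}, t) \;\approx\; \sum_{k=-K}^{K} w_k(\mathbf{x})\; V(\mathbf{x},\, t - k\,\Delta t) ,
        \]

    where the weights w_k encode the smooth response function and the sum over past, present, and future lags (k > 0, k = 0, k < 0) reproduces the full tidal spectrum without enumerating individual frequencies.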

  2. Non-BPS D-branes in light-cone Green-Schwarz formalism

    International Nuclear Information System (INIS)

    Mukhopadhyay, Partha

    2005-01-01

    Non-BPS D-branes are difficult to describe covariantly in a manifestly supersymmetric formalism. For definiteness we concentrate on type-IIB string theory in flat background in light-cone Green-Schwarz formalism. We study both the boundary state and the boundary conformal field theory descriptions of these D-branes with manifest SO(8) covariance and go through various consistency checks. We analyze Sen's original construction of non-BPS D-branes given in terms of an orbifold boundary conformal field theory. We also directly study the relevant world-sheet theory by deriving the open string boundary condition from the covariant boundary state. Both these methods give the same open string spectrum which is consistent with the boundary state, as required by the world-sheet duality. The boundary condition found in the second method is given in terms of bi-local fields that are quadratic in Green-Schwarz fermions. We design a special 'doubling trick' suitable to handle such boundary conditions and prescribe rules for computing all possible correlation functions without boundary insertions. This prescription has been tested by computing disk one-point functions of several classes of closed string states and comparing the results with the boundary state computation. (author)

  3. A highly efficient parallel algorithm for solving the neutron diffusion nodal equations on shared-memory computers

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1990-01-01

    Modern parallel computer architectures offer an enormous potential for reducing CPU and wall-clock execution times of large-scale computations commonly performed in various applications in science and engineering. Recently, several authors have reported their efforts in developing and implementing parallel algorithms for solving the neutron diffusion equation on a variety of shared- and distributed-memory parallel computers. Testing of these algorithms for a variety of two- and three-dimensional meshes showed significant speedup of the computation. Even for very large problems (i.e., three-dimensional fine meshes) executed concurrently on a few nodes in serial (nonvector) mode, however, the measured computational efficiency is very low (40 to 86%). In this paper, the authors present a highly efficient (∼85 to 99.9%) algorithm for solving the two-dimensional nodal diffusion equations on the Sequent Balance 8000 parallel computer. Also presented is a model for the performance, represented by the efficiency, as a function of problem size and the number of participating processors. The model is validated through several tests and then extrapolated to larger problems and more processors to predict the performance of the algorithm in more computationally demanding situations
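
    For reference, the efficiency being modeled is the usual ratio of serial to aggregate parallel time; a generic functional form of this kind (illustrative, not the authors' exact model) is

        \[
        \varepsilon(p, N) \;=\; \frac{T_1(N)}{p\, T_p(N)} \;\approx\; \frac{1}{1 + c(N)\,(p - 1)} ,
        \]

    where T_1 and T_p are the one- and p-processor execution times and c(N) lumps the per-processor synchronization and contention overhead, which shrinks as the problem size N grows.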

  4. A Computational Framework for Efficient Low Temperature Plasma Simulations

    Science.gov (United States)

    Verma, Abhishek Kumar; Venkattraman, Ayyaswamy

    2016-10-01

    Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Some salient introductory features are the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested based on numerical results assessing the accuracy and efficiency of benchmarks for problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.

  5. The fusion of biology, computer science, and engineering: towards efficient and successful synthetic biology.

    Science.gov (United States)

    Linshiz, Gregory; Goldberg, Alex; Konry, Tania; Hillson, Nathan J

    2012-01-01

    Synthetic biology is a nascent field that emerged in earnest only around the turn of the millennium. It aims to engineer new biological systems and impart new biological functionality, often through genetic modifications. The design and construction of new biological systems is a complex, multistep process, requiring multidisciplinary collaborative efforts from "fusion" scientists who have formal training in computer science or engineering, as well as hands-on biological expertise. The public has high expectations for synthetic biology and eagerly anticipates the development of solutions to the major challenges facing humanity. This article discusses laboratory practices and the conduct of research in synthetic biology. It argues that the fusion science approach, which integrates biology with computer science and engineering best practices, including standardization, process optimization, computer-aided design and laboratory automation, miniaturization, and systematic management, will increase the predictability and reproducibility of experiments and lead to breakthroughs in the construction of new biological systems. The article also discusses several successful fusion projects, including the development of software tools for DNA construction design automation, recursive DNA construction, and the development of integrated microfluidics systems.

  6. Formality in Brackets

    DEFF Research Database (Denmark)

    Garsten, Christina; Nyqvist, Anette

    Ethnographic work in formal organizations involves learning to recognize the many layers of front stage and back stage of organized life, and to bracket formality. It means to be alert to the fact that what is formal and front stage for some actors, and in some situations, may in fact be back stage and informal for others. Walking the talk, donning the appropriate attire, wearing the proper suit, may be part of what it takes to figure out the code of formal organizational settings – an entrance ticket to the backstage, as it were. Oftentimes, it involves a degree of mimicry, of ‘following suits’ (Nyqvist 2013), and of doing ‘ethnography by failure’ (Garsten 2013). In this paper, we explore the layers of informality and formality in our fieldwork experiences among financial investors and policy experts, and discuss how to ethnographically represent embodied fieldwork practices. How do we...

  7. COMPUTATIONAL EFFICIENCY OF A MODIFIED SCATTERING KERNEL FOR FULL-COUPLED PHOTON-ELECTRON TRANSPORT PARALLEL COMPUTING WITH UNSTRUCTURED TETRAHEDRAL MESHES

    Directory of Open Access Journals (Sweden)

    JONG WOON KIM

    2014-04-01

    In this paper, we introduce a modified scattering kernel approach to avoid the unnecessarily repeated calculations involved in the scattering source calculation, and use it with parallel computing to effectively reduce the computation time. Its computational efficiency was tested for three-dimensional fully coupled photon-electron transport problems using our computer program, which solves the multi-group discrete ordinates transport equation by the discontinuous finite element method with unstructured tetrahedral meshes for complicated geometrical problems. The numerical tests show that the elapsed time per iteration improves by a factor of 17∼42 using the modified scattering kernel, not only in single-CPU calculations but also in parallel computing with several CPUs.

  8. Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations

    KAUST Repository

    Southern, J.A.; Plank, G.; Vigmond, E.J.; Whiteley, J.P.

    2009-10-01

    The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level, the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time while still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study, the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counterintuitive as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks, it is shown that the coupled method is up to 80% faster than the conventional uncoupled method, and that parallel performance is better for the larger coupled problem.
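
    For context, a standard statement of the bidomain system in its parabolic-elliptic form (notation ours) is

        \[
        \chi\!\left(C_m \frac{\partial v}{\partial t} + I_{\mathrm{ion}}\right)
        \;=\; \nabla\!\cdot(\sigma_i \nabla v) + \nabla\!\cdot(\sigma_i \nabla u_e),
        \qquad
        \nabla\!\cdot\big((\sigma_i + \sigma_e)\,\nabla u_e\big) \;=\; -\,\nabla\!\cdot(\sigma_i \nabla v),
        \]

    where v is the transmembrane potential, u_e the extracellular potential, σ_i and σ_e the intra- and extracellular conductivity tensors, χ the membrane surface-to-volume ratio, and C_m the membrane capacitance. The uncoupled strategy time-steps the first (parabolic) equation and then solves the second (elliptic) equation; the coupled strategy solves both simultaneously for (v, u_e).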

  10. Software Formal Inspections Guidebook

    Science.gov (United States)

    1993-01-01

    The Software Formal Inspections Guidebook is designed to support the inspection process of software developed by and for NASA. This document provides information on how to implement a recommended and proven method for conducting formal inspections of NASA software. This Guidebook is a companion document to NASA Standard 2202-93, Software Formal Inspections Standard, approved April 1993, which provides the rules, procedures, and specific requirements for conducting software formal inspections. Application of the Formal Inspections Standard is optional to NASA program or project management. In cases where program or project management decide to use the formal inspections method, this Guidebook provides additional information on how to establish and implement the process. The goal of the formal inspections process as documented in the above-mentioned Standard and this Guidebook is to provide a framework and model for an inspection process that will enable the detection and elimination of defects as early as possible in the software life cycle. An ancillary aspect of the formal inspection process incorporates the collection and analysis of inspection data to effect continual improvement in the inspection process and the quality of the software subjected to the process.

  11. Computer-aided modeling framework for efficient model development, analysis and identification

    DEFF Research Database (Denmark)

    Heitzig, Martina; Sin, Gürkan; Sales Cruz, Mauricio

    2011-01-01

    Model-based computer aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy, and water. This trend is set to continue due to the substantial benefits computer-aided methods introduce. The key prerequisite of computer-aided product-process engineering is however the availability of models of different types, forms, and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task. ... The methodology has been implemented into a computer-aided modeling framework, which combines expert skills, tools, and database connections that are required for the different steps of the model development work-flow, with the goal of increasing the efficiency of the modeling process. The framework has two main...

  12. Spin density and orbital optimization in open shell systems: A rational and computationally efficient proposal

    Energy Technology Data Exchange (ETDEWEB)

    Giner, Emmanuel, E-mail: gnrmnl@unife.it; Angeli, Celestino, E-mail: anc@unife.it [Dipartimento di Scienze Chimiche e Famaceutiche, Universita di Ferrara, Via Fossato di Mortara 17, I-44121 Ferrara (Italy)

    2016-03-14

    The present work describes a new method to compute accurate spin densities for open shell systems. The proposed approach follows two steps: first, it provides molecular orbitals which correctly take into account the spin delocalization; second, a proper CI treatment allows one to account for the spin polarization effect while keeping a restricted formalism and avoiding spin contamination. The main idea of the optimization procedure is based on the orbital relaxation of the various charge transfer determinants responsible for the spin delocalization. The algorithm is tested and compared to other existing methods on a series of organic and inorganic open shell systems. The results reported here show that the new approach (almost black-box) provides accurate spin densities at a reasonable computational cost, making it suitable for a systematic study of open shell systems.

  13. Efficient sensor selection for active information fusion.

    Science.gov (United States)

    Zhang, Yongmian; Ji, Qiang

    2010-06-01

    In our previous paper, we formalized a framework based on dynamic Bayesian networks to provide active information fusion. This paper focuses on a central issue of active information fusion, i.e., the efficient identification of a subset of sensors that are most decision relevant and cost effective. Determining the most informative and cost-effective sensors requires an evaluation of all the possible subsets of sensors, which is computationally intractable, particularly when an information-theoretic criterion such as mutual information is used. To overcome this challenge, we propose a new quantitative measure for sensor synergy based on which a sensor synergy graph is constructed. Using the sensor synergy graph, we first introduce an alternative measure to multisensor mutual information for characterizing the sensor information gain. We then propose an approximated nonmyopic sensor selection method that can efficiently and near-optimally select a subset of sensors for active fusion. The simulation study demonstrates both the performance and the efficiency of the proposed sensor selection method.
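
    The flavor of such an approximated nonmyopic selection can be conveyed by a greedy benefit/cost loop; the synergy-graph-based gain measure of the paper is abstracted here into a gain() callback, so the sketch is illustrative only:

        def greedy_select(sensors, gain, cost, budget):
            """sensors: iterable of ids; gain(S, s): information gain of adding s to S."""
            selected, spent = [], 0.0
            remaining = set(sensors)
            while remaining:
                best = max(remaining, key=lambda s: gain(selected, s) / cost[s])
                if spent + cost[best] > budget or gain(selected, best) <= 0:
                    break
                selected.append(best)
                spent += cost[best]
                remaining.remove(best)
            return selected

        cost = {"cam": 2.0, "radar": 3.0, "mic": 1.0}
        toy_gain = lambda S, s: {"cam": 4.0, "radar": 4.5, "mic": 1.5}[s] / (1 + len(S))
        print(greedy_select(cost, toy_gain, cost, budget=5.0))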

  14. The formal specification of abstract data types and their implementation in Fortran 90: implementation issues concerning the use of pointers

    Science.gov (United States)

    Maley, D.; Kilpatrick, P. L.; Schreiner, E. W.; Scott, N. S.; Diercksen, G. H. F.

    1996-10-01

    In this paper we continue our investigation into the development of computational-science software based on the identification and formal specification of Abstract Data Types (ADTs) and their implementation in Fortran 90. In particular, we consider the consequences of using pointers when implementing a formally specified ADT in Fortran 90. Our aim is to highlight the resulting conflict between the goal of information hiding, which is central to the ADT methodology, and the space efficiency of the implementation. We show that the issue of storage recovery cannot be avoided by the ADT user, and present a range of implementations of a simple ADT to illustrate various approaches towards satisfactory storage management. Finally, we propose a set of guidelines for implementing ADTs using pointers in Fortran 90. These guidelines offer a way gracefully to provide disposal operations in Fortran 90. Such an approach is desirable since Fortran 90 does not provide automatic garbage collection which is offered by many object-oriented languages including Eiffel, Java, Smalltalk, and Simula.

  15. Understanding the Quantum Computational Speed-up via De-quantisation

    Directory of Open Access Journals (Sweden)

    Cristian S. Calude

    2010-06-01

    While it seems possible that quantum computers may allow for algorithms offering a computational speed-up over classical algorithms for some problems, the issue is poorly understood. We explore this computational speed-up by investigating the ability to de-quantise quantum algorithms into classical simulations of the algorithms which are as efficient in both time and space as the original quantum algorithms. The process of de-quantisation helps formulate conditions to determine if a quantum algorithm provides a real speed-up over classical algorithms. These conditions can be used to develop new quantum algorithms more effectively (by avoiding features that could allow the algorithm to be efficiently classically simulated), as well as providing the potential to create new classical algorithms (by using features which have proved valuable for quantum algorithms). Results on many different methods of de-quantisation are presented, as well as a general formal definition of de-quantisation. De-quantisations employing higher-dimensional classical bits, as well as those using matrix simulations, put emphasis on entanglement in quantum algorithms; a key result is that any algorithm in which the entanglement is bounded is de-quantisable. These methods are contrasted with the stabiliser-formalism de-quantisations due to the Gottesman-Knill theorem, as well as those which take advantage of the topology of the circuit for a quantum algorithm. The benefits of the different methods are contrasted, and the importance of a range of techniques is emphasised. We further discuss some features of quantum algorithms which current de-quantisation methods do not cover.

  16. Fast Computations for Measures of Phylogenetic Beta Diversity.

    Directory of Open Access Journals (Sweden)

    Constantinos Tsirogiannis

    For many applications in ecology, it is important to examine the phylogenetic relations between two communities of species. More formally, let T be a phylogenetic tree and let A and B be two samples of its tips, representing the examined communities. We want to compute a value that expresses the phylogenetic diversity between A and B in T. There exist several measures that can do this; these are the so-called phylogenetic beta diversity (β-diversity) measures. Two popular measures of this kind are the Community Distance (CD) and the Common Branch Length (CBL). In most applications, it is not sufficient to compute the value of a beta diversity measure for two communities A and B; we also want to know if this value is relatively large or small compared to all possible pairs of communities in T that have the same size. To decide this, the ideal approach is to compute a standardised index that involves the mean and the standard deviation of this measure among all pairs of species samples that have the same number of elements as A and B. However, no method exists for computing this index exactly and efficiently for CD and CBL. We present analytical expressions for computing the expectation and the standard deviation of CD and CBL. Based on these expressions, we describe efficient algorithms for computing the standardised indices of the two measures. Using standard algorithmic analysis, we provide guarantees on the theoretical efficiency of our algorithms. We implemented our algorithms and measured their efficiency in practice. Our implementations compute the standardised indices of CD and CBL in less than twenty seconds for a hundred pairs of samples on trees with 7 ⋅ 10⁴ tips. Our implementations are available through the R package PhyloMeasures.
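
    The standardised index in question is the familiar standardised effect size; writing D for either CD or CBL (notation ours),

        \[
        \mathrm{SES}(A, B) \;=\; \frac{D(A, B) \;-\; \mu_{|A|,|B|}}{\sigma_{|A|,|B|}} ,
        \]

    where μ and σ are the mean and standard deviation of D over all pairs of tip samples of sizes |A| and |B|; the contribution of the paper is computing these two moments exactly and efficiently rather than by resampling.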

  17. Abstraction/Representation Theory for heterotic physical computing.

    Science.gov (United States)

    Horsman, D C

    2015-07-28

    We give a rigorous framework for the interaction of physical computing devices with abstract computation. Device and program are mediated by the non-logical representation relation; we give the conditions under which representation and device theory give rise to commuting diagrams between logical and physical domains, and the conditions for computation to occur. We give the interface of this new framework with currently existing formal methods, showing in particular its close relationship to refinement theory, and the implications for questions of meaning and reference in theoretical computer science. The case of hybrid computing is considered in detail, addressing in particular the example of an Internet-mediated social machine, and the abstraction/representation framework used to provide a formal distinction between heterotic and hybrid computing. This forms the basis for future use of the framework in formal treatments of non-standard physical computers. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  18. Formal, Non-Formal and Informal Learning in the Sciences

    Science.gov (United States)

    Ainsworth, Heather L.; Eaton, Sarah Elaine

    2010-01-01

    This research report investigates the links between formal, non-formal and informal learning and the differences between them. In particular, the report aims to link these notions of learning to the field of sciences and engineering in Canada and the United States, including professional development of adults working in these fields. It offers…

  19. A call for formal telemedicine training during stroke fellowship

    Science.gov (United States)

    Jia, Judy; Gildersleeve, Kasey; Ankrom, Christy; Cai, Chunyan; Rahbar, Mohammad; Savitz, Sean I.; Wu, Tzu-Ching

    2016-01-01

    During the 20 years since US Food and Drug Administration approval of IV tissue plasminogen activator for acute ischemic stroke, vascular neurology consultation via telemedicine has contributed to an increased frequency of IV tissue plasminogen activator administration and broadened geographic access to the drug. Nevertheless, a growing demand for acute stroke coverage persists, with the greatest disparity found in rural communities underserved by neurologists. To provide efficient and consistent acute care, formal training in telemedicine during neurovascular fellowship is warranted. Herein, we describe our experiences incorporating telestroke into the vascular neurology fellowship curriculum and propose recommendations on integrating formal telemedicine training into the Accreditation Council for Graduate Medical Education vascular neurology fellowship. PMID:27016522

  20. Point kinetics model with one-dimensional (radial) heat conduction formalism

    International Nuclear Information System (INIS)

    Jain, V.K.

    1989-01-01

    A point-kinetics model with a one-dimensional (radial) heat conduction formalism has been developed. The heat conduction formalism is based on a corner-mesh finite-difference method. To obtain average temperatures in the various conducting regions, a novel weighting scheme has been devised. The heat conduction model has been incorporated in the point-kinetics code MRTF-FUEL. The point-kinetics equations are solved using the method of real integrating factors. By analysing the simulation of a hypothetical loss-of-regulation accident in the NAPP reactor, it has been shown that the model is superior to the conventional one in accuracy and speed of computation. (author). 3 refs., 3 tabs
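
    For reference, the coupled system has the familiar textbook structure (notation ours): the point-kinetics equations

        \[
        \frac{dn}{dt} \;=\; \frac{\rho(t) - \beta}{\Lambda}\, n \;+\; \sum_i \lambda_i C_i ,
        \qquad
        \frac{dC_i}{dt} \;=\; \frac{\beta_i}{\Lambda}\, n \;-\; \lambda_i C_i ,
        \]

    coupled to the one-dimensional radial conduction equation

        \[
        \rho_m c_p \frac{\partial T}{\partial t} \;=\; \frac{1}{r}\frac{\partial}{\partial r}\!\left(k\, r\, \frac{\partial T}{\partial r}\right) + q''' ,
        \]

    where n is the neutron density, C_i the delayed-neutron precursor concentrations, ρ(t) the reactivity (with temperature feedback from the conduction solution), and ρ_m the fuel mass density.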

  1. MatLab Programming for Engineers Having No Formal Programming Knowledge

    Science.gov (United States)

    Shaykhian, Linda H.; Shaykhian, Gholam Ali

    2007-01-01

    MatLab is one of the most widely used very high-level programming languages for scientific and engineering computations. It is very user-friendly and needs practically no formal programming knowledge. Presented here are MatLab programming aspects, not just the MatLab commands, for scientists and engineers who do not have formal programming training and have no significant time to spare for learning programming to solve their real-world problems. Specifically provided are programs for visualization. Also stated are the current limitations of MatLab, which possibly can be taken care of by Mathworks Inc. in a future version to make MatLab more versatile.

  2. Fear of the Formal

    DEFF Research Database (Denmark)

    du Gay, Paul; Lopdrup-Hjorth, Thomas

    Over recent decades, institutions exhibiting high degrees of formality have come in for severe criticism. From the private to the public sector, and across a whole spectrum of actors spanning from practitioners to academics, formal organization is viewed with increasing doubt and skepticism. In a “Schumpeterian world” (Teece et al., 1997: 509) of dynamic competition and incessant reform, formal organization appears as well suited to survival as a fish out of water. Indeed, formal organization, and its closely overlapping semantic twin bureaucracy, are not only represented as ill suited to the realities... is that formal organization is an obstacle to be overcome. For that very reason, critics, intellectuals and reformers alike have urged public and private organizations to break out of the stifling straitjacket of formality, to dispense with bureaucracy, and to tear down hierarchies. This could either be done...

  3. Secure Computation, I/O-Efficient Algorithms and Distributed Signatures

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Kölker, Jonas; Toft, Tomas

    2012-01-01

    values of the form (r, g^r) for random secret-shared r ∈ ℤ_q and g^r in a group of order q. This costs a constant number of exponentiations per player per value generated, even if less than n/3 players are malicious. This can be used for efficient distributed computing of Schnorr signatures. We further develop the technique so we can sign secret data in a distributed fashion at essentially the same cost.
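
    To see why such pairs are useful, recall Schnorr signing in standard notation (x the secret key, y = g^x the public key): each signature consumes one precomputed pair (r, g^r) via

        \[
        e \;=\; H(g^{r} \,\|\, m),
        \qquad
        s \;=\; r + e\,x \bmod q,
        \qquad
        \text{verify: } g^{s} \;\stackrel{?}{=}\; g^{r} \cdot y^{e},
        \]

    so a stock of secret-shared r values with public g^r lets the players sign using only cheap linear operations on shares at signing time.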

  4. Moving interprofessional learning forward through formal assessment.

    Science.gov (United States)

    Stone, Judy

    2010-04-01

    There is increasing agreement that graduates who finish tertiary education with the full complement of skills and knowledge required for their designated profession are not 'work-ready' unless they also acquire interpersonal, collaborative practice and team-working capabilities. Health workers are unable to contribute to organisational culture in a positive way unless they too attain these capabilities. These capabilities have been shown to improve health care in terms of patient safety, worker satisfaction and health service efficiency. Given the importance of interprofessional learning (IPL) which seeks to address these capabilities, why is IPL not consistently embedded into the education of undergraduates, postgraduates and vocationally qualified personnel through formal assessment? This paper offers an argument for the formal assessment of IPL. It illustrates how the interests of the many stakeholders in IPL can benefit from, and contribute to, the integration of IPL into mainstream professional development and tertiary education. It offers practical examples of assessment in IPL which could drive learning and offer authentic, contextual teaching and learning experiences to undergraduates and health workers alike. Assessment drives learning and without formal assessment IPL will continue to be viewed as an optional topic of little relative importance for learners. In order to make the next step forward, IPL needs to be recognised and endorsed through formal assessment, both at the tertiary education level and within the workplace environment. This is supported by workforce initiatives and tertiary education policy which can be used to specify the capabilities or generic skills necessary for effective teamwork and collaborative practice.

  5. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    Science.gov (United States)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called F-Nets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  6. Necessity of Integral Formalism

    International Nuclear Information System (INIS)

    Tao Yong

    2011-01-01

    To describe physical reality, there are two ways of constructing the dynamical equation of a field: differential formalism and integral formalism. The importance of this fact was first emphasized by Yang in the case of gauge fields [Phys. Rev. Lett. 33 (1974) 445], where it has given rise to a deeper understanding of the Aharonov-Bohm phase and the magnetic monopole [Phys. Rev. D 12 (1975) 3845]. In this paper we point out that the same fact also holds for the general wave function of matter, and that it may give rise to a deeper understanding of the Berry phase. Most importantly, we prove that, for a general wave function of matter, in the adiabatic limit, there is an intrinsic difference between its integral formalism and its differential formalism. It is neglect of this difference that leads to the inconsistency of the quantum adiabatic theorem pointed out by Marzlin and Sanders [Phys. Rev. Lett. 93 (2004) 160408]. It has been widely accepted that there is no physical difference between using a differential operator or an integral operator to construct the dynamical equation of a field. Nevertheless, our study shows that the Schrödinger differential equation (i.e., the differential formalism for the wave function) leads to a vanishing Berry phase, whereas the Schrödinger integral equation (i.e., the integral formalism for the wave function), in the adiabatic limit, satisfactorily gives the Berry phase. Therefore, we reach a conclusion: there are two ways of describing physical reality, differential formalism and integral formalism, but the integral formalism is the unique way of giving a complete description. (general)

  7. Using Formal Methods to Cultivate Trust in Smart Card Operating Systems

    NARCIS (Netherlands)

    Alberda, Marjan I.; Hartel, Pieter H.; de Jong, Eduard K.

    To be widely accepted, smart cards must contain completely trustworthy software. Because smart cards contain relatively simple computers, and are used only for a specific class of applications, it is feasible to make the language used to program the software components focused and tiny. Formal

  8. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    Science.gov (United States)

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing has resulted in a shortage of efficient ultra-large biological sequence alignment approaches capable of coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g. files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets, with files of more than 1 GB, showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences, shows extremely high memory efficiency, and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II, with open-source code and datasets, was established at http://lab.malab.cn/soft/halign.

  9. Local field distribution near corrugated interfaces: Green function formalism versus effective medium theory

    International Nuclear Information System (INIS)

    Choy, C.W.; Xiao, J.J.; Yu, K.W.

    2007-01-01

    The recent Green function formalism (GFF) has been used to study the local field distribution near a periodic interface separating two homogeneous media of different dielectric constants. In the GFF, the integral equations can be solved conveniently because of the existence of an analytic expression for the kernel (Greenian). However, due to a severe singularity in the Greenian, the formalism was formerly applied only to compute the electric fields away from the interface region. In this work, we have succeeded in extending the GFF to compute the electric field inside the interface region by taking advantage of a sum rule. To our surprise, the strengths of the electric fields are quite similar in both media across the interface, despite the large difference in dielectric constants. Moreover, we propose a simple effective medium approximation (EMA) to compute the electric field inside the interface region. We show that the EMA can indeed give an excellent description of the electric field, except near a surface plasmon resonance.

  10. Pragmatics for formal semantics

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2011-01-01

    This tech talk describes how to write and how to inter-derive formal semantics for sequential programming languages. The progress reported here is (1) concrete guidelines to write each formal semantics to alleviate their proof obligations, and (2) simple calculational tools to obtain a formal...

  11. Computational logic: its origins and applications.

    Science.gov (United States)

    Paulson, Lawrence C

    2018-02-01

    Computational logic is the use of computers to establish facts in a logical formalism. Originating in nineteenth century attempts to understand the nature of mathematical reasoning, the subject now comprises a wide variety of formalisms, techniques and technologies. One strand of work follows the 'logic for computable functions (LCF) approach' pioneered by Robin Milner, where proofs can be constructed interactively or with the help of users' code (which does not compromise correctness). A refinement of LCF, called Isabelle, retains these advantages while providing flexibility in the choice of logical formalism and much stronger automation. The main application of these techniques has been to prove the correctness of hardware and software systems, but increasingly researchers have been applying them to mathematics itself.

  12. I/O-Efficient Computation of Water Flow Across a Terrain

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Revsbæk, Morten; Zeh, Norbert

    2010-01-01

    We present an I/O-efficient algorithm that solves this problem using O(sort(X) log (X/M) + sort(N)) I/Os, where N is the number of terrain vertices, X is the number of pits of the terrain, sort(N) is the cost of sorting N data items, and M is the size of the computer's main memory. Our algorithm...
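
    Here sort(N) denotes the standard sorting bound in the I/O model (B is the disk block size):

        \[
        \mathrm{sort}(N) \;=\; \Theta\!\left(\frac{N}{B}\, \log_{M/B} \frac{N}{B}\right) \text{ I/Os}.
        \]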

  13. Efficient Computation of Casimir Interactions between Arbitrary 3D Objects

    International Nuclear Information System (INIS)

    Reid, M. T. Homer; Rodriguez, Alejandro W.; White, Jacob; Johnson, Steven G.

    2009-01-01

    We introduce an efficient technique for computing Casimir energies and forces between objects of arbitrarily complex 3D geometries. In contrast to other recently developed methods, our technique easily handles nonspheroidal, nonaxisymmetric objects, and objects with sharp corners. Using our new technique, we obtain the first predictions of Casimir interactions in a number of experimentally relevant geometries, including crossed cylinders and tetrahedral nanoparticles.

  14. A strategy for improved computational efficiency of the method of anchored distributions

    Science.gov (United States)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
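
    A sketch of the bundling idea under our reading of the abstract: group nearby parametrizations, run the forward model only for a few representatives of each bundle, and let all members share the resulting likelihood estimate. Clustering choice, tolerance, and names are illustrative:

        import numpy as np
        from scipy.cluster.vq import kmeans2

        def bundled_likelihood(samples, forward_model, measurement, n_bundles=10,
                               runs_per_bundle=5, tol=0.5, rng=np.random.default_rng(1)):
            """samples: (n, d) array of candidate model parametrizations."""
            _, labels = kmeans2(samples, n_bundles, minit="++", seed=1)
            lik = np.zeros(len(samples))
            for b in range(n_bundles):
                members = np.flatnonzero(labels == b)
                if members.size == 0:
                    continue
                reps = rng.choice(members, size=min(runs_per_bundle, members.size),
                                  replace=False)
                hits = [abs(forward_model(samples[i]) - measurement) < tol for i in reps]
                lik[members] = np.mean(hits)  # the whole bundle shares one estimate
            return lik

        theta = np.random.default_rng(0).normal(size=(200, 3))
        print(bundled_likelihood(theta, lambda s: s.sum(), measurement=0.0)[:5])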

  15. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    International Nuclear Information System (INIS)

    Lu Liuyan; Lantz, Steven R.; Ren Zhuyin; Pope, Stephen B.

    2009-01-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel

  16. Industrial use of formal methods formal verification

    CERN Document Server

    Boulanger, Jean-Louis

    2012-01-01

    At present, the literature offers students and researchers only very general books on formal techniques. The purpose of this book is to present, in a single volume, a return of experience on the use of formal techniques (such as proof and model-checking) on industrial examples in the transportation domain. This book is based on the experience of people who are directly involved in the realization and the evaluation of safety-critical software-based systems. The involvement of the industrialists raises the problems of confidentiality which could appear and so allow

  17. Noncommutativity and Duality through the Symplectic Embedding Formalism

    Directory of Open Access Journals (Sweden)

    Everton M.C. Abreu

    2010-07-01

    This work is devoted to reviewing the gauge embedding of both commutative and noncommutative (NC) theories using the symplectic formalism framework. To sum up the main features of the method: during the process of embedding, the infinitesimal gauge generators of the gauge-embedded theory are easily and directly chosen. Among other advantages, this enables greater control over the final Lagrangian and sheds some light on the so-called "arbitrariness problem". This alternative embedding formalism also presents a way to obtain a set of dynamically dual equivalent embedded Lagrangian densities, which is obtained after a finite number of steps in the iterative symplectic process, in contrast to the result proposed using the BFFT formalism. On the other hand, we will see precisely that the symplectic embedding formalism can be seen as an alternative and efficient procedure to the standard introduction of the Moyal product in order to produce a NC theory in a natural way. In order to construct a pedagogical explanation of the method for the nonspecialist, we exemplify the formalism by showing that the massive NC U(1) theory is embedded in a gauge theory using this alternative systematic path based on the symplectic framework. Further, as other applications of the method, we describe exactly how to obtain a Lagrangian description for the NC version of some systems, reproducing well-known theories. To name some of them, we use the procedure on the Proca model, the irrotational fluid model, and the noncommutative self-dual model in order to obtain dual equivalent actions for these theories. To illustrate the process of introducing noncommutativity, we use the chiral oscillator and nondegenerate mechanics.
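
    For the nonspecialist, the Moyal product that the symplectic embedding bypasses is, in its standard form,

        \[
        (f \star g)(x) \;=\; f(x)\, e^{\frac{i}{2}\,\overleftarrow{\partial}_{\mu}\, \theta^{\mu\nu}\, \overrightarrow{\partial}_{\nu}}\, g(x)
        \;=\; f g \;+\; \frac{i}{2}\,\theta^{\mu\nu}\, \partial_{\mu} f\, \partial_{\nu} g \;+\; \mathcal{O}(\theta^{2}) ,
        \]

    so that noncommutativity enters through the constant antisymmetric matrix θ^{μν}.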

  18. An efficient algorithm to compute subsets of points in ℤ^n

    OpenAIRE

    Pacheco Martínez, Ana María; Real Jurado, Pedro

    2012-01-01

    In this paper we present an algorithm that is more efficient than the one in [8] for computing subsets of points that are non-congruent under isometries. This algorithm can be used to reconstruct an object from its digital image. Both algorithms are compared, highlighting the improvements obtained in terms of CPU time.

  19. Formal Verification ...

    Indian Academy of Sciences (India)

    by testing of the components and successful testing leads to the software being ... Formal verification is based on formal methods which are mathematically based ..... scenario under which a similar error could occur. There are various other ...

  20. Formalized 2003 European Guidelines on Cardiovascular Disease Prevention in Clinical Practice

    Czech Academy of Sciences Publication Activity Database

    Peleška, Jan; Anger, Z.; Buchtela, David; Tomečková, Marie; Veselý, Arnošt

    2004-01-01

    Roč. 25, - (2004), s. 444 ISSN 0195-668X. [ESC Congress 2004. 28.08.2004-01.09.2004, Munich] Institutional research plan: CEZ:AV0Z1030915 Keywords : formalized European guidelines on CVD prevention * computer GLIF model * decision algorithm Subject RIV: BD - Theory of Information

  1. Continuous-Variable Quantum Computation of Oracle Decision Problems

    Science.gov (United States)

    Adcock, Mark R. A.

    ... the relative performances of different choices of the encoding bases. We extend our formalism to include quantum algorithms in the continuously parameterized yet finite-dimensional Hilbert space of a coherent spin system. We show that the highest-squeezed spin state possible can be approximated by a superposition of two states thus transcending the usual model of using a single basis state as algorithm input. As a particular example, we show that the close Hadamard oracle-decision problem, which is related to the Hadamard codewords of digital communications theory, can be solved quantitatively more efficiently using this computational model than by any known classical algorithm.

  2. The rational parts of one-loop QCD amplitudes I: The general formalism

    International Nuclear Information System (INIS)

    Xiao Zhiguang; Yang Gang; Zhu Chuanjie

    2006-01-01

    A general formalism for computing only the rational parts of one-loop QCD amplitudes is developed. Starting from the Feynman integral representation of the one-loop amplitude, we use tensor reduction and recursive relations to compute the rational parts directly. Explicit formulas for the rational parts are given for all bubble and triangle integrals. Formulas are also given for box integrals up to two-mass-hard boxes which are the needed ingredients to compute up to 6-gluon QCD amplitudes. We use this method to compute explicitly the rational parts of the 5- and 6-gluon QCD amplitudes in two accompanying papers

  3. Quantum Computing and the Limits of the Efficiently Computable

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I'll discuss how computational complexity---the study of what can and can't be feasibly computed---has been interacting with physics in interesting and unexpected ways. I'll first give a crash course about computer science's P vs. NP problem, as well as about the capabilities and limits of quantum computers. I'll then touch on speculative models of computation that would go even beyond quantum computers, using (for example) hypothetical nonlinearities in the Schrödinger equation. Finally, I'll discuss BosonSampling---a proposal for a simple form of quantum computing, which nevertheless seems intractable to simulate using a classical computer---as well as the role of computational complexity in the black hole information puzzle.

  4. Programming Unconventional Computers: Dynamics, Development, Self-Reference

    Directory of Open Access Journals (Sweden)

    Susan Stepney

    2012-10-01

    Full Text Available Classical computing has well-established formalisms for specifying, refining, composing, proving, and otherwise reasoning about computations. These formalisms have matured over the past 70 years or so. Unconventional Computing includes the use of novel kinds of substrates–from black holes and quantum effects, through to chemicals, biomolecules, even slime moulds–to perform computations that do not conform to the classical model. Although many of these unconventional substrates can be coerced into performing classical computation, this is not how they “naturally” compute. Our ability to exploit unconventional computing is partly hampered by a lack of corresponding programming formalisms: we need models for building, composing, and reasoning about programs that execute in these substrates. What might, say, a slime mould programming language look like? Here I outline some of the issues and properties of these unconventional substrates that need to be addressed to find “natural” approaches to programming them. Important concepts include embodied real values, processes and dynamical systems, generative systems and their meta-dynamics, and embodied self-reference.

  5. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
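    The core decomposition behind the patent, computing 1-D FFTs along one dimension, redistributing, then computing 1-D FFTs along the next, can be checked serially. In the NumPy sketch below, a plain transpose stands in for the network "all-to-all" redistribution; the randomized ordering and the parallel machinery are the patent's contribution and are not reproduced here:

    ```python
    # Serial NumPy sketch: a 2-D FFT as two passes of 1-D FFTs separated by
    # a redistribution (here, a simple transpose in place of the all-to-all).
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

    step1 = np.fft.fft(a, axis=1)              # first set of 1-D FFTs
    redistributed = step1.T                    # stand-in for the all-to-all
    step2 = np.fft.fft(redistributed, axis=1)  # second set of 1-D FFTs

    # Up to the final transpose, this matches the direct 2-D FFT
    assert np.allclose(step2.T, np.fft.fft2(a))
    ```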

  6. Practice-Oriented Formal Methods to Support the Software Development of Industrial Control Systems

    CERN Document Server

    AUTHOR|(CDS)2088632; Blanco Viñuela, Enrique

    Formal specification and verification methods provide ways to describe requirements precisely and to check whether the requirements are satisfied by the design or the implementation. In other words, they can prevent development faults and therefore improve the quality of the developed systems. These methods are part of the state-of-the-practice in application domains with high criticality, such as avionics, railway or nuclear industry. The situation is different in the industrial control systems domain. As the criticality of the systems is much lower, formal methods are rarely used. The two main obstacles to using formal methods in systems with low- or medium-criticality are performance and usability. Overcoming these obstacles often needs deep knowledge and high effort. Model checking, one of the main formal verification techniques, is computationally difficult, therefore the analysis of non-trivial systems requires special considerations. Furthermore, the mainly academic tools implementing different model c...

  7. Efficient Skyline Computation in Structured Peer-to-Peer Systems

    DEFF Research Database (Denmark)

    Cui, Bin; Chen, Lijiang; Xu, Linhao

    2009-01-01

    An increasing number of large-scale applications exploit peer-to-peer network architecture to provide highly scalable and flexible services. Among these applications, data management in peer-to-peer systems is one of the interesting domains. In this paper, we investigate the multidimensional skyline computation problem on a structured peer-to-peer network. In order to achieve low communication cost and quick response time, we utilize the iMinMax(θ) method to transform high-dimensional data to one-dimensional values and distribute the data in a structured peer-to-peer network called BATON. Thereafter, we propose a progressive algorithm with an adaptive filter technique for efficient skyline computation in this environment. We further discuss some optimization techniques for the algorithm, and summarize the key principles of our algorithm into a query routing protocol with detailed analysis...
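    The dominance test at the heart of any skyline computation is simple to state. The sketch below is a minimal, centralized Pareto filter in Python, assuming smaller values are better in every dimension; the iMinMax transform, BATON routing and the progressive filtering optimizations of the paper are not shown:

    ```python
    # Minimal skyline (Pareto) filter: keep points not dominated by any other.
    # p dominates q if p <= q in every dimension and p < q in at least one.
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    def skyline(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    # Hypothetical (price, distance-to-beach) records
    hotels = [(50, 2.0), (80, 0.5), (85, 0.6), (60, 1.0), (90, 0.4)]
    print(skyline(hotels))  # (85, 0.6) is dominated by (80, 0.5) and dropped
    ```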

  8. Energy Efficiency Collaboratives

    Energy Technology Data Exchange (ETDEWEB)

    Li, Michael [US Department of Energy, Washington, DC (United States); Bryson, Joe [US Environmental Protection Agency, Washington, DC (United States)

    2015-09-01

    Collaboratives for energy efficiency have a long and successful history and are currently used, in some form, in more than half of the states. Historically, many state utility commissions have used some form of collaborative group process to resolve complex issues that emerge during a rate proceeding. Rather than debate the issues through the formality of a commission proceeding, disagreeing parties are sent to discuss issues in a less-formal setting and bring back resolutions to the commission. Energy efficiency collaboratives take this concept and apply it specifically to energy efficiency programs—often in anticipation of future issues as opposed to reacting to a present disagreement. Energy efficiency collaboratives can operate long term and can address the full suite of issues associated with designing, implementing, and improving energy efficiency programs. Collaboratives can be useful to gather stakeholder input on changing program budgets and program changes in response to performance or market shifts, as well as to provide continuity while regulators come and go, identify additional energy efficiency opportunities and innovations, assess the role of energy efficiency in new regulatory contexts, and draw on lessons learned and best practices from a diverse group. Details about specific collaboratives in the United States are in the appendix to this guide. Collectively, they demonstrate the value of collaborative stakeholder processes in producing successful energy efficiency programs.

  9. Gait analysis using computer vision for the early detection of elderly syndromes. A formal proposal

    OpenAIRE

    Nieto Hidalgo, Mario

    2016-01-01

    The main objective of this thesis is the development of a vision-based gait analysis system that makes it possible to classify pathological gait. This general objective is divided into three more specific sub-objectives: a formal definition of gait, the specification and implementation of a vision-based system for obtaining gait parameters, and the classification of pathological gait. For the first sub-objective, the formal definition of gait, our efforts consist ...

  10. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider the algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations to satisfy their own preferences, which are defined by the evolutionary computation algorithm designer, rather than by following a fixed algorithmic rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly so as to reliably achieve the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its foundations from this perspective. This paper is a first step towards achieving this objective by implementing a strategy equilibrium solution (such as the Nash equilibrium) in an evolutionary computation algorithm.

  11. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    Energy Technology Data Exchange (ETDEWEB)

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates the possible optimization algorithms. Often, a gradient-based approach is not possible, since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor, since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic, computation-time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of the high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient- or non-gradient-based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and ...
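    As a rough illustration of the general idea of correcting a cheap model with sparse expensive evaluations, the hypothetical 1-D sketch below builds a first-order additive correction to a low-fidelity function and optimizes the corrected surrogate in a trust region. The actual MFO algorithm uses direct search plus space mapping between design spaces, which is considerably more involved than this toy:

    ```python
    # Toy first-order-corrected multifidelity loop (illustrative only).
    # f_hi is the "expensive" model, f_lo the "cheap" biased one; both are
    # hypothetical functions chosen so the answer is known (x* = 1.3).
    from scipy.optimize import minimize_scalar

    f_hi = lambda x: (x - 1.3) ** 2 + 0.5   # high-fidelity model
    f_lo = lambda x: (x - 1.0) ** 2         # low-fidelity model

    x, h = 0.0, 1e-6
    for _ in range(5):
        # Additive correction so the surrogate matches f_hi to first order at x
        b = f_hi(x) - f_lo(x)
        db = ((f_hi(x + h) - f_lo(x + h)) - b) / h
        surrogate = lambda z, x0=x, b0=b, g=db: f_lo(z) + b0 + g * (z - x0)
        # Optimize the cheap surrogate within a simple trust region around x
        x = minimize_scalar(surrogate, bounds=(x - 1, x + 1), method='bounded').x
    print(round(x, 2))  # -> 1.3, the high-fidelity minimizer
    ```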

  12. Investigating the Multi-memetic Mind Evolutionary Computation Algorithm Efficiency

    Directory of Open Access Journals (Sweden)

    M. K. Sakharov

    2017-01-01

    Full Text Available In solving practically significant global optimization problems, the objective function is often high-dimensional, computationally expensive, and has a nontrivial landscape. Studies show that a single optimization method is frequently not enough to solve such problems efficiently: hybridization of several optimization methods is necessary. One of the most promising contemporary trends in this field is memetic algorithms (MA), which can be viewed as a combination of population-based search for a global optimum with procedures for local refinement of solutions (memes), provided by a synergy. Since there are relatively few theoretical studies concerning which MA configuration is advisable for black-box optimization problems, many researchers turn to adaptive algorithms, which select the most efficient local optimization methods for particular domains of the search space during the search. The article proposes a multi-memetic modification of the simple SMEC algorithm, using random hyper-heuristics. It presents the software implementation of the algorithm and the memes used (the Nelder-Mead method, the method of random hyper-sphere surface search, and the Hooke-Jeeves method), and reports a comparative study of the efficiency of the proposed algorithm depending on the set and number of memes. The study has been carried out using the multidimensional Rastrigin, Rosenbrock, and Zakharov test functions. Computational experiments have been carried out for all possible combinations of memes and for each meme individually. According to the results of the study, conducted with the multi-start method, combinations of memes that include the Hooke-Jeeves method were successful. These results reflect the rapid convergence of that method to a local optimum in comparison with the other memes, since all methods perform at most a fixed number of iterations. The analysis of the average number of iterations shows that using the most efficient sets of memes allows us to find the optimal ...
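    The random hyper-heuristic amounts to drawing, for each candidate solution, a meme at random from the pool before local refinement. The Python toy below illustrates this selection scheme on the Rastrigin function, using two scipy local searches as stand-in memes; the paper's pool (Nelder-Mead, random hyper-sphere surface search, Hooke-Jeeves) and the SMEC population dynamics are not reproduced:

    ```python
    # Illustrative multi-memetic refinement with a random hyper-heuristic.
    import random
    import numpy as np
    from scipy.optimize import minimize

    def rastrigin(x):
        x = np.asarray(x)
        return 10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x))

    # Stand-in meme pool: each meme is a budget-limited local search
    memes = [
        lambda f, x: minimize(f, x, method='Nelder-Mead',
                              options={'maxiter': 50}).x,
        lambda f, x: minimize(f, x, method='Powell',
                              options={'maxiter': 50}).x,
    ]

    rng = np.random.default_rng(1)
    population = [rng.uniform(-5, 5, size=2) for _ in range(20)]
    for _ in range(10):  # simplified loop: refine each candidate with a random meme
        population = [random.choice(memes)(rastrigin, x) for x in population]
    best = min(population, key=rastrigin)
    print(best, rastrigin(best))
    ```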

  13. Computer-aided System of Semantic Text Analysis of a Technical Specification

    OpenAIRE

    Zaboleeva-Zotova, Alla; Orlova, Yulia

    2008-01-01

    This work is devoted to the development of a computer-aided system for semantic text analysis of technical specifications. The purpose of the work is to increase the efficiency of software engineering through the automation of semantic text analysis of technical specifications. The paper proposes and investigates a model for analyzing the text of a technical project, together with an attribute grammar of a technical specification intended for the formalization of limited Ru...

  14. A Comparison of Participation Patterns in Selected Formal, Non-Formal, and Informal Online Learning Environments

    Science.gov (United States)

    Schwier, Richard A.; Seaton, J. X.

    2013-01-01

    Does learner participation vary depending on the learning context? Are there characteristic features of participation evident in formal, non-formal, and informal online learning environments? Six online learning environments were chosen as epitomes of formal, non-formal, and informal learning contexts and compared. Transcripts of online…

  15. Improving Learner Outcomes in Lifelong Education: Formal Pedagogies in Non-Formal Learning Contexts?

    Science.gov (United States)

    Zepke, Nick; Leach, Linda

    2006-01-01

    This article explores how far research findings about successful pedagogies in formal post-school education might be used in non-formal learning contexts--settings where learning may not lead to formal qualifications. It does this by examining a learner outcomes model adapted from a synthesis of research into retention. The article first…

  16. Towards a Formal Ontology of Information. Selected Ideas of K. Turek

    Directory of Open Access Journals (Sweden)

    Roman Krzanowski

    2016-12-01

    Full Text Available There are many ontologies of the world or of specific phenomena such as time, matter, space, and quantum mechanics. However, ontologies of information are rather rare. One of the reasons behind this is that information is most frequently associated with communication and computing, and not with ‘the furniture of the world’. But what would be the nature of an ontology of information? For it to be of significant import, it should be amenable to formalization in a logico-grammatical formalism. A candidate ontology satisfying such a requirement can be found in some of the ideas of K. Turek, presented in this paper. Turek outlines an ontology of information conceived of as a part of nature, and provides the ‘missing link’ to the Z axiomatic set theory, offering a proposal for developing a formal ontology of information both in its philosophical and logico-grammatical representations.

  17. Full-zone spectral envelope function formalism for the optimization of line and point tunnel field-effect transistors

    Energy Technology Data Exchange (ETDEWEB)

    Verreck, Devin, E-mail: devin.verreck@imec.be; Groeseneken, Guido [imec, Kapeldreef 75, 3001 Leuven (Belgium); Department of Electrical Engineering, KU Leuven, 3001 Leuven (Belgium); Verhulst, Anne S.; Mocuta, Anda; Collaert, Nadine; Thean, Aaron [imec, Kapeldreef 75, 3001 Leuven (Belgium); Van de Put, Maarten; Magnus, Wim [imec, Kapeldreef 75, 3001 Leuven (Belgium); Department of Physics, Universiteit Antwerpen, 2020 Antwerpen (Belgium); Sorée, Bart [imec, Kapeldreef 75, 3001 Leuven (Belgium); Department of Physics, Universiteit Antwerpen, 2020 Antwerpen (Belgium); Department of Electrical Engineering, KU Leuven, 3001 Leuven (Belgium)

    2015-10-07

    Efficient quantum mechanical simulation of tunnel field-effect transistors (TFETs) is indispensable to allow for an optimal configuration identification. We therefore present a full-zone 15-band quantum mechanical solver based on the envelope function formalism and employing a spectral method to reduce computational complexity and handle spurious solutions. We demonstrate the versatility of the solver by simulating a 40 nm wide In0.53Ga0.47As line TFET and comparing it to p-n-i-n configurations with various pocket and body thicknesses. We find that the line TFET performance is not degraded compared to semi-classical simulations. Furthermore, we show that a suitably optimized p-n-i-n TFET can obtain similar performance to the line TFET.

  18. A Cloud Computing-Enabled Spatio-Temporal Cyber-Physical Information Infrastructure for Efficient Soil Moisture Monitoring

    Directory of Open Access Journals (Sweden)

    Lianjie Zhou

    2016-06-01

    Full Text Available Comprehensive surface soil moisture (SM) monitoring is a vital task in precision agriculture applications. SM monitoring includes remote sensing imagery monitoring and in situ sensor-based observational monitoring. Cloud computing can increase computational efficiency enormously. A geographical web service was developed to assist in agronomic decision making, and this tool can be scaled to any location and crop. By integrating cloud computing and the web service-enabled information infrastructure, this study uses the cloud computing-enabled spatio-temporal cyber-physical infrastructure (CESCI) to provide an efficient solution for soil moisture monitoring in precision agriculture. On the server side of CESCI, diverse Open Geospatial Consortium web services work closely with each other. Hubei Province, located on the Jianghan Plain in central China, is selected as the remote sensing study area in the experiment. The Baoxie scientific experimental field in Wuhan City is selected as the in situ sensor study area. The results show that the proposed method enhances the efficiency of remote sensing imagery mapping and in situ soil moisture interpolation. In addition, the proposed method is compared to other existing precision agriculture infrastructures. In this comparison, the proposed infrastructure performs soil moisture mapping in Hubei Province in 1.4 min and near-real-time in situ soil moisture interpolation in an efficient manner. Moreover, an enhanced performance monitoring method can help to reduce costs in precision agriculture monitoring, as well as increasing agricultural productivity and farmers’ net income.

  19. An efficient and general numerical method to compute steady uniform vortices

    Science.gov (United States)

    Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.

    2011-07-01

    Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.

  20. Combining Formal, Non-Formal and Informal Learning for Workforce Skill Development

    Science.gov (United States)

    Misko, Josie

    2008-01-01

    This literature review, undertaken for Australian Industry Group, shows how multiple variations and combinations of formal, informal and non-formal learning, accompanied by various government incentives and organisational initiatives (including job redesign, cross-skilling, multi-skilling, diversified career pathways, action learning projects,…

  1. Graphics processor efficiency for realization of rapid tabular computations

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Us, S.A.; Shestakov, M.V.

    2016-01-01

    The capabilities of graphics processing units (GPUs) and central processing units (CPUs) have been investigated for the realization of fast-calculation algorithms that use tabulated functions. The realization of tabulated functions is exemplified for GPU/CPU-architecture-based processors. The operating efficiencies of the GPU and the CPU, employed for tabular calculations, are compared under different conditions of use. Recommendations are formulated for the use of graphics and central processors to speed up scientific and engineering computations through the use of tabulated functions.

  2. Efficient quantum algorithm for computing n-time correlation functions.

    Science.gov (United States)

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.

  3. A-VCI: A flexible method to efficiently compute vibrational spectra

    Science.gov (United States)

    Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2017-06-01

    The adaptive vibrational configuration interaction algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm-1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate computation to date; we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm-1 is the most accurate computation that exists today on such systems.

  4. Formalization of the engineering science discipline - knowledge engineering

    Science.gov (United States)

    Peng, Xiao

    Knowledge is the most precious ingredient facilitating aerospace engineering research and product development activities. Currently, the most common knowledge retention methods are paper-based documents, such as reports, books and journals. However, those media have innate weaknesses. For example, four generations of flying wing aircraft (Horten, Northrop XB-35/YB-49, Boeing BWB and many others) were mostly developed in isolation. The engineers who came later were not aware of the previous developments, because these projects were documented in a way that prevented the next generation of engineers from benefiting from the lessons learned. In this manner, inefficient knowledge retention methods have become a primary obstacle to knowledge transfer from experienced engineers to the next generation. In addition, the quality of knowledge itself is a vital criterion; thus, an accurate measure of the quality of 'knowledge' is required. Although qualitative knowledge evaluation criteria have been researched in other disciplines, such as the AAA criterion by Ernest Sosa stemming from the field of philosophy, a quantitative knowledge evaluation criterion needs to be developed which is capable of numerically determining the quality of knowledge for aerospace engineering research and product development activities. To provide engineers with a high-quality knowledge management tool, the engineering science discipline Knowledge Engineering has been formalized to systematically address knowledge retention issues. This research undertaking formalizes Knowledge Engineering as follows: 1. Categorize knowledge according to its formats and representations for the first time, which serves as the foundation for the subsequent knowledge management function development. 2. Develop an efficiency evaluation criterion for knowledge management by analyzing the characteristics of both knowledge and the parties involved in the knowledge management processes. 3. Propose and develop an ...

  5. Infinities in Quantum Field Theory and in Classical Computing: Renormalization Program

    Science.gov (United States)

    Manin, Yuri I.

    Introduction. The main observable quantities in Quantum Field Theory, correlation functions, are expressed by the celebrated Feynman path integrals. A mathematical definition of them involving a measure and actual integration is still lacking. Instead, it is replaced by a series of ad hoc but highly efficient and suggestive heuristic formulas such as perturbation formalism. The latter interprets such an integral as a formal series of finite-dimensional but divergent integrals, indexed by Feynman graphs, the list of which is determined by the Lagrangian of the theory. Renormalization is a prescription that allows one to systematically "subtract infinities" from these divergent terms producing an asymptotic series for quantum correlation functions. On the other hand, graphs treated as "flowcharts", also form a combinatorial skeleton of the abstract computation theory. Partial recursive functions that according to Church's thesis exhaust the universe of (semi)computable maps are generally not everywhere defined due to potentially infinite searches and loops. In this paper I argue that such infinities can be addressed in the same way as Feynman divergences. More details can be found in [9,10].

  6. Applications of a formal approach to decipher discrete genetic networks.

    Science.gov (United States)

    Corblin, Fabien; Fanchon, Eric; Trilling, Laurent

    2010-07-20

    A growing demand for tools to assist the building and analysis of biological networks exists in systems biology. We argue that the use of a formal approach is relevant and applicable to address questions raised by biologists about such networks. The behaviour of these systems being complex, it is essential to exploit efficiently every bit of experimental information. In our approach, both the evolution rules and the partial knowledge about the structure and the behaviour of the network are formalized using a common constraint-based language. In this article our formal and declarative approach is applied to three biological applications. The software environment that we developed allows to specifically address each application through a new class of biologically relevant queries. We show that we can describe easily and in a formal manner the partial knowledge about a genetic network. Moreover we show that this environment, based on a constraint algorithmic approach, offers a wide variety of functionalities, going beyond simple simulations, such as proof of consistency, model revision, prediction of properties, search for minimal models relatively to specified criteria. The formal approach proposed here deeply changes the way to proceed in the exploration of genetic and biochemical networks, first by avoiding the usual trial-and-error procedure, and second by placing the emphasis on sets of solutions, rather than a single solution arbitrarily chosen among many others. Last, the constraint approach promotes an integration of model and experimental data in a single framework.

  7. An efficient hysteresis modeling methodology and its implementation in field computation applications

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A., E-mail: adlyamr@gmail.com [Electrical Power and Machines Dept., Faculty of Engineering, Cairo University, Giza 12613 (Egypt); Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12613 (Egypt)

    2017-07-15

    Highlights: • An approach to simulate hysteresis while taking shape anisotropy into consideration. • Utilizing an ensemble of triangular sub-region hysteresis models in field computation. • A novel tool capable of carrying out field computation while keeping track of hysteresis losses. • The approach may be extended to 3D tetrahedral sub-volumes. - Abstract: Field computation in media exhibiting hysteresis is crucial to a variety of applications such as magnetic recording processes and accurate determination of core losses in power devices. Recently, Hopfield neural networks (HNN) have been successfully configured to construct scalar and vector hysteresis models. This paper presents an efficient hysteresis modeling methodology and its implementation in field computation applications. The methodology is based on the application of the integral equation approach to discretized triangular magnetic sub-regions. Within every triangular sub-region, hysteresis properties are realized using a 3-node HNN. Details of the approach and sample computation results are given in the paper.

  8. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    Science.gov (United States)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, traditional computer-generated holograms (CGHs) often require long computation times, without offering complex, photorealistic rendering. The backward ray-tracing technique is able to render photorealistic, high-quality images and noticeably reduces the computation time thanks to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method based on the ray-tracing technique is presented. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.

  9. Beyond formalism

    Science.gov (United States)

    Denning, Peter J.

    1991-01-01

    The ongoing debate over the role of formalism and formal specifications in software features many speakers with diverse positions. Yet, in the end, they share the conviction that the requirements of a software system can be unambiguously specified, that acceptable software is a product demonstrably meeting the specifications, and that the design process can be carried out with little interaction between designers and users once the specification has been agreed to. This conviction is part of a larger paradigm prevalent in American management thinking, which holds that organizations are systems that can be precisely specified and optimized. This paradigm, which traces historically to the works of Frederick Taylor in the early 1900s, is no longer sufficient for organizations and software systems today. In the domain of software, a new paradigm, called user-centered design, overcomes the limitations of pure formalism. Pioneered in Scandinavia, user-centered design is spreading through Europe and is beginning to make its way into the U.S.

  10. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    Science.gov (United States)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
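    The unscented transform itself is compact: sigma points of the current state distribution are propagated through the nonlinear map and re-averaged to approximate the transformed mean and covariance. Below is a minimal Python sketch of the standard formulation with a kappa scaling parameter; the valve model and EOL simulation of the paper are replaced by a placeholder function:

    ```python
    # Minimal unscented transform sketch (standard sigma-point formulation).
    import numpy as np

    def unscented_transform(mean, cov, f, kappa=1.0):
        n = mean.size
        L = np.linalg.cholesky((n + kappa) * cov)     # matrix square root
        sigma = np.vstack([mean, mean + L.T, mean - L.T])   # 2n+1 points
        w = np.full(2 * n + 1, 0.5 / (n + kappa))
        w[0] = kappa / (n + kappa)
        y = np.array([f(s) for s in sigma])           # propagate each point
        y_mean = w @ y
        y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
        return y_mean, y_cov

    m, P = np.array([1.0, 0.5]), np.diag([0.1, 0.2])
    f = lambda x: np.array([x[0] * x[1], np.sin(x[0])])  # placeholder nonlinear map
    print(unscented_transform(m, P, f))
    ```

    The appeal for prognostics is that only 2n+1 simulations are needed per prediction, instead of the large sample counts a Monte Carlo approach would require.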

  11. A flexible framework for secure and efficient program obfuscation.

    Energy Technology Data Exchange (ETDEWEB)

    Solis, John Hector

    2013-03-01

    In this paper, we present a modular framework for constructing a secure and efficient program obfuscation scheme. Our approach, inspired by the obfuscation-with-respect-to-oracle-machines model of [4], retains an interactive online protocol with an oracle, but relaxes the original computational and storage restrictions. We argue this is reasonable given the computational resources of modern personal devices. Furthermore, we relax the information-theoretic security requirement to computational security in order to utilize established cryptographic primitives. With this additional flexibility we are free to explore different cryptographic building blocks. Our approach combines authenticated encryption with private information retrieval to construct a secure program obfuscation framework. We give a formal specification of our framework, based on desired functionality and security properties, and provide an example instantiation. In particular, we implement AES in Galois/Counter Mode for authenticated encryption and the Gentry-Ramzan [13] constant-communication-rate private information retrieval scheme. We present our implementation results and show that non-trivially sized programs can be realized, but scalability is quickly limited by computational overhead. Finally, we include a discussion on security considerations when instantiating specific modules.

  12. Computationally Efficient and Noise Robust DOA and Pitch Estimation

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    Many natural signals, such as voiced speech and some musical instruments, are approximately periodic over short intervals. These signals are often described in mathematics by the sum of sinusoids (harmonics) with frequencies that are proportional to the fundamental frequency, or pitch. In sensor ... a joint DOA and pitch estimator. In white Gaussian noise, we derive even more computationally efficient solutions which are designed using the narrowband power spectrum of the harmonics. Numerical results reveal the performance of the estimators in colored noise compared with the Cramér-Rao lower ...
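    For context, the harmonic signal model referred to above is conventionally written as follows (standard notation, assumed here rather than quoted from the paper), with ω₀ the pitch, a_l the complex amplitude of the l-th harmonic, and v(t) additive noise:

    ```latex
    x(t) \;=\; \sum_{l=1}^{L} a_{l}\, e^{\,i\, l\, \omega_{0}\, t} \;+\; v(t)
    ```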

  13. Integrated formal operations plan

    Energy Technology Data Exchange (ETDEWEB)

    Cort, G.; Dearholt, W.; Donahue, S.; Frank, J.; Perkins, B.; Tyler, R.; Wrye, J.

    1994-01-05

    The concept of formal operations (that is, a collection of business practices to assure effective, accountable operations) has vexed the Laboratory for many years. To date most attempts at developing such programs have been based upon rigid, compliance-based interpretations of a veritable mountain of Department of Energy (DOE) orders, directives, notices, and standards. These DOE dictates seldom take the broad view but focus on highly specialized programs isolated from the overall context of formal operations. The result is a confusing array of specific, and often contradictory, requirements that produce a patchwork of overlapping niche programs. This unnecessary duplication wastes precious resources, dramatically increases the complexity of our work processes, and communicates a sense of confusion to our customers and regulators. Coupled with the artificial divisions that have historically existed among the Laboratory's formal operations organizations (quality assurance, configuration management, records management, training, etc.), this approach has produced layers of increasingly vague and complex formal operations plans, each of which interprets its parent and adds additional requirements of its own. Organizational gridlock ensues whenever an activity attempts to implement these bureaucratic monstrosities. The purpose of the integrated formal operations plan presented here is to establish a set of requirements that must be met by an integrated formal operations program, to assign responsibilities for implementation and operation of the program, and to specify criteria against which the performance of the program will be measured. The accountable line manager specifies the items, processes, and information (the controlled elements) to which the specified formal operations program applies. The formal operations program is implemented using a graded approach based on the level of importance of the various controlled elements and the scope of the activities in which they are involved.

  14. PVT: an efficient computational procedure to speed up next-generation sequence analysis.

    Science.gov (United States)

    Maji, Ranjan Kumar; Sarkar, Arijita; Khatua, Sunirmal; Dasgupta, Subhasis; Ghosh, Zhumur

    2014-06-04

    High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data, which poses a major challenge to scientists seeking an efficient, cost- and time-effective solution for analysing such data. Further, across the different types of NGS data, certain common challenging steps are involved in the analysis. Spliced alignment is one such fundamental step in NGS data analysis which is extremely computationally intensive as well as time consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we introduce PVT (Pipelined Version of TopHat), where we take a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. Thus we address the discrepancies in TopHat so as to analyse large NGS data efficiently. We analysed the SRA datasets (SRX026839 and SRX026838) consisting of single-end reads and the SRA dataset SRR1027730 consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during spliced alignment and breaks the job into a pipeline of multiple stages (each comprising of different step(s)) to improve its resource utilization, thus reducing the execution time. PVT provides an improvement over TopHat for spliced alignment of NGS data analysis. PVT thus reduced the execution time to ~23% for the single-end read dataset. Further, PVT designed for paired-end reads showed an ...

  15. Computer Learning Through Piaget's Eyes.

    Science.gov (United States)

    Huber, Leonard N.

    1985-01-01

    Discusses Piaget's pre-operational, concrete operational, and formal operational stages and shows how this information sheds light on how children approach computers and computing, particularly with the LOGO programming language. (JN)

  16. Bridging the gap between formal and experience-based knowledge for context-aware laparoscopy.

    Science.gov (United States)

    Katić, Darko; Schuck, Jürgen; Wekerle, Anna-Laura; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie

    2016-06-01

    Computer assistance is increasingly common in surgery. However, the amount of information is bound to overload the processing abilities of surgeons. We propose methods to recognize the current phase of a surgery for context-aware information filtering. The purpose is to select the most suitable subset of information for surgical situations which require special assistance. We combine formal knowledge, represented by an ontology, and experience-based knowledge, represented by training samples, to recognize phases. For this purpose, we have developed two different methods. Firstly, we use formal knowledge about possible phase transitions to create a composition of random forests. Secondly, we propose a method based on cultural optimization to infer formal rules from experience to recognize phases. The proposed methods are compared with a purely formal knowledge-based approach using rules and a purely experience-based one using regular random forests. The comparative evaluation on laparoscopic pancreas resections and adrenalectomies employs a consistent set of quality criteria on clean and noisy input. The rule-based approaches proved best with noise-free data. The random forest-based ones were more robust in the presence of noise. Formal and experience-based knowledge can be successfully combined for robust phase recognition.

  17. Formal Analysis of Soft Errors using Theorem Proving

    Directory of Open Access Journals (Sweden)

    Sofiène Tahar

    2013-07-01

    Full Text Available Modeling and analysis of soft errors in electronic circuits has traditionally been done using computer simulations. Computer simulations cannot guarantee the correctness of the analysis because they utilize approximate real-number representations and pseudo-random numbers, and thus are not well suited for analyzing safety-critical applications. In this paper, we present a higher-order logic theorem proving based method for modeling and analysis of soft errors in electronic circuits. Our developed infrastructure includes formalized continuous random variable pairs, their Cumulative Distribution Function (CDF) properties, and independent standard uniform and Gaussian random variables. We illustrate the usefulness of our approach by modeling and analyzing soft errors in commonly used dynamic random access memory sense amplifier circuits.

  18. Evaluation of the efficiency of computer-aided spectra search systems based on information theory

    International Nuclear Information System (INIS)

    Schaarschmidt, K.

    1979-01-01

    Application of information theory allows objective evaluation of the efficiency of computer-aided spectra search systems. For this purpose, a significant number of search processes must be analyzed. The amount of information gained by computer application is considered as the difference between the entropy of the data bank and a conditional entropy depending on the proportion of unsuccessful search processes and ballast. The influence of the following factors can be estimated: the volume, structure, and quality of the stored spectra collection; the efficiency of the encoding instruction and the comparison algorithm; and subjective errors involved in the encoding of spectra. The relations derived are applied to two published storage and retrieval systems for infrared spectra. (Auth.)
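    In the spirit of the abstract, the information gained per search can be written as the difference between the entropy of the data bank and a conditional entropy that accounts for unsuccessful searches and ballast (the notation below is assumed for illustration, not quoted from the paper):

    ```latex
    \Delta I \;=\; H_{\text{bank}} \;-\; H_{\text{cond}},
    \qquad
    H_{\text{cond}} = H\!\left(\text{result} \mid \text{query}\right)
    ```

    A system that never fails and returns no ballast has H_cond = 0 and delivers the full entropy of the data bank; failures and ballast raise H_cond and reduce the gain.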

  19. A computationally efficient 3D finite-volume scheme for violent liquid–gas sloshing

    CSIR Research Space (South Africa)

    Oxtoby, Oliver F

    2015-10-01

    Full Text Available We describe a semi-implicit volume-of-fluid free-surface-modelling methodology for flow problems involving violent free-surface motion. For efficient computation, a hybrid-unstructured edge-based vertex-centred finite volume discretisation...

  20. Computability, complexity, logic

    CERN Document Server

    Börger, Egon

    1989-01-01

    The theme of this book is formed by a pair of concepts: the concept of formal language as carrier of the precise expression of meaning, facts and problems, and the concept of algorithm or calculus, i.e. a formally operating procedure for the solution of precisely described questions and problems. The book is a unified introduction to the modern theory of these concepts, to the way in which they developed first in mathematical logic and computability theory and later in automata theory, and to the theory of formal languages and complexity theory. Apart from considering the fundamental themes an

  1. Computational Properties of the Hippocampus Increase the Efficiency of Goal-Directed Foraging through Hierarchical Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Eric Chalmers

    2016-12-01

    Full Text Available The mammalian brain is thought to use a version of Model-based Reinforcement Learning (MBRL to guide goal-directed behavior, wherein animals consider goals and make plans to acquire desired outcomes. However, conventional MBRL algorithms do not fully explain animals’ ability to rapidly adapt to environmental changes, or learn multiple complex tasks. They also require extensive computation, suggesting that goal-directed behavior is cognitively expensive. We propose here that key features of processing in the hippocampus support a flexible MBRL mechanism for spatial navigation that is computationally efficient and can adapt quickly to change. We investigate this idea by implementing a computational MBRL framework that incorporates features inspired by computational properties of the hippocampus: a hierarchical representation of space, forward sweeps through future spatial trajectories, and context-driven remapping of place cells. We find that a hierarchical abstraction of space greatly reduces the computational load (mental effort) required for adaptation to changing environmental conditions, and allows efficient scaling to large problems. It also allows abstract knowledge gained at high levels to guide adaptation to new obstacles. Moreover, a context-driven remapping mechanism allows learning and memory of multiple tasks. Simulating dorsal or ventral hippocampal lesions in our computational framework qualitatively reproduces behavioral deficits observed in rodents with analogous lesions. The framework may thus embody key features of how the brain organizes model-based RL to efficiently solve navigation and other difficult tasks.

  2. Integrating semi-formal and formal requirements

    NARCIS (Netherlands)

    Wieringa, Roelf J.; Olivé, Antoni; Dubois, Eric; Pastor, Joan Antoni; Huyts, Sander

    1997-01-01

    In this paper, we report on the integration of informal, semiformal and formal requirements specification techniques. We present a framework for requirements specification called TRADE, within which several well-known semiformal specification techniques are placed. TRADE is based on an analysis of

  3. Artificial grammar learning meets formal language theory: an overview

    Science.gov (United States)

    Fitch, W. Tecumseh; Friederici, Angela D.

    2012-01-01

    Formal language theory (FLT), part of the broader mathematical theory of computation, provides a systematic terminology and set of conventions for describing rules and the structures they generate, along with a rich body of discoveries and theorems concerning generative rule systems. Despite its name, FLT is not limited to human language, but is equally applicable to computer programs, music, visual patterns, animal vocalizations, RNA structure and even dance. In the last decade, this theory has been profitably used to frame hypotheses and to design brain imaging and animal-learning experiments, mostly using the ‘artificial grammar-learning’ paradigm. We offer a brief, non-technical introduction to FLT and then a more detailed analysis of empirical research based on this theory. We suggest that progress has been hampered by a pervasive conflation of distinct issues, including hierarchy, dependency, complexity and recursion. We offer clarifications of several relevant hypotheses and the experimental designs necessary to test them. We finally review the recent brain imaging literature, using formal languages, identifying areas of convergence and outstanding debates. We conclude that FLT has much to offer scientists who are interested in rigorous empirical investigations of human cognition from a neuroscientific and comparative perspective. PMID:22688631

  4. Computational Efficient Upscaling Methodology for Predicting Thermal Conductivity of Nuclear Waste forms

    International Nuclear Information System (INIS)

    Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2011-01-01

    This study evaluated different upscaling methods for predicting thermal conductivity in loaded nuclear waste form, a heterogeneous material system, and compared their efficiency and accuracy. Thermal conductivity of the loaded waste form is an important property for the waste form Integrated Performance and Safety Code (IPSC). The effective thermal conductivity, obtained from microstructure information and the local thermal conductivity of the different components, is critical for predicting the life and performance of waste forms during storage: the heat generated during storage is directly related to thermal conductivity, which in turn determines mechanical deformation behavior, corrosion resistance, and aging performance. Several methods, including the Taylor model, Sachs model, self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, prediction results from the finite element method (FEM) were used as the reference for determining the accuracy of the different upscaling models. Micrographs from different loadings of nuclear waste were used in the prediction of thermal conductivity. Prediction results demonstrated that, in terms of efficiency, the bounding models (the Taylor and Sachs models) are better than the self-consistent model, the statistical upscaling method, and FEM. Balancing computational resources and accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste forms.
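    In their simplest reading, the Taylor- and Sachs-type estimates reduce to volume-weighted arithmetic and harmonic means of the phase conductivities (uniform-field vs. uniform-flux assumptions); this interpretation and the two-phase values below are illustrative assumptions, not data from the study:

    ```python
    # Arithmetic (Taylor-like) and harmonic (Sachs-like) estimates of the
    # effective thermal conductivity of a two-phase waste form.
    import numpy as np

    k = np.array([1.2, 20.0])   # W/(m K): e.g. glass matrix, ceramic inclusions
    v = np.array([0.7, 0.3])    # volume fractions (must sum to 1)

    k_taylor = np.sum(v * k)            # uniform-field, upper-bound-style estimate
    k_sachs = 1.0 / np.sum(v / k)       # uniform-flux, lower-bound-style estimate
    print(k_taylor, k_sachs)            # the effective k lies between these
    ```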

  5. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
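
    The row-column decomposition described above can be emulated serially in a few lines: transform along one dimension, redistribute (a plain transpose below stands in for the all-to-all exchange among nodes), then transform along the other dimension. A numpy sketch of the principle, not the patented implementation:

        import numpy as np

        a = np.random.rand(8, 8) + 1j * np.random.rand(8, 8)

        # Pass 1: 1-D FFTs along the first dimension (each node transforms the
        # rows it owns).
        step1 = np.fft.fft(a, axis=0)

        # "All-to-all" redistribution: distributed nodes exchange blocks with
        # every other node; serially this is just a transpose.
        redistributed = step1.T

        # Pass 2: 1-D FFTs along what is now the first dimension.
        step2 = np.fft.fft(redistributed, axis=0)

        # Undoing the transpose recovers the direct 2-D FFT.
        assert np.allclose(step2.T, np.fft.fft2(a))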

  6. Development of a computationally efficient algorithm for attitude estimation of a remote sensing satellite

    Science.gov (United States)

    Labibian, Amir; Bahrami, Amir Hossein; Haghshenas, Javad

    2017-09-01

    This paper presents a computationally efficient algorithm for attitude estimation of a remote sensing satellite. In this study, a gyro, a magnetometer, a sun sensor and a star tracker are used in an Extended Kalman Filter (EKF) structure for the purpose of Attitude Determination (AD). However, utilizing all of the measurement data simultaneously in the EKF structure increases the computational burden. Specifically, assuming n observation vectors, the inverse of a 3n×3n matrix is required for gain calculation. In order to solve this problem, an efficient version of the EKF, namely Murrell's version, is employed. This method utilizes the measurements separately at each sampling time for gain computation. Therefore, the inverse of a 3n×3n matrix is replaced by the inverse of a 3×3 matrix for each measurement vector. Moreover, gyro drift over time can reduce the pointing accuracy. Therefore, a calibration algorithm is utilized for estimation of the main gyro parameters.
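
    The saving can be checked on a linear measurement update: when the measurement-noise covariance is block diagonal, processing each 3-vector observation separately (as in Murrell's version) yields the same posterior as the stacked update while inverting only 3×3 matrices. A numpy sketch with illustrative matrices, not the paper's satellite model:

        import numpy as np

        def sequential_update(x, P, obs):
            """Process (H, R, z) measurement blocks one at a time."""
            for H, R, z in obs:
                S = H @ P @ H.T + R                 # 3x3 innovation covariance
                K = P @ H.T @ np.linalg.inv(S)      # gain: only a 3x3 inverse
                x = x + K @ (z - H @ x)
                P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        rng = np.random.default_rng(0)
        n = 6                                       # e.g. attitude error + gyro bias
        x0, P0 = np.zeros(n), np.eye(n)
        obs = [(rng.standard_normal((3, n)), 0.01 * np.eye(3),
                rng.standard_normal(3)) for _ in range(3)]

        # Stacked (batch) update for comparison: one 9x9 inverse.
        Hs = np.vstack([H for H, _, _ in obs])
        zs = np.concatenate([z for _, _, z in obs])
        Ks = P0 @ Hs.T @ np.linalg.inv(Hs @ P0 @ Hs.T + 0.01 * np.eye(9))
        x_b = x0 + Ks @ (zs - Hs @ x0)
        P_b = (np.eye(n) - Ks @ Hs) @ P0

        x_s, P_s = sequential_update(x0, P0, obs)
        assert np.allclose(x_s, x_b) and np.allclose(P_s, P_b)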

  7. Formal and Informal Continuing Education Activities and Athletic Training Professional Practice

    Science.gov (United States)

    Armstrong, Kirk J.; Weidner, Thomas G.

    2010-01-01

    Abstract Context: Continuing education (CE) is intended to promote professional growth and, ultimately, to enhance professional practice. Objective: To determine certified athletic trainers' participation in formal (ie, approved for CE credit) and informal (ie, not approved for CE credit) CE activities and the perceived effect these activities have on professional practice with regard to improving knowledge, clinical skills and abilities, attitudes toward patient care, and patient care itself. Design: Cross-sectional study. Setting: Athletic training practice settings. Patients or Other Participants: Of a geographic, stratified random sample of 1000 athletic trainers, 427 (42.7%) completed the survey. Main Outcome Measure(s): The Survey of Formal and Informal Athletic Training Continuing Education Activities was developed and administered electronically. The survey consisted of demographic characteristics and Likert-scale items regarding CE participation and perceived effect of CE on professional practice. Internal consistency of survey items was determined using the Cronbach α (α  =  0.945). Descriptive statistics were computed for all items. An analysis of variance and dependent t tests were calculated to determine differences among respondents' demographic characteristics and their participation in, and perceived effect of, CE activities. The α level was set at .05. Results: Respondents completed more informal CE activities than formal CE activities. Participation in informal CE activities included reading athletic training journals (75.4%), whereas formal CE activities included attending a Board of Certification–approved workshop, seminar, or professional conference not conducted by the National Athletic Trainers' Association or affiliates or committees (75.6%). Informal CE activities were perceived to improve clinical skills or abilities and attitudes toward patient care. Formal CE activities were perceived to enhance knowledge. Conclusions: More

  8. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain motivated situation control system for complex technical system behavior. Conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory, in the form of a situational agent set. The model and method of computer-aided instruction represent a formalization based on the non-distinct theories of physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on the different parameters affecting education, such as the reinforcement value and the time between the stimulus, the action and the reinforcement. The change of the contextual link between situational elements during use is formalized. Examples and results are given of computer-instruction experiments with the robot device “LEGO MINDSTORMS NXT”, equipped with ultrasonic distance, touch and light sensors.

  9. Efficient universal computing architectures for decoding neural activity.

    Directory of Open Access Journals (Sweden)

    Benjamin I Rapoport

    Full Text Available The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain-machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain-machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than [Formula: see text]. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion
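
    The counting-only flavor of the decoder can be conveyed by a toy unit: a counter that increments on input spikes, leaks periodically, and emits an output event on reaching a threshold, with no multiplications anywhere. The parameters here are hypothetical, not the paper's:

        def integrate_and_fire(spikes, threshold=4, leak_period=10):
            """Counting-only neuron: increment on input spikes, leak one count
            every leak_period ticks, fire and reset at threshold."""
            count, out = 0, []
            for t, s in enumerate(spikes):
                count += s                      # counting, never multiplying
                if t % leak_period == leak_period - 1 and count > 0:
                    count -= 1                  # periodic leak
                if count >= threshold:
                    out.append(1)               # decoded output event
                    count = 0
                else:
                    out.append(0)
            return out

        # A burst of input spikes produces a decoded output event.
        print(integrate_and_fire([1, 1, 0, 1, 1, 0, 0, 1, 1, 1]))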

  10. Formal methods and tools for the development of distributed and real time systems : Esprit Project 3096 (SPEC)

    NARCIS (Netherlands)

    Roever, de W.P.; Barringer, H.; Courcoubetis, C.; Gabbay, D.M.; Gerth, R.T.; Jonsson, B.; Pnueli, A.; Reed, M.; Sifakis, J.; Vytopil, J.; Wolper, P.

    1990-01-01

    The Basic Research Action No. 3096, Formal Methods and Tools for the Development of Distributed and Real Time Systems, is funded in the Area of Computer Science, under the ESPRIT Programme of the European Community. The coordinating institution is the Department of Computing Science, Eindhoven

  11. Formal System Verification - Extension 2

    Science.gov (United States)

    2012-08-08

    The vision of truly trustworthy systems has been to provide a formally verified microkernel basis. We have previously developed the seL4 microkernel, together with a formal proof (in the theorem prover Isabelle/HOL) of its functional correctness [6]. This means that all the behaviours of the seL4 C source code are included in the high-level, formal specification of the kernel. This work enabled us to provide further formal guarantees about seL4, in

  12. Psychologist in non-formal education

    OpenAIRE

    Pavićević Miljana S.

    2011-01-01

    Learning is not limited to school time. It starts at birth and continues throughout the entire life. Non-formal and informal education are equally important as formal education. Any kind of learning outside the traditional school can be called informal. However, it is not easy to define non-formal education because it is described differently, for example as an education movement, process, or system… Projects and programs implemented under the name of non-formal education are of...

  13. Dist-Orc: A Rewriting-based Distributed Implementation of Orc with Formal Analysis

    Directory of Open Access Journals (Sweden)

    José Meseguer

    2010-09-01

    Full Text Available Orc is a theory of orchestration of services that allows structured programming of distributed and timed computations. Several formal semantics have been proposed for Orc, including a rewriting logic semantics developed by the authors. Orc also has a fully fledged implementation in Java with functional programming features. However, as with descriptions of most distributed languages, there exists a fairly substantial gap between Orc's formal semantics and its implementation, in that: (i) programs in Orc are not easily deployable in a distributed implementation just by using Orc's formal semantics, and (ii) they are not readily formally analyzable at the level of a distributed Orc implementation. In this work, we overcome problems (i) and (ii) for Orc. Specifically, we describe an implementation technique based on rewriting logic and Maude that narrows this gap considerably. The enabling feature of this technique is Maude's support for external objects through TCP sockets. We describe how sockets are used to implement Orc site calls and returns, and to provide real-time timing information to Orc expressions and sites. We then show how Orc programs in the resulting distributed implementation can be formally analyzed at a reasonable level of abstraction by defining an abstract model of time and the socket communication infrastructure, and discuss the assumptions under which the analysis can be deemed correct. Finally, the distributed implementation and the formal analysis methodology are illustrated with a case study.

  14. Hierarchical Decompositions for the Computation of High-Dimensional Multivariate Normal Probabilities

    KAUST Repository

    Genton, Marc G.

    2017-09-07

    We present a hierarchical decomposition scheme for computing the n-dimensional integral of multivariate normal probabilities that appear frequently in statistics. The scheme exploits the fact that the formally dense covariance matrix can be approximated by a matrix with a hierarchical low rank structure. It allows the reduction of the computational complexity per Monte Carlo sample from O(n²) to O(mn + kn log(n/m)), where k is the numerical rank of off-diagonal matrix blocks and m is the size of small diagonal blocks in the matrix that are not well-approximated by low rank factorizations and treated as dense submatrices. This hierarchical decomposition leads to substantial efficiencies in multivariate normal probability computations and allows integrations in thousands of dimensions to be practical on modern workstations.

  15. Hierarchical Decompositions for the Computation of High-Dimensional Multivariate Normal Probabilities

    KAUST Repository

    Genton, Marc G.; Keyes, David E.; Turkiyyah, George

    2017-01-01

    We present a hierarchical decomposition scheme for computing the n-dimensional integral of multivariate normal probabilities that appear frequently in statistics. The scheme exploits the fact that the formally dense covariance matrix can be approximated by a matrix with a hierarchical low rank structure. It allows the reduction of the computational complexity per Monte Carlo sample from O(n²) to O(mn + kn log(n/m)), where k is the numerical rank of off-diagonal matrix blocks and m is the size of small diagonal blocks in the matrix that are not well-approximated by low rank factorizations and treated as dense submatrices. This hierarchical decomposition leads to substantial efficiencies in multivariate normal probability computations and allows integrations in thousands of dimensions to be practical on modern workstations.
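
    For context, the per-sample cost being reduced is the dense matrix work behind each Monte Carlo draw. A plain-Cholesky baseline sketch (the dimension and covariance are illustrative; the hierarchical low-rank factorization itself is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 25                                           # dimension of the integral
        cov = 0.5 * np.eye(n) + 0.5 * np.ones((n, n))    # equicorrelated covariance
        L = np.linalg.cholesky(cov)       # dense factor; applying it is O(n^2)/sample

        a, b = -3.0 * np.ones(n), 3.0 * np.ones(n)       # integration box [a, b]
        m = 100_000                                      # Monte Carlo samples
        x = rng.standard_normal((m, n)) @ L.T            # samples ~ N(0, cov)
        hits = np.all((x >= a) & (x <= b), axis=1)
        print("P(a <= X <= b) ~", hits.mean())

    Swapping the dense factor for the hierarchical low-rank representation is, roughly speaking, what brings the per-sample cost down to the O(mn + kn log(n/m)) quoted above.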

  16. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    Science.gov (United States)

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
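
    The interaction-entropy term can be evaluated directly from the time series of protein-ligand interaction energies sampled along the MD trajectory, as an exponential average of the fluctuation: -TΔS = (1/β) ln⟨exp(βΔE_int)⟩, with ΔE_int the deviation from the mean. A sketch with synthetic energies standing in for MD snapshots:

        import numpy as np

        kB = 0.0019872041                 # kcal/(mol K)
        T = 298.15
        beta = 1.0 / (kB * T)

        # Synthetic interaction-energy trajectory (kcal/mol); in practice these
        # are protein-ligand interaction energies from MD snapshots.
        rng = np.random.default_rng(2)
        E_int = -45.0 + 2.0 * rng.standard_normal(5000)

        dE = E_int - E_int.mean()         # fluctuation about the mean
        minus_TdS = np.log(np.mean(np.exp(beta * dE))) / beta
        print("-T*dS =", round(minus_TdS, 2), "kcal/mol")

    Because the average runs over the same snapshots used for the enthalpy, the entropic term indeed comes at no extra simulation cost, which is the point the abstract stresses.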

  17. Computationally efficient design of optimal output feedback strategies for controllable passive damping devices

    International Nuclear Information System (INIS)

    Kamalzare, Mahmoud; Johnson, Erik A; Wojtkiewicz, Steven F

    2014-01-01

    Designing control strategies for smart structures, such as those with semiactive devices, is complicated by the nonlinear nature of the feedback control, secondary clipping control and other additional requirements such as device saturation. The usual design approach resorts to large-scale simulation parameter studies that are computationally expensive. The authors have previously developed an approach for state-feedback semiactive clipped-optimal control design, based on a nonlinear Volterra integral equation that provides for the computationally efficient simulation of such systems. This paper expands the applicability of the approach by demonstrating that it can also be adapted to accommodate more realistic cases when, instead of full state feedback, only a limited set of noisy response measurements is available to the controller. This extension requires incorporating a Kalman filter (KF) estimator, which is linear, into the nominal model of the uncontrolled system. The efficacy of the approach is demonstrated by a numerical study of a 100-degree-of-freedom frame model, excited by a filtered Gaussian random excitation, with noisy acceleration sensor measurements to determine the semiactive control commands. The results show that the proposed method can improve computational efficiency by more than two orders of magnitude relative to a conventional solver, while retaining a comparable level of accuracy. Further, the proposed approach is shown to be similarly efficient for an extensive Monte Carlo simulation to evaluate the effects of sensor noise levels and KF tuning on the accuracy of the response. (paper)

  18. Topical Roots of Formal Dialectic

    NARCIS (Netherlands)

    Krabbe, Erik C. W.

    Formal dialectic has its roots in ancient dialectic. We can trace this influence in Charles Hamblin's book on fallacies, in which he introduced his first formal dialectical systems. Earlier, Paul Lorenzen proposed systems of dialogical logic, which were in fact formal dialectical systems avant la

  19. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    Science.gov (United States)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  20. Quantum Analog Computing

    Science.gov (United States)

    Zak, M.

    1998-01-01

    Quantum analog computing is based upon the similarity between the mathematical formalism of quantum mechanics and the phenomena to be computed. It exploits a dynamical convergence of several competing phenomena to an attractor which can represent an extremum of a function, an image, a solution to a system of ODEs, or a stochastic process.

  1. Does computer-aided surgical simulation improve efficiency in bimaxillary orthognathic surgery?

    Science.gov (United States)

    Schwartz, H C

    2014-05-01

    The purpose of this study was to compare the efficiency of bimaxillary orthognathic surgery using computer-aided surgical simulation (CASS), with cases planned using traditional methods. Total doctor time was used to measure efficiency. While costs vary widely in different localities and in different health schemes, time is a valuable and limited resource everywhere. For this reason, total doctor time is a more useful measure of efficiency than is cost. Even though we use CASS primarily for planning more complex cases at the present time, this study showed an average saving of 60min for each case. In the context of a department that performs 200 bimaxillary cases each year, this would represent a saving of 25 days of doctor time, if applied to every case. It is concluded that CASS offers great potential for improving efficiency when used in the planning of bimaxillary orthognathic surgery. It saves significant doctor time that can be applied to additional surgical work. Copyright © 2013 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  2. Lending Policies of Informal, Formal, and Semi-formal Lenders: Evidence from Vietnam

    NARCIS (Netherlands)

    Lensink, B.W.; Pham, T.T.T.

    2007-01-01

    This paper compares lending policies of formal, informal and semiformal lenders with respect to household lending in Vietnam. The analysis suggests that the probability of using formal or semiformal credit increases if borrowers provide collateral, a guarantor and/or borrow for business-related

  3. Products of composite operators in the exact renormalization group formalism

    Science.gov (United States)

    Pagani, C.; Sonoda, H.

    2018-02-01

    We discuss a general method of constructing the products of composite operators using the exact renormalization group formalism. Considering mainly the Wilson action at a generic fixed point of the renormalization group, we give an argument for the validity of short-distance expansions of operator products. We show how to compute the expansion coefficients by solving differential equations, and test our method with some simple examples.
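
    Schematically, the short-distance expansion whose validity is argued for takes the standard operator-product form (the notation here is generic, not necessarily the authors'):

        \[
          \mathcal{O}_A(x)\,\mathcal{O}_B(y) \;\sim\; \sum_{C} c_{AB}{}^{C}(x-y)\,\mathcal{O}_C(y),
          \qquad |x-y| \to 0,
        \]

    where the coefficients c_{AB}{}^{C} are the expansion coefficients that the paper computes by solving differential equations.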

  4. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a parallel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effect of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.

  5. Trust Models in Ubiquitous Computing

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro

    2008-01-01

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  6. Concepciones acerca de la maternidad en la educación formal y no formal

    Directory of Open Access Journals (Sweden)

    Alvarado Calderón, Kathia

    2005-06-01

    Full Text Available This article presents some results of the research carried out at the Instituto de Investigación en Educación (INIE) under the title "Construcción del concepto de maternidad en la educación formal y no formal". Using a qualitative research approach, we drew on thematic drawing, interviews and a focus group as data-collection techniques. In this way we could approach the conceptions of motherhood held by the participants from the formal and non-formal educational settings with which we worked. The article begins with a theoretical analysis of social conceptions of motherhood in Western societies, followed by a brief summary of the main findings, and concludes with a proposal of future lines of work for the deconstruction of the motherhood concept in formal and non-formal education contexts.

  7. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    International Nuclear Information System (INIS)

    Woodruff, S.B.

    1992-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider alternative representations, such as a neural net representation, that do not exhibit load-balancing problems.

  8. Formalized Informal Learning

    DEFF Research Database (Denmark)

    Levinsen, Karin Tweddell; Sørensen, Birgitte Holm

    2013-01-01

    are examined, and the relation between network society competences, learners’ informal learning strategies and ICT in formalized school settings over time is studied. The authors find that aspects of ICT like multimodality, intuitive interaction design and instant feedback invite an informal bricoleur approach. When integrated into certain designs for teaching and learning, this allows for Formalized Informal Learning, and support is found for network society competences building.

  9. Computing the non-Markovian coarse-grained interactions derived from the Mori-Zwanzig formalism in molecular systems: Application to polymer melts

    Science.gov (United States)

    Li, Zhen; Lee, Hee Sun; Darve, Eric; Karniadakis, George Em

    2017-01-01

    Memory effects are often introduced during coarse-graining of a complex dynamical system. In particular, a generalized Langevin equation (GLE) for the coarse-grained (CG) system arises in the context of Mori-Zwanzig formalism. Upon a pairwise decomposition, GLE can be reformulated into its pairwise version, i.e., non-Markovian dissipative particle dynamics (DPD). GLE models the dynamics of a single coarse particle, while DPD considers the dynamics of many interacting CG particles, with both CG systems governed by non-Markovian interactions. We compare two different methods for the practical implementation of the non-Markovian interactions in GLE and DPD systems. More specifically, a direct evaluation of the non-Markovian (NM) terms is performed in LE-NM and DPD-NM models, which requires the storage of historical information that significantly increases computational complexity. Alternatively, we use a few auxiliary variables in LE-AUX and DPD-AUX models to replace the non-Markovian dynamics with a Markovian dynamics in a higher dimensional space, leading to a much reduced memory footprint and computational cost. In our numerical benchmarks, the GLE and non-Markovian DPD models are constructed from molecular dynamics (MD) simulations of star-polymer melts. Results show that a Markovian dynamics with auxiliary variables successfully generates equivalent non-Markovian dynamics consistent with the reference MD system, while maintaining a tractable computational cost. Also, transient subdiffusion of the star-polymers observed in the MD system can be reproduced by the coarse-grained models. The non-interacting particle models, LE-NM/AUX, are computationally much cheaper than the interacting particle models, DPD-NM/AUX. However, the pairwise models with momentum conservation are more appropriate for correctly reproducing the long-time hydrodynamics characterised by an algebraic decay in the velocity autocorrelation function.
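
    For the simplest memory kernel, K(t) = (γ/τ) exp(-t/τ), the auxiliary-variable idea behind LE-AUX can be written down in a few lines: one extra Ornstein-Uhlenbeck variable carries both the memory force and the colored noise, so only the current state must be stored rather than the trajectory history. A minimal Euler-Maruyama sketch under exactly these assumptions (parameters illustrative, not fitted to the star-polymer melt):

        import numpy as np

        m, gamma, tau, kBT = 1.0, 1.0, 0.5, 1.0
        dt, nsteps = 1e-3, 200_000
        sigma = np.sqrt(2.0 * gamma * kBT) / tau    # OU noise strength (FDT)

        rng = np.random.default_rng(3)
        v, z = 0.0, 0.0                  # velocity and auxiliary memory force
        vs = np.empty(nsteps)
        for i in range(nsteps):
            # m dv/dt = z ;   dz = (-z/tau - (gamma/tau) v) dt + sigma dW
            v += (z / m) * dt
            z += (-z / tau - (gamma / tau) * v) * dt \
                 + sigma * np.sqrt(dt) * rng.standard_normal()
            vs[i] = v

        # Equipartition check: <v^2> should approach kBT/m.
        print("<v^2> =", vs[nsteps // 10:].var())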

  10. Formalizing WS-BPEL and Higher Order Mobile Embedded Business Processes in the Bigraphical Programming Languages (BPL) Tool

    DEFF Research Database (Denmark)

    Bundgaard, Mikkel; Glenstrup, Arne John; Hildebrandt, Thomas

    is the starting point of an endeavor to provide a completely formalized and extensible business process engine within the Computer Supported Mobile Adaptive Business Processes (CosmoBiz) research project at the IT University of Copenhagen. Building upon the formalization of WS-BPEL we propose and formalize HomeBPEL, a higher-order WS-BPEL-like business process execution language where processes are first-class values that can be stored in variables, passed as messages, and activated as embedded sub-instances. A sub-instance is similar to a WS-BPEL scope, except that it can be dynamically frozen and stored as a process...

  11. Cartoon computation: quantum-like computing without quantum mechanics

    International Nuclear Information System (INIS)

    Aerts, Diederik; Czachor, Marek

    2007-01-01

    We present a computational framework based on geometric structures. No quantum mechanics is involved, and yet the algorithms perform tasks analogous to quantum computation. Tensor products and entangled states are not needed; they are replaced by sets of basic shapes. To test the formalism we solve in geometric terms the Deutsch-Jozsa problem, historically the first example that demonstrated the potential power of quantum computation. Each step of the algorithm has a clear geometric interpretation and allows for a cartoon representation. (fast track communication)

  12. Investigation of Schottky-Barrier carbon nanotube field-effect transistor by an efficient semi-classical numerical modeling

    International Nuclear Information System (INIS)

    Chen Changxin; Zhang Wei; Zhao Bo; Zhang Yafei

    2009-01-01

    An efficient semi-classical numerical modeling approach has been developed to simulate the coaxial Schottky-barrier carbon nanotube field-effect transistor (SB-CNTFET). In the modeling, the electrostatic potential of the CNT is obtained by self-consistently solving the analytic expression for the CNT carrier distribution together with the cylindrical Poisson equation, which significantly enhances the computational efficiency while giving results in good agreement with those obtained from the non-equilibrium Green's function (NEGF) formalism based on first principles. With this method, the effects of the CNT diameter, supply voltage, and the thickness and dielectric constant of the gate insulator on the device performance are investigated.

  13. The base of the iceberg: informal learning and its impact on formal and non-formal learning

    OpenAIRE

    Rogers, Alan

    2014-01-01

    The author looks at learning (formal, non-formal and informal) and examines the hidden world of informal (unconscious, unplanned) learning. He points out the importance of informal learning for creating tacit attitudes and values, knowledge and skills which influence (conscious, planned) learning - formal and non-formal. Moreover, he explores the implications of informal learning for educational planners and teachers in the context of lifelong learning. While mainly aimed at adult educators, ...

  14. New and More General Rational Formal Solutions to (2+1)-Dimensional Toda System

    International Nuclear Information System (INIS)

    Bai Chenglin

    2007-01-01

    With the aid of computerized symbolic computation and the Riccati equation rational expansion approach, some new and more general rational formal solutions to the (2+1)-dimensional Toda system are obtained. The method used here can also be applied to solve other nonlinear differential-difference equations.

  15. Robust fault detection of linear systems using a computationally efficient set-membership method

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Bak, Thomas

    2014-01-01

    In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measurements; the computation is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods.

  16. An efficient method for computing the absorption of solar radiation by water vapor

    Science.gov (United States)

    Chou, M.-D.; Arking, A.

    1981-01-01

    Chou and Arking (1980) have developed a fast but accurate method for computing the IR cooling rate due to water vapor. Using a similar approach, the considered investigation develops a method for computing the heating rates due to the absorption of solar radiation by water vapor in the wavelength range from 4 to 8.3 micrometers. The validity of the method is verified by comparison with line-by-line calculations. An outline is provided of an efficient method for transmittance and flux computations based upon actual line parameters. High speed is achieved by employing a one-parameter scaling approximation to convert an inhomogeneous path into an equivalent homogeneous path at suitably chosen reference conditions.

  17. Defect correction and multigrid for an efficient and accurate computation of airfoil flows

    NARCIS (Netherlands)

    Koren, B.

    1988-01-01

    Results are presented for an efficient solution method for second-order accurate discretizations of the 2D steady Euler equations. The solution method is based on iterative defect correction. Several schemes are considered for the computation of the second-order defect. In each defect correction

  18. Processing-Efficient Distributed Adaptive RLS Filtering for Computationally Constrained Platforms

    Directory of Open Access Journals (Sweden)

    Noor M. Khan

    2017-01-01

    Full Text Available In this paper, a novel processing-efficient architecture of a group of inexpensive and computationally constrained small platforms is proposed for a parallely distributed adaptive signal processing (PDASP) operation. The proposed architecture runs computationally expensive procedures, such as the complex adaptive recursive least squares (RLS) algorithm, cooperatively. The proposed PDASP architecture operates properly even if perfect time alignment among the participating platforms is not available. An RLS algorithm applied to MIMO channel estimation is deployed on the proposed architecture. The complexity and processing time of the PDASP scheme with the MIMO RLS algorithm are compared with those of the sequentially operated MIMO RLS algorithm and the linear Kalman filter. It is observed that the PDASP scheme exhibits much lower computational complexity than the sequentially operated MIMO RLS algorithm as well as the Kalman filter. Moreover, the proposed architecture reduces processing time by 95.83% and 82.29% compared to the sequentially operated Kalman filter and MIMO RLS algorithm, respectively, for low Doppler rates. Likewise, for high Doppler rates, the proposed architecture yields 94.12% and 77.28% reductions in processing time compared to the Kalman and RLS algorithms, respectively.
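
    The computationally expensive core that PDASP distributes is the standard exponentially weighted RLS recursion; the O(N²) matrix work per update shown below is what the architecture spreads across platforms. A generic single-node sketch, not the paper's MIMO formulation:

        import numpy as np

        def rls_update(w, P, x, d, lam=0.99):
            """One RLS step for weights w and inverse correlation matrix P,
            given input vector x and desired output d."""
            Px = P @ x
            k = Px / (lam + x @ Px)             # gain vector
            e = d - w @ x                       # a priori error
            w = w + k * e
            P = (P - np.outer(k, Px)) / lam
            return w, P, e

        # Identify a 4-tap channel from noisy observations.
        rng = np.random.default_rng(4)
        h = np.array([0.8, -0.4, 0.2, 0.1])
        w, P = np.zeros(4), 1e3 * np.eye(4)
        for _ in range(500):
            x = rng.standard_normal(4)
            d = h @ x + 0.01 * rng.standard_normal()
            w, P, e = rls_update(w, P, x, d)
        print(np.round(w, 3))                   # ~ h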

  19. Formal languages, automata and numeration systems, v.2

    CERN Document Server

    Rigo, Michel

    2014-01-01

    The interplay between words, computability, algebra and arithmetic has now proved its relevance and fruitfulness. Indeed, the cross-fertilization between formal logic and finite automata (such as that initiated by J.R. Büchi) or between combinatorics on words and number theory has paved the way to recent dramatic developments, for example, the transcendence results for the real numbers having a "simple" binary expansion, by B. Adamczewski and Y. Bugeaud. This book is at the heart of this interplay through a unified exposition. Objects are considered with a perspective that comes both from t

  20. Energy-Efficient FPGA-Based Parallel Quasi-Stochastic Computing

    Directory of Open Access Journals (Sweden)

    Ramu Seva

    2017-11-01

    Full Text Available The high performance of FPGA (Field Programmable Gate Array) in image processing applications is justified by its flexible reconfigurability, its inherent parallel nature and the availability of a large amount of internal memories. Lately, the Stochastic Computing (SC) paradigm has been found to be significantly advantageous in certain application domains including image processing because of its lower hardware complexity and power consumption. However, its viability is deemed to be limited due to its serial bitstream processing and excessive run-time requirement for convergence. To address these issues, a novel approach is proposed in this work where an energy-efficient implementation of SC is accomplished by introducing fast-converging Quasi-Stochastic Number Generators (QSNGs) and parallel stochastic bitstream processing, which are well suited to leverage FPGA’s reconfigurability and abundant internal memory resources. The proposed approach has been tested on the Virtex-4 FPGA, and results have been compared with the serial and parallel implementations of conventional stochastic computation using the well-known SC edge detection and multiplication circuits. Results prove that by using this approach, execution time, as well as the power consumption, are decreased by a factor of 3.5 and 4.5 for the edge detection circuit and multiplication circuit, respectively.
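
    The multiplication circuit referenced above rests on the basic SC identity: ANDing two independent unipolar bitstreams with ones-densities p1 and p2 yields a stream of density p1·p2. A software sketch of the principle only (the FPGA design and the quasi-stochastic number generators are not reproduced here):

        import numpy as np

        def bitstream(p, n, rng):
            """Unipolar stochastic bitstream encoding probability p."""
            return rng.random(n) < p

        rng = np.random.default_rng(5)
        n = 1 << 16                              # stream length
        a = bitstream(0.6, n, rng)
        b = bitstream(0.3, n, rng)

        product = a & b                          # one AND gate per bit pair
        print(product.mean())                    # ~ 0.6 * 0.3 = 0.18

    The long streams needed for an accurate product are exactly the convergence cost that the quasi-stochastic generators and parallel processing are introduced to cut.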

  1. Efficiency Analysis of the Parallel Implementation of the SIMPLE Algorithm on Multiprocessor Computers

    Science.gov (United States)

    Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.

    2017-12-01

    This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.

  2. Efficient Adjoint Computation of Hybrid Systems of Differential Algebraic Equations with Applications in Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Abhyankar, Shrirang [Argonne National Lab. (ANL), Argonne, IL (United States); Anitescu, Mihai [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil [Argonne National Lab. (ANL), Argonne, IL (United States); Zhang, Hong [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-03-31

    Sensitivity analysis is an important tool to describe power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this work, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating trajectory sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as DC exciters, by deriving and implementing the adjoint jump conditions that arise from state and time-dependent discontinuities. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach.

  3. Gaussian Radial Basis Function for Efficient Computation of Forest Indirect Illumination

    Science.gov (United States)

    Abbas, Fayçal; Babahenini, Mohamed Chaouki

    2018-06-01

    Global illumination of natural scenes such as forests is one of the most complex problems to solve in real time, because of the multiple inter-reflections between the light and the materials of the objects composing the scene. The major problem that arises is visibility computation. In fact, visibility is computed over the whole set of leaves visible from the center of a given leaf; given the enormous number of leaves present in a tree, this computation, performed for each leaf of the tree, degrades performance. We describe a new approach that approximates visibility queries, which proceeds in two steps. The first step is to generate a point cloud representing the foliage. We assume that the point cloud is composed of two classes (visible, not visible) that are not linearly separable. The second step is to classify the point cloud by applying the Gaussian radial basis function, which measures similarity in terms of distance between each leaf and a landmark leaf. This approximates the visibility queries needed to extract the leaves that will be used to calculate the amount of indirect illumination exchanged between neighboring leaves. Our approach treats the light exchanges in a forest scene efficiently; it allows fast computation and produces images of good visual quality, all while taking advantage of the immense computational power of the GPU.
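
    The similarity measure at the heart of the second step is the Gaussian radial basis function k(x, l) = exp(-||x - l||² / (2σ²)). A sketch of scoring leaves against a landmark leaf and thresholding into the two classes (the positions, σ and the threshold are hypothetical):

        import numpy as np

        def gaussian_rbf(x, landmark, sigma=1.0):
            """Similarity in (0, 1], decaying with distance to the landmark."""
            d2 = np.sum((x - landmark) ** 2, axis=-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        rng = np.random.default_rng(6)
        leaves = rng.uniform(0.0, 4.0, size=(1000, 3))  # leaf-center point cloud
        landmark = np.array([2.0, 2.0, 2.0])            # landmark leaf

        score = gaussian_rbf(leaves, landmark, sigma=0.8)
        visible = score > 0.5                           # hypothetical threshold
        print(visible.sum(), "leaves kept for the visibility query")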

  4. Formalized Epistemology, Logic, and Grammar

    Science.gov (United States)

    Bitbol, Michel

    The task of a formal epistemology is defined. It appears that a formal epistemology must be a generalization of "logic" in the sense of Wittgenstein's Tractatus. The generalization is required because, whereas logic presupposes a strict relation between activity and language, this relation may be broken in some domains of experimental enquiry (e.g., in microscopic physics). However, a formal epistemology should also retain a major feature of Wittgenstein's "logic": It must not be a discourse about scientific knowledge, but rather a way of making manifest the structures usually implicit in knowledge-gaining activity. This strategy is applied to the formalism of quantum mechanics.

  5. Astrophysical applications of the post-Tolman-Oppenheimer-Volkoff formalism

    Science.gov (United States)

    Glampedakis, Kostas; Pappas, George; Silva, Hector O.; Berti, Emanuele

    2016-08-01

    The bulk properties of spherically symmetric stars in general relativity can be obtained by integrating the Tolman-Oppenheimer-Volkoff (TOV) equations. In previous work [K. Glampedakis, G. Pappas, H. O. Silva, and E. Berti, Phys. Rev. D 92, 024056 (2015)], we developed a "post-TOV" formalism—inspired by parametrized post-Newtonian theory—which allows us to classify in a parametrized, phenomenological form all possible perturbative deviations from the structure of compact stars in general relativity that may be induced by modified gravity at second post-Newtonian order. In this paper we extend the formalism to deal with the stellar exterior, and we compute several potential astrophysical observables within the post-TOV formalism: the surface redshift zs, the apparent radius Rapp, the Eddington luminosity at infinity LE∞ and the orbital frequencies. We show that, at leading order, all of these quantities depend on just two post-TOV parameters μ1 and χ , and we discuss the possibility to measure (or set upper bounds on) these parameters.
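
    Two of the observables listed have simple closed forms at the general-relativistic (zeroth post-TOV) order to which the μ1, χ corrections apply: the surface redshift z_s = (1 - 2GM/Rc²)^(-1/2) - 1 and the apparent radius R_app = R(1 + z_s). A sketch of this GR baseline only (the post-TOV corrections themselves are not reproduced here):

        G = 6.674e-11      # m^3 kg^-1 s^-2
        c = 2.998e8        # m/s
        Msun = 1.989e30    # kg

        def surface_redshift(M, R):
            """z_s for a spherical star of mass M (kg) and radius R (m)."""
            return (1.0 - 2.0 * G * M / (R * c * c)) ** -0.5 - 1.0

        M, R = 1.4 * Msun, 12.0e3               # a typical neutron star
        zs = surface_redshift(M, R)
        print("z_s   =", round(zs, 3))
        print("R_app =", round(R * (1.0 + zs) / 1e3, 1), "km")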

  6. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  7. Null canonical formalism 1, Maxwell field. [Poisson brackets, boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Wodkiewicz, K [Warsaw Univ. (Poland). Inst. Fizyki Teoretycznej

    1975-01-01

    The purpose of this paper is to formulate the canonical formalism on null hypersurfaces for the Maxwell electrodynamics. The set of the Poisson brackets relations for null variables of the Maxwell field is obtained. The asymptotic properties of the theory are investigated. The Poisson bracket relations for the news-functions of the Maxwell field are computed. The Hamiltonian form of the asymptotic Maxwell equations in terms of these news-functions is obtained.

  8. Efficiency improvement opportunities for personal computer monitors. Implications for market transformation programs

    Energy Technology Data Exchange (ETDEWEB)

    Park, Won Young; Phadke, Amol; Shah, Nihar [Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2013-08-15

    Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that PC monitor efficiency will likely improve by over 40 % by 2015 with saving potential of 4.5 TWh per year in 2015, compared to today's technology. We discuss various energy-efficiency improvement options and evaluate the cost-effectiveness of three of them, at least one of which improves efficiency by at least 20 % cost effectively beyond the ongoing market trends. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus-powered liquid crystal display monitors and find that the current technology available and deployed in them has the potential to deeply and cost effectively reduce energy consumption by as much as 50 %. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to further capture global energy saving potential from PC monitors which we estimate to be 9.2 TWh per year in 2015.

  9. Indigenous Knowledge and Education from the Quechua Community to School: Beyond the Formal/Non-Formal Dichotomy

    Science.gov (United States)

    Sumida Huaman, Elizabeth; Valdiviezo, Laura Alicia

    2014-01-01

    In this article, we propose to approach Indigenous education beyond the formal/non-formal dichotomy. We argue that there is a critical need to conscientiously include Indigenous knowledge in education processes from the school to the community; particularly, when formal systems exclude Indigenous cultures and languages. Based on ethnographic…

  10. New procedure for departure formalities

    CERN Multimedia

    HR & GS Departments

    2011-01-01

    As part of the process of simplifying procedures and rationalising administrative processes, the HR and GS Departments have introduced new personalised departure formalities on EDH. These new formalities have applied to students leaving CERN since last year and from 17 October 2011 this procedure will be extended to the following categories of CERN personnel: Staff members, Fellows and Associates. It is planned to extend this electronic procedure to the users in due course. What purpose do departure formalities serve? The departure formalities are designed to ensure that members of the personnel contact all the relevant services in order to return any necessary items (equipment, cards, keys, dosimeter, electronic equipment, books, etc.) and are aware of all the benefits to which they are entitled on termination of their contract. The new departure formalities on EDH have the advantage of tailoring the list of services that each member of the personnel must visit to suit his individual contractual and p...

  11. Efficiency Improvement Opportunities for Personal Computer Monitors. Implications for Market Transformation Programs

    Energy Technology Data Exchange (ETDEWEB)

    Park, Won Young [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Phadke, Amol [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shah, Nihar [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-06-29

    Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today’s technology. We evaluate the cost effectiveness of a key technology which further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors and find that the current technology available and deployed in USB powered monitors has the potential to deeply reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture global energy saving potential from PC monitors which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.

  12. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Directory of Open Access Journals (Sweden)

    Hui-Ping Chen

    2016-11-01

    Full Text Available The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given the fact that accessible links in a STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves the STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.
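
    The pruning criterion behind the two searches can be stated compactly: with forward travel times t_o(v) from the origin and backward times t_d(v) to the destination, a node belongs to the network-time prism iff t_o(v) + t_d(v) <= T, the time budget; NTP-A* avoids expanding nodes that cannot meet this bound. A plain-Dijkstra sketch of the accessibility test on a toy network (not the paper's A*/branch-and-bound implementation):

        import heapq

        def dijkstra(adj, src):
            """One-to-all shortest travel times from src."""
            dist, pq = {src: 0.0}, [(0.0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj.get(u, ()):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(pq, (nd, v))
            return dist

        # Toy undirected road network: (node, node, travel time in minutes).
        edges = [("o", "a", 4), ("a", "b", 3), ("b", "d", 5), ("o", "c", 10),
                 ("c", "d", 2), ("a", "d", 9), ("b", "e", 6)]
        adj = {}
        for u, v, w in edges:
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))

        t_o = dijkstra(adj, "o")        # forward search from the origin
        t_d = dijkstra(adj, "d")        # backward search from the destination
        T = 14.0                        # time budget
        prism = sorted(v for v in adj if t_o[v] + t_d[v] <= T)
        print(prism)                    # node "e" is pruned from the prism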

  13. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    Energy Technology Data Exchange (ETDEWEB)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern

  14. Semi-classical scalar propagators in curved backgrounds: formalism and ambiguities

    Energy Technology Data Exchange (ETDEWEB)

    Grain, J. [Laboratory for Subatomic Physics and Cosmology, Grenoble Universites, CNRS, IN2P3, 53, avenue de Martyrs, 38026 Grenoble cedex (France)]|[AstroParticle and Cosmology, Universite Paris 7, CNRS, IN2P3, 10, rue Alice Domon et Leonie Duquet, 75205 Paris cedex 13 (France); Barrau, A. [Laboratory for Subatomic Physics and Cosmology, Grenoble Universites, CNRS, IN2P3, 53, avenue de Martyrs, 38026 Grenoble cedex (France)

    2007-05-15

    The phenomenology of quantum systems in curved space-times is among the most fascinating fields of physics, allowing - often at the Gedanken experiment level - constraints on tentative theories of quantum gravity. Determining the dynamics of fields in curved backgrounds remains however a complicated task because of the highly intricate partial differential equations involved, especially when the space metric exhibits no symmetry. In this article, we provide - in a pedagogical way - a general formalism to determine this dynamics at the semi-classical order. To this purpose, a generic expression for the semi-classical propagator is computed and the equation of motion for the probability four-current is derived. Those results underline a direct analogy between the computation of the propagator in general relativistic quantum mechanics and the computation of the propagator for stationary systems in non-relativistic quantum mechanics. (authors)

  15. Semi-classical scalar propagators in curved backgrounds: formalism and ambiguities

    International Nuclear Information System (INIS)

    Grain, J.; Barrau, A.

    2007-05-01

    The phenomenology of quantum systems in curved space-times is among the most fascinating fields of physics, allowing - often at the Gedanken experiment level - constraints on tentative theories of quantum gravity. Determining the dynamics of fields in curved backgrounds remains, however, a complicated task because of the highly intricate partial differential equations involved, especially when the space metric exhibits no symmetry. In this article, we provide - in a pedagogical way - a general formalism to determine this dynamics at the semi-classical order. For this purpose, a generic expression for the semi-classical propagator is computed and the equation of motion for the probability four-current is derived. These results underline a direct analogy between the computation of the propagator in general relativistic quantum mechanics and the computation of the propagator for stationary systems in non-relativistic quantum mechanics. (authors)
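
    For context, the "generic expression for the semi-classical propagator" computed in such analyses is, in the flat-space limit, of the standard van Vleck-Pauli-Morette form; the formula below is textbook material, not an expression reproduced from the article:

    ```latex
    K_{\mathrm{sc}}(x, x'; t) \;=\;
    \sum_{\text{classical paths}}
    \left( \frac{1}{2\pi i \hbar} \right)^{d/2}
    \left| \det\!\left( - \frac{\partial^2 S}{\partial x \, \partial x'} \right) \right|^{1/2}
    \exp\!\left( \frac{i}{\hbar} S(x, x'; t) - \frac{i \mu \pi}{2} \right)
    ```

    where S is the classical action of each path and μ its Morse index; a curved background modifies both S and the prefactor through the metric.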

  16. Analyzing thresholds and efficiency with hierarchical Bayesian logistic regression.

    Science.gov (United States)

    Houpt, Joseph W; Bittner, Jennifer L

    2018-05-10

    Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons. In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual and group level inferences.
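
    As a rough illustration of the modelling idea (not the authors' actual model), the following sketch fits observer-level psychometric functions with group-level priors using the PyMC library; the stimulus intensities and binary responses are simulated:

    ```python
    import numpy as np
    import pymc as pm

    # Simulated data: 3 observers, stimulus intensity, binary correct/incorrect.
    rng = np.random.default_rng(0)
    subj = np.repeat(np.arange(3), 60)
    intensity = np.tile(np.linspace(-2.0, 2.0, 60), 3)
    p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * intensity)))
    y = rng.binomial(1, p_true)

    with pm.Model():
        # Group-level (hyper)priors shared across observers.
        mu_a = pm.Normal("mu_a", 0.0, 2.0)
        mu_b = pm.Normal("mu_b", 0.0, 2.0)
        sd_a = pm.HalfNormal("sd_a", 1.0)
        sd_b = pm.HalfNormal("sd_b", 1.0)

        # Observer-level intercepts and slopes (the psychometric functions).
        a = pm.Normal("a", mu_a, sd_a, shape=3)
        b = pm.Normal("b", mu_b, sd_b, shape=3)

        # Logistic link from stimulus intensity to response accuracy.
        p = pm.math.invlogit(a[subj] + b[subj] * intensity)
        pm.Bernoulli("obs", p=p, observed=y)

        idata = pm.sample(1000, tune=1000, chains=2)
    ```

    Efficiency in the above sense would then be obtained by relating the posterior human-performance estimates to the ideal observer's performance on the same stimuli.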

  17. Concepts of formal concept analysis

    Science.gov (United States)

    Žáček, Martin; Homola, Dan; Miarka, Rostislav

    2017-07-01

    The aim of this article is to apply Formal Concept Analysis to the concept of the world. Formal Concept Analysis (FCA), as a methodology of data analysis, information management and knowledge representation, has the potential to be applied to a variety of linguistic problems. FCA is a mathematical theory of concepts and concept hierarchies that reflects an understanding of "concept". Formal Concept Analysis explicitly formalizes the extension and intension of a concept and their mutual relationships. A distinguishing feature of FCA is an inherent integration of three components of conceptual processing of data and knowledge, namely, the discovery of and reasoning with concepts in data, the discovery of and reasoning with dependencies in data, and the visualization of data, concepts, and dependencies with folding/unfolding capabilities.
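
    The derivation operators and the extent/intent closure that FCA is built on can be shown on a toy context; the objects and attributes below are invented:

    ```python
    from itertools import combinations

    # Toy formal context: objects -> attributes they possess.
    context = {
        "duck":  {"flies", "swims"},
        "eagle": {"flies", "hunts"},
        "shark": {"swims", "hunts"},
    }
    attributes = set().union(*context.values())

    def extent(attrs):
        """Objects possessing every attribute in attrs (derivation operator)."""
        return {o for o, has in context.items() if attrs <= has}

    def intent(objs):
        """Attributes shared by every object in objs (derivation operator)."""
        return {a for a in attributes if all(a in context[o] for o in objs)}

    # A formal concept is a pair (extent, intent) closed under both derivations;
    # brute force: close every attribute subset (fine for tiny contexts).
    concepts = set()
    for r in range(len(attributes) + 1):
        for combo in combinations(sorted(attributes), r):
            A = intent(extent(set(combo)))              # attribute closure
            concepts.add((frozenset(extent(A)), frozenset(A)))

    for ext, itt in sorted(concepts, key=lambda c: len(c[0])):
        print(sorted(ext), "|", sorted(itt))
    ```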

  18. Computationally efficient SVM multi-class image recognition with confidence measures

    International Nuclear Information System (INIS)

    Makili, Lazaro; Vega, Jesus; Dormido-Canto, Sebastian; Pastor, Ignacio; Murari, Andrea

    2011-01-01

    Typically, machine learning methods produce non-qualified estimates, i.e. the accuracy and reliability of the predictions are not provided. Transductive predictors are very recent classifiers able to provide, simultaneously with the prediction, a pair of values (confidence and credibility) that reflect the quality of the prediction. Usually, a drawback of transductive techniques for huge datasets and large dimensionality is the high computational time. To overcome this issue, a more efficient classifier has been used in a multi-class image classification problem in the TJ-II stellarator database. It is based on the creation of a hash function to generate several 'one versus the rest' classifiers for every class. Using Support Vector Machines as the underlying classifier, a comparison between the pure transductive approach and the new method has been performed. In both cases, the success rates are high, and the computation time of the new method is as low as 0.4 times that of the old one.
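
    The confidence/credibility pair described here comes from the conformal (transductive) prediction framework. The sketch below shows the generic inductive-conformal recipe on top of a scikit-learn SVM, which is the usual way such values are computed; it is not the TJ-II hash-based classifier itself:

    ```python
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = SVC(decision_function_shape="ovr").fit(X_tr, y_tr)

    def nonconformity(scores, label):
        # Higher when the decision score for `label` is low: a simple choice.
        return -scores[label]

    cal_scores = clf.decision_function(X_cal)
    cal_nc = np.array([nonconformity(s, c) for s, c in zip(cal_scores, y_cal)])

    def predict_with_confidence(x):
        s = clf.decision_function(x.reshape(1, -1))[0]
        # p-value per class: fraction of calibration examples at least as strange.
        p = {c: (np.sum(cal_nc >= nonconformity(s, c)) + 1) / (len(cal_nc) + 1)
             for c in np.unique(y_cal)}
        best = max(p, key=p.get)
        credibility = p[best]                      # p-value of the best label
        confidence = 1 - sorted(p.values())[-2]    # 1 minus second largest p-value
        return best, confidence, credibility

    print(predict_with_confidence(X_cal[0]))
    ```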

  19. Casulo como habitáculo e componente formal construtivo [The cocoon as dwelling and formal constructive component]

    Directory of Open Access Journals (Sweden)

    Sasquia Hizuru Obata

    2010-01-01

    Full Text Available The term cocoon derives its formal definition from the biological activity of sheltering: it envelops an animal or a group of animals during a cycle or period of life, providing for this inactive phase or reproductive cycle a characteristic skin or a flexible involucre. Its important characteristics are the isolation of activities, a restricted space for the development of a certain action, lightness, flexibility and its natural condition, which represent a metaphorical goal in conventional projects. As constructive goals, research into the natural forms of cocoons can be translated into results and combinations of modular geometric forms, or even the bubbles and capsules found in architectural projects from the historical to the most recent. The identification of the cocoon in buildings leads to the proposition that the module is a constructive component and a formal point of departure, in other words, a project strategy that assists energy reduction, besides micro-climate creation, flexibility and constructive lightness.

  20. 12 CFR 622.21 - Computing time.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Computing time. 622.21 Section 622.21 Banks and... Formal Hearings § 622.21 Computing time. (a) General rule. In computing any period of time prescribed or... run is not to be included. The last day so computed shall be included, unless it is a Saturday, Sunday...

  1. Critical formalism or digital biomorphology. The contemporary architecture formal dilema

    Directory of Open Access Journals (Sweden)

    Beatriz Villanueva Cajide

    2018-05-01

    Full Text Available With the dawn of digital media, architecture's formal possibilities reached a level unknown before. The Guggenheim Museum branch in Bilbao appeared in 1993 as the materialisation of the possibilities of using digital tools in architectural design, starting the development of a digitally based architecture which has currently reached a level of exhaustion that is evident in the repetition of biomorphological shapes emerging from the digital determinism to which some contemporary architectural practices have converged. While the digitalisation of the architectural process is irreversible and desirable, it is necessary to rethink the terms of this collaboration beyond the possibilities of the digital tools themselves. This article proposes to analyse seven texts written at the very moment when digitalisation became a real possibility, between Gehry's conception of the Guggenheim Museum in 1992 and the Congress on Morphogenesis held at the Architectural Association in 2004, in order to explore the possibility of reversing the process that has led to the formal exhaustion of digital architecture, starting from the acceptance of incorporating strategies coming from a contemporary critical formalism.

  2. A formal model for classifying trusted Semantic Web Services

    OpenAIRE

    Galizia, Stefania; Gugliotta, Alessio; Pedrinaci, Carlos

    2008-01-01

    Semantic Web Services (SWS) aim to alleviate Web service limitations by combining Web service technologies with the potential of the Semantic Web. Several open issues have yet to be tackled in order to enable safe and efficient Web service selection. One of them is represented by trust. In this paper, we introduce a trust definition and formalize a model for managing trust in SWS. The model approaches the selection of trusted Web services as a classification problem, and it is realized by an...

  3. Geometry and Formal Linguistics.

    Science.gov (United States)

    Huff, George A.

    This paper presents a method of encoding geometric line-drawings in a way which allows sets of such drawings to be interpreted as formal languages. A characterization of certain geometric predicates in terms of their properties as languages is obtained, and techniques usually associated with generative grammars and formal automata are then applied…

  4. Learning with Ubiquitous Computing

    Science.gov (United States)

    Rosenheck, Louisa

    2008-01-01

    If ubiquitous computing becomes a reality and is widely adopted, it will inevitably have an impact on education. This article reviews the background of ubiquitous computing and current research projects done involving educational "ubicomp." Finally it explores how ubicomp may and may not change education in both formal and informal settings and…

  5. SmartVeh: Secure and Efficient Message Access Control and Authentication for Vehicular Cloud Computing.

    Science.gov (United States)

    Huang, Qinlong; Yang, Yixian; Shi, Yuxiang

    2018-02-24

    With the growing number of vehicles and popularity of various services in vehicular cloud computing (VCC), message exchanging among vehicles under traffic conditions and in emergency situations is one of the most pressing demands, and has attracted significant attention. However, it is an important challenge to authenticate the legitimate sources of broadcast messages and achieve fine-grained message access control. In this work, we propose SmartVeh, a secure and efficient message access control and authentication scheme in VCC. A hierarchical, attribute-based encryption technique is utilized to achieve fine-grained and flexible message sharing, which ensures that vehicles whose persistent or dynamic attributes satisfy the access policies can access the broadcast message with equipped on-board units (OBUs). Message authentication is enforced by integrating an attribute-based signature, which achieves message authentication and maintains the anonymity of the vehicles. In order to reduce the computations of the OBUs in the vehicles, we outsource the heavy computations of encryption, decryption and signing to a cloud server and road-side units. The theoretical analysis and simulation results reveal that our secure and efficient scheme is suitable for VCC.
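
    The pairing-based cryptography of attribute-based encryption is beyond a short example, but the access-policy side can be illustrated. Below is a toy evaluator for AND/OR policies over vehicle attributes; every attribute name and policy is invented, and real CP-ABE enforces such policies cryptographically rather than by a boolean check:

    ```python
    # Toy access policy: nested (op, child, child, ...) tuples over attributes.
    policy = ("AND",
              ("OR", "role:ambulance", "role:police"),
              "region:downtown")

    def satisfies(attrs, node):
        """Return True if the attribute set satisfies the policy tree."""
        if isinstance(node, str):
            return node in attrs
        op, *children = node
        results = (satisfies(attrs, c) for c in children)
        return all(results) if op == "AND" else any(results)

    vehicle_attrs = {"role:ambulance", "region:downtown", "speed:normal"}
    print(satisfies(vehicle_attrs, policy))   # True: this OBU may decrypt
    ```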

  6. Efficient quantum computation in a network with probabilistic gates and logical encoding

    DEFF Research Database (Denmark)

    Borregaard, J.; Sørensen, A. S.; Cirac, J. I.

    2017-01-01

    An approach to efficient quantum computation with probabilistic gates is proposed and analyzed in both a local and nonlocal setting. It combines heralded gates previously studied for atom or atomlike qubits with logical encoding from linear optical quantum computation in order to perform high-fidelity quantum gates across a quantum network. The error-detecting properties of the heralded operations ensure high fidelity while the encoding makes it possible to correct for failed attempts such that deterministic and high-quality gates can be achieved. Importantly, this is robust to photon loss, which is typically the main obstacle to photonic-based quantum information processing. Overall this approach opens a path toward quantum networks with atomic nodes and photonic links.

  7. Formal Testing of Correspondence Carrying Software

    NARCIS (Netherlands)

    Bujorianu, M.C.; Bujorianu, L.M.; Maharaj, S.

    2008-01-01

    Nowadays formal software development is characterised by the use of a multitude of formal specification languages. Test case generation from formal specifications depends in general on a specific language and, moreover, there are competing methods for each language. There is a need for a generic approach to

  8. Low-cost, high-performance and efficiency computational photometer design

    Science.gov (United States)

    Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly

    2014-05-01

    Researchers at the University of Alaska Anchorage and University of Colorado Boulder have built a low-cost, high-performance and high-efficiency drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible spectrum cameras with near to long wavelength infrared detectors and high resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the arctic including volcanic plumes, ice formation, and arctic marine life.

  9. A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm

    Directory of Open Access Journals (Sweden)

    Mariana-Eugenia Ilas

    2018-03-01

    Full Text Available In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG) algorithm. The new method replaces the arctangent with the slope computation, and the classical magnitude allocation based on interpolation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs), by considerably reducing the area (thus increasing the level of parallelism) while maintaining very close classification accuracy compared to the original algorithm. Thus, the new method is attractive for many applications, including car detection and classification.
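
    One way to realize "slope instead of arctangent" is to decide the orientation bin by sign tests against precomputed edge directions, avoiding any per-pixel arctangent. This NumPy sketch shows the idea in software terms; it is not the authors' FPGA design:

    ```python
    import numpy as np

    # Bin edges for 9 unsigned HOG bins (every 20 degrees); sin/cos precomputed once.
    deg = np.arange(1, 9) * 20.0
    se, ce = np.sin(np.deg2rad(deg)), np.cos(np.deg2rad(deg))

    def orientation_bins(gx, gy):
        """Bin index 0..8 per pixel using only multiplies and sign tests."""
        gx = np.asarray(gx, float)
        gy = np.asarray(gy, float)
        # Fold directions into [0, 180): flip gradients pointing downward.
        flip = (gy < 0) | ((gy == 0) & (gx < 0))
        gx = np.where(flip, -gx, gx)
        gy = np.where(flip, -gy, gy)
        # theta >= edge  <=>  gy*cos(edge) - gx*sin(edge) >= 0 (cross-product test),
        # so the bin index is the number of edges the angle has passed.
        passed = gy[..., None] * ce - gx[..., None] * se >= 0
        return passed.sum(axis=-1)

    # Sanity check against the arctangent-based reference binning.
    gx, gy = np.random.randn(2, 16, 16)
    theta = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    assert np.array_equal(orientation_bins(gx, gy), (theta // 20).astype(int))
    ```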

  10. On the group of substitutions of formal power series with integer coefficients

    International Nuclear Information System (INIS)

    Babenko, I K; Bogatyi, S A

    2008-01-01

    We study certain properties of the group J(Z) of substitutions of formal power series in one variable with integer coefficients. We show that J(Z), regarded as a topological group, has four generators and cannot be generated by fewer elements. In particular, we show that the one-dimensional continuous homology of J(Z) is isomorphic to Z ⊕ Z ⊕ Z_2 ⊕ Z_2. We study various topological and geometric properties of the coset space J(R)/J(Z). We compute the real cohomology H̃*(J(Z); R) with uniformly locally constant supports and show that it is naturally isomorphic to the cohomology of the nilpotent part of the Lie algebra of formal vector fields on the line

  11. Use of the Web by a Distributed Research group Performing Distributed Computing

    Science.gov (United States)

    Burke, David A.; Peterkin, Robert E.

    2001-06-01

    A distributed research group that uses distributed computers faces a spectrum of challenges--some of which can be met by using various electronic means of communication. The particular challenge of our group involves three physically separated research entities. We have had to link two collaborating groups at AFRL and NRL together for software development, and the same AFRL group with a LANL group for software applications. We are developing and using a pair of general-purpose, portable, parallel, unsteady, plasma physics simulation codes. The first collaboration is centered around a formal weekly video teleconference on relatively inexpensive equipment that we have set up in convenient locations in our respective laboratories. The formal virtual meetings are augmented with informal virtual meetings as the need arises. Both collaborations share research data in a variety of forms on a secure URL that is set up behind the firewall at the AFRL. Of course, a computer-generated animation is a particularly efficient way of displaying results from time-dependent numerical simulations, so we generally like to post such animations (along with proper documentation) on our web page. In this presentation, we will discuss some of our accomplishments and disappointments.

  12. Preparing Future Secondary Computer Science Educators

    Science.gov (United States)

    Ajwa, Iyad

    2007-01-01

    Although nearly every college offers a major in computer science, many computer science teachers at the secondary level have received little formal training. This paper presents details of a project that could make a significant contribution to national efforts to improve computer science education by combining teacher education and professional…

  13. Formal Women-only Networks

    DEFF Research Database (Denmark)

    Villesèche, Florence; Josserand, Emmanuel

    2017-01-01

    Purpose: The purpose of this paper is to review the emerging literature on formal women-only business networks and outline propositions to develop this under-theorised area of knowledge and stimulate future research. Design/methodology/approach: The authors review the existing literature on formal women-only networks. Research limitations/implications: The authors focus on the distinction between external and internal formal women-only networks while also acknowledging the broader diversity that can characterise such networks. Their review provides the reader with an insight into this emerging field. At the member level, the authors suggest that such networks can be of value for organisations and the wider social group of women in management and leadership positions.

  14. Efficient Computation of Transition State Resonances and Reaction Rates from a Quantum Normal Form

    NARCIS (Netherlands)

    Schubert, Roman; Waalkens, Holger; Wiggins, Stephen

    2006-01-01

    A quantum version of a recent formulation of transition state theory in phase space is presented. The theory developed provides an algorithm to compute quantum reaction rates and the associated Gamov-Siegert resonances with very high accuracy. The algorithm is especially efficient for

  15. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    Science.gov (United States)

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat.
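
    The segmented piecewise-linear idea can be approximated in a few lines with np.interp. In the sketch below the anchor points are taken at regular intervals purely for illustration, whereas the Letter detects true isoelectric points per heartbeat:

    ```python
    import numpy as np

    fs = 500                                     # assumed sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    ecg = 0.1 * np.sin(2 * np.pi * 7 * t)        # stand-in for cardiac activity
    drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)    # 0.1 Hz baseline wander
    signal = ecg + drift

    # Anchor samples standing in for detected isoelectric points
    # (here simply every third of a second; a real detector is needed).
    anchors = np.arange(0, len(t), fs // 3)

    # Piecewise-linear baseline estimate through the anchors, then subtracted.
    baseline = np.interp(t, t[anchors], signal[anchors])
    corrected = signal - baseline

    print("drift RMS before:", np.sqrt(np.mean(drift ** 2)))
    print("drift RMS after: ", np.sqrt(np.mean((corrected - ecg) ** 2)))
    ```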

  16. Formal analysis of design process dynamics

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2010-01-01

    This paper presents a formal analysis of design process dynamics. Such a formal analysis is a prerequisite to come to a formal theory of design and for the development of automated support for the dynamics of design processes. The analysis was geared toward the identification of dynamic design

  17. Formal Analysis of Design Process Dynamics

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2010-01-01

    This paper presents a formal analysis of design process dynamics. Such a formal analysis is a prerequisite to come to a formal theory of design and for the development of automated support for the dynamics of design processes. The analysis was geared toward the identification of dynamic design

  18. Formal and Informal Learning in a Computer Clubhouse Environment

    Science.gov (United States)

    McDougall, Anne; Lowe, Jenny; Hopkins, Josie

    2004-01-01

    This paper outlines the establishment and running of an after-school Computer Clubhouse, describing aspects of the leadership, mentoring and learning activities undertaken there. Research data has been collected from examination of documents associated with the Clubhouse, interviews with its founders, Director, session leaders and mentors, and…

  19. A efetividade do treinamento auditivo formal em idosos usuários de próteses auditivas no período de aclimatização [Formal auditory training efficiency in elderly during the acclimatization period]

    Directory of Open Access Journals (Sweden)

    Elisiane de Crestani Miranda

    2007-12-01

    Full Text Available PURPOSE: To investigate the effectiveness of a formal auditory training program in elderly hearing aid wearers during the acclimatization period. METHODS: The sample comprised 18 elderly subjects (mean age 71.38 years) of both genders, fitted one week earlier with binaural in-the-ear hearing aids. Participants were randomized into two groups: an Experimental Group (submitted to auditory training) and a Control Group (not submitted to auditory training). The Experimental Group attended seven auditory training sessions in an acoustic booth, one 50-minute session per week. Assessment procedures included speech recognition tests and a self-assessment hearing handicap questionnaire, applied on two occasions: before (1st assessment) and after (2nd assessment) the auditory training in the Experimental Group, and at the initial and final evaluations of the study in the Control Group. RESULTS: In the Experimental Group, the Speech Recognition Index and Speech with White Noise scores were significantly better after the auditory training (2nd assessment). The signal-to-noise ratios in the sentence-recognition-in-noise test showed a trend (p-value close to 0.05) toward improvement in the post-training assessment. The results obtained by the Experimental Group in the 2nd assessment were not significantly better than those of the Control Group in all tests. CONCLUSION: An aural rehabilitation program including formal auditory training benefits the elderly during the hearing aid fitting period and modifies the auditory behavior of these individuals.

  20. Efficient Use of Preisach Hysteresis Model in Computer Aided Design

    Directory of Open Access Journals (Sweden)

    IONITA, V.

    2013-05-01

    Full Text Available The paper presents a detailed practical analysis regarding the use of the classical Preisach hysteresis model, covering all the steps, from measuring the data necessary for model identification to the implementation in a software code for Computer Aided Design (CAD) in Electrical Engineering. An efficient numerical method is proposed and the accuracy of the hysteresis modeling is tested on magnetic recording materials. The procedure includes the correction of the experimental data, which are used for the hysteresis model identification, taking into account the demagnetizing effect for a sample measured in an open-circuit device (a vibrating sample magnetometer).
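
    For readers unfamiliar with the model itself, here is a minimal scalar Preisach sketch with a uniform hysteron grid and an invented weight density; the paper's actual contribution, the identification and demagnetization-correction procedure, is not reproduced:

    ```python
    import numpy as np

    # Discretised Preisach plane: relay hysterons with thresholds beta <= alpha.
    n = 40
    levels = np.linspace(-1.0, 1.0, n)
    alpha, beta = np.meshgrid(levels, levels, indexing="ij")
    valid = alpha >= beta
    weight = np.where(valid, 1.0, 0.0)          # invented uniform density
    state = -np.ones((n, n))                    # all relays start 'down'

    def magnetization(h):
        """Update relay states for applied field h, return the weighted sum."""
        global state
        state = np.where(valid & (h >= alpha), 1.0, state)   # switch up
        state = np.where(valid & (h <= beta), -1.0, state)   # switch down
        return (weight * state)[valid].sum() / weight[valid].sum()

    # Drive the model around a loop: the output traces a hysteresis curve.
    fields = np.concatenate([np.linspace(-1, 1, 50), np.linspace(1, -1, 50)])
    curve = [magnetization(h) for h in fields]
    print(min(curve), max(curve))
    ```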

  1. Formalizing the concept of sound.

    Energy Technology Data Exchange (ETDEWEB)

    Kaper, H. G.; Tipei, S.

    1999-08-03

    The notion of formalized music implies that a musical composition can be described in mathematical terms. In this article we explore some formal aspects of music and propose a framework for an abstract approach.

  2. Formal Analysis of Domain Models

    National Research Council Canada - National Science Library

    Bharadwaj, Ramesh

    2002-01-01

    Recently, there has been a great deal of interest in the application of formal methods, in particular, precise formal notations and automatic analysis tools for the creation and analysis of requirements specifications (i.e...

  3. Preferences for and Barriers to Formal and Informal Athletic Training Continuing Education Activities

    Science.gov (United States)

    Armstrong, Kirk J.; Weidner, Thomas G.

    2011-01-01

    Context: Our previous research determined the frequency of participation and perceived effect of formal and informal continuing education (CE) activities. However, actual preferences for and barriers to CE must be characterized. Objective: To determine the types of formal and informal CE activities preferred by athletic trainers (ATs) and barriers to their participation in these activities. Design: Cross-sectional study. Setting: Athletic training practice settings. Patients or Other Participants: Of a geographically stratified random sample of 1000 ATs, 427 ATs (42.7%) completed the survey. Main Outcome Measure(s): As part of a larger study, the Survey of Formal and Informal Athletic Training Continuing Education Activities (FIATCEA) was developed and administered electronically. The FIATCEA consists of demographic characteristics and Likert scale items (1 = strongly disagree, 5 = strongly agree) about preferred CE activities and barriers to these activities. Internal consistency of survey items, as determined by Cronbach α, was 0.638 for preferred CE activities and 0.860 for barriers to these activities. Descriptive statistics were computed for all items. Differences between respondent demographic characteristics and preferred CE activities and barriers to these activities were determined via analysis of variance and dependent t tests. The α level was set at .05. Results: Hands-on clinical workshops and professional networking were the preferred formal and informal CE activities, respectively. The most frequently reported barriers to formal CE were the cost of attending and travel distance, whereas the most frequently reported barriers to informal CE were personal and job-specific factors. Differences were noted between both the cost of CE and travel distance to CE and all other barriers to CE participation (F(1,411) = 233.54, P < .001) … formal CE activities. The same barriers (eg, cost, travel distance) to formal CE appeared to be universal to all ATs. Informal CE was

  4. From Three-Photon Greenberger-Horne-Zeilinger States to Ballistic Universal Quantum Computation.

    Science.gov (United States)

    Gimeno-Segovia, Mercedes; Shadbolt, Pete; Browne, Dan E; Rudolph, Terry

    2015-07-10

    Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. A series of increasingly efficient proposals have shown linear-optical quantum computing to be formally scalable. However, existing schemes typically require extensive adaptive switching, which is experimentally challenging and noisy, thousands of photon sources per renormalized qubit, and/or large quantum memories for repeat-until-success strategies. Our work overcomes all these problems. We present a scheme to construct a cluster state universal for quantum computation, which uses no adaptive switching, no large memories, and which is at least an order of magnitude more resource efficient than previous passive schemes. Unlike previous proposals, it is constructed entirely from loss-detecting gates and offers a robustness to photon loss. Even without the use of an active loss-tolerant encoding, our scheme naturally tolerates a total loss rate ∼1.6% in the photons detected in the gates. This scheme uses only three-photon Greenberger-Horne-Zeilinger states as a resource, together with a passive linear-optical network. We fully describe and model the iterative process of cluster generation, including photon loss and gate failure. This demonstrates that building a linear-optical quantum computer needs to be less challenging than previously thought.

  5. Dissemination actions and the popularization of the Exact Sciences by virtual environments and non-formal spaces of education

    Directory of Open Access Journals (Sweden)

    Carlos Coimbra-Araujo

    2017-08-01

    Full Text Available For several reasons, the Exact Sciences are among the areas of scientific knowledge that most demand actions in non-formal spaces of education. One of the main reasons lies in the fact that Mathematics, Physics, Chemistry and Astronomy are traditionally addressed, within the school environment and the formal curriculum, in a way that is unrelated to the student's reality. These subjects are often seen as a set of inflexible and incomprehensible principles. In this respect, the present work reviews the main problems surrounding the teaching of the mentioned scientific areas, highlighting non-formal tools for the teaching of Mathematics, Physics, Chemistry and Astronomy and, in particular, the modern virtual teaching environments modeled by Computer Science. Other historical difficulties that the formal teaching of the Exact Sciences has faced in Brazil are also presented, as well as some of the main non-formal resources that seek to complement the curriculum usually presented in the classroom.

  6. Parallel peak pruning for scalable SMP contour tree computation

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Hamish A. [Univ. of Leeds (United Kingdom); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Sewell, Christopher M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahrens, James P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-09

    As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high performance computing systems necessitate analysis algorithms that make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared-memory SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speedup in OpenMP and up to 50x speedup in NVIDIA Thrust.
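
    The contour tree is classically assembled from join and split trees. The sketch below is the textbook serial join-tree construction with union-find, i.e. the serial baseline such parallel algorithms improve on, not the paper's parallel peak-pruning method:

    ```python
    # Serial join-tree sketch: sweep vertices from high to low scalar value,
    # merging superlevel-set components with union-find; an arc is emitted
    # each time a vertex attaches to (or joins) existing components.
    def find(parent, x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def join_tree(values, edges):
        n = len(values)
        neighbors = {v: [] for v in range(n)}
        for a, b in edges:
            neighbors[a].append(b)
            neighbors[b].append(a)
        order = sorted(range(n), key=lambda v: -values[v])
        parent = list(range(n))
        last = {}        # component root -> most recently added vertex
        seen = set()
        arcs = []
        for v in order:
            seen.add(v)
            last[v] = v
            for u in neighbors[v]:
                if u in seen:
                    ru, rv = find(parent, u), find(parent, v)
                    if ru != rv:
                        arcs.append((last[ru], v))   # join-tree arc ends at v
                        parent[ru] = rv
                        last[rv] = v
        return arcs

    # Path graph with peaks of value 3 and 4 joining at the saddle of value 1.
    print(join_tree([0, 3, 1, 4, 0], [(0, 1), (1, 2), (2, 3), (3, 4)]))
    ```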

  7. Integrating Research and Education at the National Center for Atmospheric Research at the Interface of Formal and Informal Education

    Science.gov (United States)

    Johnson, R.; Foster, S.

    2005-12-01

    The National Center for Atmospheric Research (NCAR) in Boulder, Colorado, is a leading institution in scientific research, education and service associated with exploring and understanding our atmosphere and its interactions with the Sun, the oceans, the biosphere, and human society. NCAR draws thousands of public and scientific visitors from around the world to its Mesa Laboratory facility annually for educational as well as research purposes. Public visitors include adult visitors, clubs, and families on an informal visit to NCAR and its exhibits, as well as classroom and summer camp groups. Additionally, NCAR provides extensive computational and visualization services, which can be used not only for scientific, but also public informational purposes. As such, NCAR's audience provides an opportunity to address both formal and informal education through the programs that we offer. The University Corporation for Atmospheric Research (UCAR) Office of Education and Outreach works with NCAR to develop and implement a highly-integrated strategy for reaching both formal and informal audiences through programs that range from events and exhibits to professional development (for scientists and educators) and bilingual distance learning. The hallmarks of our program include close collaboration with scientists, multi-purposing resources where appropriate for maximum efficiency, and a commitment to engage populations historically underrepresented in science in the geosciences.

  8. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art approaches to formal verification techniques to seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issues.

  9. Computational Psychiatry and the Challenge of Schizophrenia

    Science.gov (United States)

    Murray, John D.; Chekroud, Adam M.; Corlett, Philip R.; Yang, Genevieve; Wang, Xiao-Jing; Anticevic, Alan

    2017-01-01

    Abstract Schizophrenia research is plagued by enormous challenges in integrating and analyzing large datasets and difficulties developing formal theories related to the etiology, pathophysiology, and treatment of this disorder. Computational psychiatry provides a path to enhance analyses of these large and complex datasets and to promote the development and refinement of formal models for features of this disorder. This presentation introduces the reader to the notion of computational psychiatry and describes discovery-oriented and theory-driven applications to schizophrenia involving machine learning, reinforcement learning theory, and biophysically-informed neural circuit models. PMID:28338845

  10. Time-dependent internal density functional theory formalism and Kohn-Sham scheme for self-bound systems

    International Nuclear Information System (INIS)

    Messud, Jeremie

    2009-01-01

    The stationary internal density functional theory (DFT) formalism and Kohn-Sham scheme are generalized to the time-dependent case. It is proven that, in the time-dependent case, the internal properties of a self-bound system (such as an atomic nucleus or a helium droplet) are all defined by the internal one-body density and the initial state. A time-dependent internal Kohn-Sham scheme is set up as a practical way to compute the internal density. The main difference from the traditional DFT formalism and Kohn-Sham scheme is the inclusion of the center-of-mass correlations in the functional.

  11. The software for automatic creation of the formal grammars used by speech recognition, computer vision, editable text conversion systems, and some new functions

    Science.gov (United States)

    Kardava, Irakli; Tadyszak, Krzysztof; Gulua, Nana; Jurga, Stefan

    2017-02-01

    For more flexible environmental perception by artificial intelligence, there need to exist supporting software modules that can automate the creation of a specific language syntax and carry out further analysis for relevant decisions based on semantic functions. Following our proposed approach, it is possible to create pairs of formal rules for given sentences (in the case of natural languages) or statements (in the case of special languages) with the help of computer vision, speech recognition or an editable text conversion system, for further automatic improvement. In other words, we have developed an approach by which the automation of the training process of artificial intelligence can be significantly improved, which as a result will give it a higher level of self-developing skills, independently of us (the users). On the basis of our approach we have developed a demo version of the software, which includes the algorithm and source code implementing all of the above-mentioned components (computer vision, speech recognition and an editable text conversion system). The program is able to work in multi-stream mode and to simultaneously create a syntax based on information received from several sources.

  12. A Survey of Formal Methods in Software Development

    DEFF Research Database (Denmark)

    Bjørner, Dines

    2012-01-01

    The use of formal methods and formal techniques in industry is steadily growing. In this survey we shall characterise what we mean by software development and by a formal method; briefly overview a history of formal specification languages - some of which are: VDM (Vienna Development Method, 1974), …; the need for multi-language formalisation (Petri Nets, MSC, StateChart, Temporal Logics); the sociology of university and industry acceptance of formal methods; the inevitability of the use of formal software development methods; while referring to seminal monographs and textbooks on formal methods.

  13. The Integration of Formal and Non-formal Education: The Dutch “brede school”

    Directory of Open Access Journals (Sweden)

    du Bois-Reymond, Manuela

    2009-12-01

    Full Text Available The Dutch “brede school” (BS) development originates in the 1990s and has spread unevenly since: quicker in the primary than secondary educational sector. In 2007, there were about 1000 primary and 350 secondary BS schools and it is the intention of the government as well as the individual municipalities to extend that number and make the BS the dominant school form of the near future. In the primary sector, a BS cooperates with crèche and preschool facilities, besides possible other neighborhood partners. The main targets are, first, to enhance educational opportunities, particularly for children with little (western) cultural capital, and secondly to increase women’s labor market participation by providing extra familial care for babies and small children. All primary schools are now obliged to provide such care. In the secondary sector, a BS is less neighborhood-orientated than a primary BS because those schools are bigger and more often located in different buildings. As in the primary sector, there are broad and narrower BS, the first profile cooperating with many non-formal and other partners and facilities and the second with few. On the whole, there is a wide variety of BS schools, with different profiles and objectives, dependent on the needs and wishes of the initiators and the neighborhood. A BS is always the result of initiatives of the respective school and its partners: parents, other neighborhood associations, municipality etc. BS schools are not enforced by the government although the general trend will be that existing school organizations transform into BS. The integration of formal and non-formal education and learning is more advanced in primary than secondary schools. In secondary education, vocational as well as general, there is a clear dominance of formal education; the non-formal curriculum serves mainly two lines and objectives: first, provide attractive leisure activities and second provide compensatory

  14. Formal Specification and Verification of Real-Time Multi-Agent Systems using Timed-Arc Petri Nets

    Directory of Open Access Journals (Sweden)

    QASIM, A.

    2015-08-01

    Full Text Available In this study we have formally specified and verified the actions of communicating real-time software agents (RTAgents). Software agents are expected to work autonomously and deal with unfamiliar situations astutely. Achieving one hundred percent test-case coverage for these agents has always been a problem due to limited resources. Also, a high degree of dependability and predictability is expected from real-time software agents. In this research we have used Timed-Arc Petri Nets for formal specification and verification. Formal specification of e-agents has been done in the past using Linear Temporal Logic (LTL), but we believe that Timed-Arc Petri Nets, being more visually expressive, provide a richer framework for such formalism. A case study of a Stock Market System (SMS) based on the Real Time Multi Agent System framework (RTMAS) using Timed-Arc Petri Nets is taken to illustrate the proposed modeling approach. The model was verified using the AF, AG, EG, and EF fragments of Timed Computational Tree Logic (TCTL) via translations to timed automata.

  15. An Efficient UD-Based Algorithm for the Computation of Maximum Likelihood Sensitivity of Continuous-Discrete Systems

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik

    2016-01-01

    This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms. This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to maximum likelihood estimation based on finite difference gradient computation, we get a significant speedup.
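
    The UD factorization the algorithm builds on writes a symmetric positive-definite covariance as P = U D U^T, with U unit upper-triangular and D diagonal. A compact textbook-style sketch of that factorization (not the paper's full filter):

    ```python
    import numpy as np

    def ud_factor(P):
        """Factor symmetric positive-definite P as U @ diag(d) @ U.T with
        U unit upper-triangular: the factorization used in UD (square-root)
        covariance filtering."""
        P = P.astype(float).copy()
        n = P.shape[0]
        U = np.eye(n)
        d = np.zeros(n)
        for j in range(n - 1, -1, -1):
            d[j] = P[j, j]
            U[:j, j] = P[:j, j] / d[j]
            # Downdate the leading submatrix before moving left.
            P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
        return U, d

    P = np.array([[4.0, 2.0, 0.5],
                  [2.0, 3.0, 1.0],
                  [0.5, 1.0, 2.0]])
    U, d = ud_factor(P)
    assert np.allclose(U @ np.diag(d) @ U.T, P)
    ```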

  16. Efficient computation of spaced seeds

    Directory of Open Access Journals (Sweden)

    Ilie Silvana

    2012-02-01

    Full Text Available Abstract Background: The most frequently used tools in bioinformatics are those searching for similarities, or local alignments, between biological sequences. Since the exact dynamic programming algorithm is quadratic, linear-time heuristics such as BLAST are used. Spaced seeds are much more sensitive than the consecutive seed of BLAST, and using several seeds represents the current state of the art in approximate search for biological sequences. The most important aspect is computing highly sensitive seeds. Since the problem seems hard, heuristic algorithms are used. The leading software in the common Bernoulli model is the SpEED program. Findings: SpEED uses a hill-climbing method based on the overlap complexity heuristic. We propose a new algorithm for this heuristic that improves its speed by over one order of magnitude. We use the new implementation to compute improved seeds for several software programs. We also compute multiple seeds of the same weight as MegaBLAST that greatly improve its sensitivity. Conclusion: Multiple spaced seeds are being successfully used in bioinformatics software programs. Enabling researchers to compute high-quality seeds very fast will help expand the range of their applications.
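
    To make the object concrete: a spaced seed is a 0/1 pattern whose '1' positions must match between two sequence windows, while '0' positions are wildcards. Below is a toy hit-finder with an invented seed; seed design itself (what SpEED optimizes) is the hard part and is not shown:

    ```python
    # A spaced seed is a 0/1 pattern: '1' positions must match, '0' are wildcards.
    seed = "110101"                     # invented example: weight 4, length 6
    care = [i for i, c in enumerate(seed) if c == "1"]

    def seed_hits(a, b):
        """All (i, j) window pairs where a and b agree on the '1' positions."""
        L = len(seed)
        index = {}
        for i in range(len(a) - L + 1):
            key = "".join(a[i + k] for k in care)
            index.setdefault(key, []).append(i)
        return [(i, j)
                for j in range(len(b) - L + 1)
                for i in index.get("".join(b[j + k] for k in care), [])]

    # The mismatch at position 2 (G vs C) falls on a wildcard, so it still hits.
    print(seed_hits("ACGTACGT", "ACCTACGA"))    # -> [(0, 0)]
    ```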

  17. A Formal Methods Approach to the Analysis of Mode Confusion

    Science.gov (United States)

    Butler, Ricky W.; Miller, Steven P.; Potts, James N.; Carreno, Victor A.

    2004-01-01

    The goal of the new NASA Aviation Safety Program (AvSP) is to reduce the civil aviation fatal accident rate by 80% in ten years and 90% in twenty years. This program is being driven by the accident data with a focus on the most recent history. Pilot error is the most commonly cited cause for fatal accidents (up to 70%) and obviously must be given major consideration in this program. While the greatest source of pilot error is the loss of situation awareness, mode confusion is increasingly becoming a major contributor as well. The January 30, 1995 issue of Aviation Week lists 184 incidents and accidents involving mode awareness, including the Bangalore A320 crash 2/14/90, the Strasbourg A320 crash 1/20/92, the Mulhouse-Habsheim A320 crash 6/26/88, and the Toulouse A330 crash 6/30/94. These incidents and accidents reveal that pilots sometimes become confused about what the cockpit automation is doing. Consequently, human factors research is an obvious investment area. However, even a cursory look at the accident data reveals that the mode confusion problem is much deeper than just training deficiencies and a lack of human-oriented design. This is readily acknowledged by human factors experts. It seems that further progress in human factors must come through a deeper scrutiny of the internals of the automation. It is in this arena that formal methods can contribute. Formal methods refers to the use of techniques from logic and discrete mathematics in the specification, design, and verification of computer systems, both hardware and software. The fundamental goal of formal methods is to capture requirements, designs and implementations in a mathematically based model that can be analyzed in a rigorous manner. Research in formal methods is aimed at automating this analysis as much as possible. By capturing the internal behavior of a flight deck in a rigorous and detailed formal model, the dark corners of a design can be analyzed. This paper will explore how formal

  18. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology.

  19. Discrete mathematics, formal methods, the Z schema and the software life cycle

    Science.gov (United States)

    Bown, Rodney L.

    1991-01-01

    The proper role and scope for the use of discrete mathematics and formal methods in support of engineering the security and integrity of components within deployed computer systems are discussed. It is proposed that the Z schema can be used as the specification language to capture the precise definition of system and component interfaces. This can be accomplished with an object oriented development paradigm.

  20. An Overview of Computer-Based Natural Language Processing.

    Science.gov (United States)

    Gevarter, William B.

    Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines using natural languages (English, Japanese, German, etc.) rather than formal computer languages. NLP is a major research area in the fields of artificial intelligence and computational linguistics. Commercial…

  1. Single neuron computation

    CERN Document Server

    McKenna, Thomas M; Zornetzer, Steven F

    1992-01-01

    This book contains twenty-two original contributions that provide a comprehensive overview of computational approaches to understanding a single neuron structure. The focus on cellular-level processes is twofold. From a computational neuroscience perspective, a thorough understanding of the information processing performed by single neurons leads to an understanding of circuit- and systems-level activity. From the standpoint of artificial neural networks (ANNs), a single real neuron is as complex an operational unit as an entire ANN, and formalizing the complex computations performed by real n

  2. Formal model-based development for safety-critical embedded software

    International Nuclear Information System (INIS)

    Kim, Jin Hyun; Choi, Jin Young

    2005-01-01

    Safety-critical embedded software for nuclear I and C systems is developed under safety and reliability regulation. A programmable logic controller (PLC) is a computer system for the instrumentation and control (I and C) systems of nuclear power plants. A PLC consists of various I and C logics in software, including a real-time operating system (RTOS). Hence, errors related to the RTOS should be detected and eliminated during development. In practice, the verification and validation of the RTOS is performed in a test procedure, in which many test tasks are embedded in the RTOS and run under a test environment. But the test process alone cannot guarantee the safety and reliability of the RTOS. Therefore, in this paper, we introduce the application of formal methods to the development of software for the PLC. In particular, we apply formal methods to the development of an RTOS for the PLC, which is at a safety-critical level. In this development, we use the statecharts of I-Logix to specify, and model checking to verify, the specification

  3. Formal model-based development for safety-critical embedded software

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Hyun; Choi, Jin Young [Korea University, seoul (Korea, Republic of)

    2005-11-15

    Safety-critical embedded software for nuclear I and C systems is developed under safety and reliability regulation. A programmable logic controller (PLC) is a computer system for the instrumentation and control (I and C) systems of nuclear power plants. A PLC consists of various I and C logics in software, including a real-time operating system (RTOS). Hence, errors related to the RTOS should be detected and eliminated during development. In practice, the verification and validation of the RTOS is performed in a test procedure, in which many test tasks are embedded in the RTOS and run under a test environment. But the test process alone cannot guarantee the safety and reliability of the RTOS. Therefore, in this paper, we introduce the application of formal methods to the development of software for the PLC. In particular, we apply formal methods to the development of an RTOS for the PLC, which is at a safety-critical level. In this development, we use the statecharts of I-Logix to specify, and model checking to verify, the specification.

  4. An Efficient and Secure m-IPS Scheme of Mobile Devices for Human-Centric Computing

    Directory of Open Access Journals (Sweden)

    Young-Sik Jeong

    2014-01-01

    Full Text Available Recent rapid developments in wireless and mobile IT technologies have led to their application in many real-life areas, such as disasters, home networks, mobile social networks, medical services, industry, schools, and the military. Business/work environments have become wire/wireless, integrated with wireless networks. Although the increase in the use of mobile devices that can use wireless networks increases work efficiency and provides greater convenience, wireless access to networks represents a security threat. Currently, wireless intrusion prevention systems (IPSs) are used to prevent wireless security threats. However, these are not an ideal security measure for businesses that utilize mobile devices because they do not take account of temporal-spatial and role information factors. Therefore, in this paper, an efficient and secure mobile-IPS (m-IPS) is proposed for businesses utilizing mobile devices in mobile environments for human-centric computing. The m-IPS system incorporates temporal-spatial awareness in human-centric computing with various mobile devices and checks users' temporal-spatial information, profiles, and role information to provide precise access control. The application of m-IPS can also be extended to the Internet of Things (IoT), one of the important advanced technologies for fully supporting the human-centric computing environment, for real ubiquitous fields with mobile devices.

  5. Increasing the computational efficient of digital cross correlation by a vectorization method

    Science.gov (United States)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in speedups of 6.387 and 36.044 times, respectively, compared with the performance of looped expressions. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method, as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high-speed camera as well as a fiber optic system to measure the transient displacement of a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domains, with discrepancies of only 0.68%. Numerical and experimental results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
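
    The paper's implementation is in MATLAB; the same loop-versus-vectorized contrast can be sketched in NumPy terms (an analogue for illustration, not the authors' code):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x, y = rng.standard_normal(512), rng.standard_normal(512)

    def xcorr_loop(x, y):
        """Looped cross-correlation over all lags (slow reference version)."""
        n = len(x)
        out = np.zeros(2 * n - 1)
        for lag in range(-n + 1, n):
            lo, hi = max(0, -lag), min(n, n - lag)
            s = 0.0
            for i in range(lo, hi):
                s += x[i] * y[i + lag]
            out[lag + n - 1] = s
        return out

    def xcorr_vec(x, y):
        """Vectorized equivalent: one library call, no Python-level loop."""
        return np.correlate(y, x, mode="full")

    assert np.allclose(xcorr_loop(x, y), xcorr_vec(x, y))
    ```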

  6. Guidelines for enhancing learning curiosity of non-formal student using daily life context

    Directory of Open Access Journals (Sweden)

    Mongkondaw Ornwipa

    2016-01-01

    Full Text Available The purposes of this study were to examine the learning curiosity of students, teachers and administrators, and to suggest ways of enhancing the learning curiosity of non-formal education students using daily life context. A sample of 400 non-formal education students was selected by multi-stage sampling, comprising 184 students at the lower-secondary level and 216 at the high-school level, together with 40 teachers of non-formal education and 20 district-level administrators of the Office of the Non-Formal and Informal Education. The research tools were a student questionnaire, a questionnaire on teachers' and administrators' awareness of students' learning curiosity, and transcripts of focus group discussions. Quantitative data were analysed statistically with SPSS, and qualitative data were analysed by content analysis. The results were as follows: students' learning curiosity was at a high level, and student support for learning curiosity in occupations was at a high level; teachers rated students' learning curiosity at a middle level and suggested that support should address academic matters, work and family, in that order; administrators of the Office of the Non-Formal and Informal Education likewise rated students' learning curiosity at a middle level. Students should first be given occupational knowledge because of their lifestyle in north-eastern Thailand, where they need to support their families. Most citizens are agriculturists, gardeners, farmers or merchants, so, granting precedence over academic education, family and socialization, occupational development was given priority.

  7. Formal Development of the HRP Prover - Part 1: Syntax and Semantics

    International Nuclear Information System (INIS)

    Sivertsen, Terje

    1996-01-01

    This report describes the formal development of a new version of the HRP Prover. The new version of the tool will have functionality almost identical to the current version, but is developed in accordance with established principles for applying algebraic specification in formal software development. The development project provides results of relevance to the formal development of a wide range of language-oriented tools, including programming language compilers, as well as to the automatic generation of code from specifications. Since the overall scope of this report is the analysis of algebraic specifications, emphasis is given to topics related to what is usually understood as the 'front end' of compilers. This includes lexical and syntax analysis of the specifications, static semantics through type checking, and dynamic semantics through evaluation. All the different phases are specified in algebraic specification and supported by the current version of the HRP Prover. In subsequent work, the completed parts of the new version will complement the tool support in the development. The work presented will be followed up by formal specification of theorem proving and transformation, as well as code generation into conventional programming languages. The new version of the HRP Prover is incrementally developed in coherence with the specifications produced in these activities. At the same time, the development of the tool demonstrates the efficient use of the methodology through real application to an increasingly important class of software. (author)

  8. Formalizing Probabilistic Safety Claims

    Science.gov (United States)

    Herencia-Zapana, Heber; Hagen, George E.; Narkawicz, Anthony J.

    2011-01-01

    A safety claim for a system is a statement that the system, which is subject to hazardous conditions, satisfies a given set of properties. Following work by John Rushby and Bev Littlewood, this paper presents a mathematical framework that can be used to state and formally prove probabilistic safety claims. It also enables hazardous conditions, their uncertainties, and their interactions to be integrated into the safety claim. This framework provides a formal description of the probabilistic composition of an arbitrary number of hazardous conditions and their effects on system behavior. An example is given of a probabilistic safety claim for a conflict detection algorithm for aircraft in a 2D airspace. The motivation for developing this mathematical framework is that it can be used in an automated theorem prover to formally verify safety claims.

  9. Augmenting Reality and Formality of Informal and Non-Formal Settings to Enhance Blended Learning

    Science.gov (United States)

    Pérez-Sanagustin, Mar; Hernández-Leo, Davinia; Santos, Patricia; Kloos, Carlos Delgado; Blat, Josep

    2014-01-01

    Visits to museums and city tours have been part of higher and secondary education curriculum activities for many years. However these activities are typically considered "less formal" when compared to those carried out in the classroom, mainly because they take place in informal or non-formal settings. Augmented Reality (AR) technologies…

  10. Superfield formalism

    Indian Academy of Sciences (India)

    dimensional superfields, is a clear signature of the presence of the (anti-)BRST invariance in the original 4D theory. Keywords. Non-Abelian 1-form gauge theory; Dirac fields; (anti-)Becchi–Rouet–Stora–Tyutin invariance; superfield formalism; ...

  11. Errors in measuring absorbed radiation and computing crop radiation use efficiency

    International Nuclear Information System (INIS)

    Gallo, K.P.; Daughtry, C.S.T.; Wiegand, C.L.

    1993-01-01

    Radiation use efficiency (RUE) is often a crucial component of crop growth models that relate dry matter production to energy received by the crop. RUE is a ratio that has units g J^-1 if defined as phytomass per unit of energy received, and units J J^-1 if defined as the energy content of phytomass per unit of energy received. Both the numerator and denominator in the computation of RUE can vary with experimental assumptions and methodologies. The objectives of this study were to examine the effect that different methods of measuring the numerator and denominator have on the RUE of corn (Zea mays L.) and to illustrate this variation with experimental data. Computational methods examined included (i) direct measurements of the fraction of photosynthetically active radiation absorbed (fA), (ii) estimates of fA derived from leaf area index (LAI), and (iii) estimates of fA derived from spectral vegetation indices. Direct measurements of absorbed PAR from planting to physiological maturity of corn were consistently greater than the indirect estimates based on green LAI or the spectral vegetation indices. Consequently, the RUE calculated using directly measured absorbed PAR was lower than the RUE calculated using the indirect measures of absorbed PAR. For crops that contain senesced vegetation, green LAI and the spectral vegetation indices provide appropriate estimates of the fraction of PAR absorbed by a crop canopy and, thus, accurate estimates of crop radiation use efficiency.
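
    To make the dependence of RUE on the choice of absorption estimate concrete, the following Python sketch computes RUE twice from the same season totals: once with a directly measured fraction of absorbed PAR, and once with an fA estimated from green LAI through a Beer-Lambert approximation, fA = 1 - exp(-k * LAI). All numbers here are illustrative assumptions, not data from the study.

      import numpy as np

      incident_par = 250.0   # MJ m^-2, cumulative incident PAR over the season
      dry_matter = 900.0     # g m^-2, phytomass produced

      fa_measured = 0.90                    # directly measured fraction absorbed
      k, lai = 0.65, 3.0                    # extinction coefficient, green LAI
      fa_from_lai = 1.0 - np.exp(-k * lai)  # Beer-Lambert estimate (~0.86)

      for label, fa in [("measured fA", fa_measured), ("LAI-based fA", fa_from_lai)]:
          absorbed = fa * incident_par      # MJ m^-2 of absorbed PAR
          rue = dry_matter / absorbed       # g MJ^-1
          print(f"{label}: fA = {fa:.2f}, RUE = {rue:.2f} g MJ^-1")

    With these numbers the directly measured fA is the larger of the two, so it yields the smaller RUE, which is the pattern the study reports.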

  12. An accurate and computationally efficient small-scale nonlinear FEA of flexible risers

    OpenAIRE

    Rahmati, MT; Bahai, H; Alfano, G

    2016-01-01

    This paper presents a highly efficient small-scale, detailed finite-element modelling method for flexible risers which can be effectively implemented in a fully-nested (FE2) multiscale analysis based on computational homogenisation. By exploiting cyclic symmetry and applying periodic boundary conditions, only a small fraction of a flexible pipe is used for a detailed nonlinear finite-element analysis at the small scale. In this model, using three-dimensional elements, all layer components are...

  13. A comparison of efficient methods for the computation of Born gluon amplitudes

    International Nuclear Information System (INIS)

    Dinsdale, Michael; Ternick, Marko; Weinzierl, Stefan

    2006-01-01

    We compare four different methods for the numerical computation of the pure gluonic amplitudes in the Born approximation. We are in particular interested in the efficiency of the various methods as the number n of the external particles increases. In addition we investigate the numerical accuracy in critical phase space regions. The methods considered are based on (i) Berends-Giele recurrence relations, (ii) scalar diagrams, (iii) MHV vertices and (iv) BCF recursion relations

  14. An Efficient Integer Coding and Computing Method for Multiscale Time Segment

    Directory of Open Access Journals (Sweden)

    TONG Xiaochong

    2016-12-01

    Full Text Available This article focuses on the problems and status of current time segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). The approach utilizes the tree structure and size ordering formed among integers to reflect the relationships among multi-scale time segments (order, inclusion/containment, intersection, etc.), and thereby achieves a unified integer coding process for multi-scale time. On this foundation, the research also studies computing methods for calculating the time relationships of MTSIC codes, to support efficient calculation and querying based on time segments, and preliminarily discusses the application methods and prospects of MTSIC. Tests indicate that the implementation of MTSIC is convenient and reliable, that transformation between it and the traditional method is convenient, and that it achieves very high efficiency in querying and calculation.
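
    The record does not give the coding function itself, but the idea of mapping multi-scale segments to integers whose ordering and bit structure expose containment can be illustrated with a dyadic, heap-style numbering. The sketch below is one plausible realization under that assumption, not the published MTSIC scheme.

      def encode(level: int, index: int) -> int:
          # Heap-style integer code for the index-th segment at a given
          # dyadic level (level 0 = whole axis; each level halves segments).
          return (1 << level) + index

      def level_of(code: int) -> int:
          return code.bit_length() - 1

      def contains(a: int, b: int) -> bool:
          # True if segment a contains segment b (a is an ancestor of b):
          # shifting b's code up to a's level must reproduce a's code.
          shift = level_of(b) - level_of(a)
          return shift >= 0 and (b >> shift) == a

      root = encode(0, 0)      # whole axis, code 1
      quarter = encode(2, 1)   # second quarter at level 2, code 5
      assert contains(root, quarter)
      assert not contains(quarter, root)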

  15. New Computational Approach to Electron Transport in Irregular Graphene Nanostructures

    Science.gov (United States)

    Mason, Douglas; Heller, Eric; Prendergast, David; Neaton, Jeffrey

    2009-03-01

    For novel graphene devices of nanoscale-to-macroscopic scale, many aspects of their transport properties are not easily understood due to difficulties in fabricating devices with regular edges. Here we develop a framework to efficiently calculate and potentially screen electronic transport properties of arbitrary nanoscale graphene device structures. A generalization of the established recursive Green's function method is presented, providing access to arbitrary device and lead geometries with substantial computer-time savings. Using single-orbital nearest-neighbor tight-binding models and the Green's function-Landauer scattering formalism, we will explore the transmission function of irregular two-dimensional graphene-based nanostructures with arbitrary lead orientation. Prepared by LBNL under contract DE-AC02-05CH11231 and supported by the U.S. Dept. of Energy Computer Science Graduate Fellowship under grant DE-FG02-97ER25308.
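
    As a baseline for what such a framework computes, the sketch below evaluates the Landauer transmission of a small tight-binding device by direct inversion of the retarded Green's function, with analytic self-energies for semi-infinite 1D leads. The recursive method described in the abstract reproduces these numbers far more cheaply for large, irregular devices; this direct version only illustrates the quantities involved, and all parameters are illustrative.

      import numpy as np

      def lead_self_energy(E, t=1.0):
          # Analytic retarded surface Green's function of a semi-infinite
          # 1D tight-binding lead (valid for |E| < 2t), times the coupling squared.
          g_surf = (E - 1j * np.sqrt(4 * t**2 - E**2)) / (2 * t**2)
          return t**2 * g_surf

      def transmission(E, H_device, t=1.0):
          # Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger].
          n = H_device.shape[0]
          sigma_L = np.zeros((n, n), complex); sigma_L[0, 0] = lead_self_energy(E, t)
          sigma_R = np.zeros((n, n), complex); sigma_R[-1, -1] = lead_self_energy(E, t)
          G = np.linalg.inv(E * np.eye(n) - H_device - sigma_L - sigma_R)
          gamma_L = 1j * (sigma_L - sigma_L.conj().T)
          gamma_R = 1j * (sigma_R - sigma_R.conj().T)
          return np.trace(gamma_L @ G @ gamma_R @ G.conj().T).real

      # Perfect 5-site chain: transmission should be ~1 inside the band.
      H = -1.0 * (np.eye(5, k=1) + np.eye(5, k=-1))
      print(transmission(0.5, H))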

  16. MOOC & B-Learning: Students' Barriers and Satisfaction in Formal and Non-Formal Learning Environments

    Science.gov (United States)

    Gutiérrez-Santiuste, Elba; Gámiz-Sánchez, Vanesa-M.; Gutiérrez-Pérez, Jose

    2015-01-01

    The study presents a comparative analysis of two virtual learning formats: one non-formal through a Massive Open Online Course (MOOC) and the other formal through b-learning. We compare the communication barriers and the satisfaction perceived by the students (N = 249) by developing a qualitative analysis using semi-structured questionnaires and…

  17. Formal Methods Applications in Air Transportation

    Science.gov (United States)

    Farley, Todd

    2009-01-01

    The U.S. air transportation system is the most productive in the world, moving far more people and goods than any other. It is also the safest system in the world, thanks in part to its venerable air traffic control system. But as demand for air travel continues to grow, the air traffic control system's aging infrastructure and labor-intensive procedures are impinging on its ability to keep pace with demand. And that impinges on the growth of our economy. Air traffic control modernization has long held the promise of a more efficient air transportation system. Part of NASA's current mission is to develop advanced automation and operational concepts that will expand the capacity of our national airspace system while still maintaining its excellent record for safety. It is a challenging mission, as efforts to modernize have, for decades, been hamstrung by the inability to assure safety to the satisfaction of system operators, system regulators, and/or the traveling public. In this talk, we'll provide a brief history of air traffic control, focusing on the tension between efficiency and safety assurance, and the promise of formal methods going forward.

  18. Asymptotic optimality and efficient computation of the leave-subject-out cross-validation

    KAUST Repository

    Xu, Ganggang

    2012-12-01

    Although the leave-subject-out cross-validation (CV) has been widely used in practice for tuning parameter selection for various nonparametric and semiparametric models of longitudinal data, its theoretical property is unknown and solving the associated optimization problem is computationally expensive, especially when there are multiple tuning parameters. In this paper, by focusing on the penalized spline method, we show that the leave-subject-out CV is optimal in the sense that it is asymptotically equivalent to the empirical squared error loss function minimization. An efficient Newton-type algorithm is developed to compute the penalty parameters that optimize the CV criterion. Simulated and real data are used to demonstrate the effectiveness of the leave-subject-out CV in selecting both the penalty parameters and the working correlation matrix. © 2012 Institute of Mathematical Statistics.
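
    The mechanics of the criterion are simple to sketch: all rows belonging to one subject are held out together, the model is refit on the remaining subjects, and the held-out squared error is accumulated. The Python fragment below illustrates this with ridge regression standing in for the penalized spline fit, and a grid search standing in for the paper's Newton-type algorithm; all names and data are illustrative.

      import numpy as np

      def leave_subject_out_cv(fit, predict, X, y, subjects, penalty):
          # Leave-subject-out CV score for one candidate penalty value.
          # All rows sharing a subject id are held out together.
          score = 0.0
          for s in np.unique(subjects):
              hold = subjects == s
              model = fit(X[~hold], y[~hold], penalty)
              resid = y[hold] - predict(model, X[hold])
              score += np.sum(resid ** 2)
          return score / len(y)

      # Ridge regression as a simple stand-in for the penalized spline fit.
      def fit(X, y, lam):
          return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

      def predict(beta, X):
          return X @ beta

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 5)); beta = rng.normal(size=5)
      y = X @ beta + rng.normal(size=100)
      subjects = np.repeat(np.arange(20), 5)   # 20 subjects, 5 measures each
      best = min([0.01, 0.1, 1.0, 10.0],
                 key=lambda lam: leave_subject_out_cv(fit, predict, X, y, subjects, lam))
      print("selected penalty:", best)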

  19. Computationally efficient method for optical simulation of solar cells and their applications

    Science.gov (United States)

    Semenikhin, I.; Zanuccoli, M.; Fiegna, C.; Vyurkov, V.; Sangiorgi, E.

    2013-01-01

    This paper presents two novel implementations of the Differential method to solve the Maxwell equations in nanostructured optoelectronic solid state devices. The first proposed implementation is based on an improved and computationally efficient T-matrix formulation that adopts multiple-precision arithmetic to tackle the numerical instability problem which arises due to evanescent modes. The second implementation adopts an iterative approach that achieves low computational complexity, O(N log N) or better. The proposed algorithms may work with structures with arbitrary spatial variation of the permittivity. The developed two-dimensional numerical simulator is applied to analyze the dependence of the absorption characteristics of a thin silicon slab on the morphology of the front interface and on the angle of incidence of the radiation with respect to the device surface.

  20. Asymptotic optimality and efficient computation of the leave-subject-out cross-validation

    KAUST Repository

    Xu, Ganggang; Huang, Jianhua Z.

    2012-01-01

    Although the leave-subject-out cross-validation (CV) has been widely used in practice for tuning parameter selection for various nonparametric and semiparametric models of longitudinal data, its theoretical property is unknown and solving the associated optimization problem is computationally expensive, especially when there are multiple tuning parameters. In this paper, by focusing on the penalized spline method, we show that the leave-subject-out CV is optimal in the sense that it is asymptotically equivalent to the empirical squared error loss function minimization. An efficient Newton-type algorithm is developed to compute the penalty parameters that optimize the CV criterion. Simulated and real data are used to demonstrate the effectiveness of the leave-subject-out CV in selecting both the penalty parameters and the working correlation matrix. © 2012 Institute of Mathematical Statistics.

  1. Many-core technologies: The move to energy-efficient, high-throughput x86 computing (TFLOPS on a chip)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms at all levels of integration and programming to achieve higher performance and energy efficiency. Especially in the area of High-Performance Computing (HPC) users can entertain a combination of different hardware and software parallel architectures and programming environments. Those technologies range from vectorization and SIMD computation over shared memory multi-threading (e.g. OpenMP) to distributed memory message passing (e.g. MPI) on cluster systems. We will discuss HPC industry trends and Intel's approach to it from processor/system architectures and research activities to hardware and software tools technologies. This includes the recently announced new Intel(r) Many Integrated Core (MIC) architecture for highly-parallel workloads and general purpose, energy efficient TFLOPS performance, some of its architectural features and its programming environment. At the end we will have a br...

  2. Formalization of hydrocarbon conversion scheme of catalytic cracking for mathematical model development

    Science.gov (United States)

    Nazarova, G.; Ivashkina, E.; Ivanchina, E.; Kiseleva, S.; Stebeneva, V.

    2015-11-01

    The issue of improving the energy and resource efficiency of advanced petroleum processing can be addressed by developing an adequate mathematical model, based on the physical and chemical regularities of the process reactions, with high predictive potential for advanced petroleum refining. In this work, a formalized hydrocarbon conversion scheme for catalytic cracking was developed using thermodynamic reaction parameters defined by Density Functional Theory. The list of reactions was compiled according to the structural-group composition of the feedstock, determined by the n-d-M method and the Hazelvuda method, the qualitative composition of the feedstock defined by gas chromatography-mass spectrometry, and the individual composition of the catalytic cracking gasoline fraction. The formalized hydrocarbon conversion scheme of catalytic cracking will become the basis for the development of the catalytic cracking kinetic model.

  3. Quantum many-body effects in x-ray spectra efficiently computed using a basic graph algorithm

    Science.gov (United States)

    Liang, Yufeng; Prendergast, David

    2018-05-01

    The growing interest in using x-ray spectroscopy for refined materials characterization calls for an accurate electronic-structure theory to interpret the x-ray near-edge fine structure. In this work, we propose an efficient and unified framework to describe all the many-electron processes in a Fermi liquid after a sudden perturbation (such as a core hole). This problem has been visited by the Mahan-Nozières-De Dominicis (MND) theory, but it is intractable to implement various Feynman diagrams within first-principles calculations. Here, we adopt a nondiagrammatic approach and treat all the many-electron processes in the MND theory on an equal footing. Starting from a recently introduced determinant formalism [Phys. Rev. Lett. 118, 096402 (2017), 10.1103/PhysRevLett.118.096402], we exploit the linear dependence of determinants describing different final states involved in the spectral calculations. An elementary graph algorithm, breadth-first search, can be used to quickly identify the important determinants for shaping the spectrum, which avoids the need to evaluate a great number of vanishingly small terms. This search algorithm is performed over the tree structure of the many-body expansion, which mimics a path-finding process. We demonstrate that the determinantal approach is computationally inexpensive even for obtaining x-ray spectra of extended systems. Using Kohn-Sham orbitals from two self-consistent fields (ground and core-excited state) as input for constructing the determinants, the calculated x-ray spectra for a number of transition metal oxides are in good agreement with experiments. Many-electron aspects beyond the Bethe-Salpeter equation, as captured by this approach, are also discussed, such as shakeup excitations and many-body wave function overlap considered in Anderson's orthogonality catastrophe.
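
    The search strategy itself is generic: expand the tree of many-body configurations breadth-first and keep only determinants whose contribution exceeds a cutoff, pruning entire subtrees below negligible terms. The Python fragment below shows the pattern only; the node representation, children, and weight functions are placeholders for the actual determinant bookkeeping, and the geometrically decaying weights in the demo are purely illustrative.

      from collections import deque

      def important_terms(root, children, weight, cutoff):
          # Breadth-first search over the expansion tree, keeping only
          # contributions above the cutoff and pruning below small ones.
          kept, queue = [], deque([root])
          while queue:
              node = queue.popleft()
              w = weight(node)
              if abs(w) < cutoff:
                  continue  # prune: descendants assumed even smaller
              kept.append((node, w))
              queue.extend(children(node))
          return kept

      # Toy demo: nodes are tuples of excitation indices; weights decay
      # geometrically with the number of excitations.
      children = lambda node: [node + (i,) for i in range(2)]
      weight = lambda node: 0.5 ** len(node)
      print(important_terms((), children, weight, cutoff=0.2))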

  4. Formal Social Norms and their Enforcement in Computational MAS by Automated Reasoning

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman; Kazík, O.

    2012-01-01

    Roč. 39, č. 1 (2012), s. 80-87 ISSN 1819-9224 Institutional support: RVO:67985807 Keywords : role model * description logic * integrity constraints * computational intelligence Subject RIV: IN - Informatics, Computer Science

  5. On the Design of Energy-Efficient Location Tracking Mechanism in Location-Aware Computing

    Directory of Open Access Journals (Sweden)

    MoonBae Song

    2005-01-01

    Full Text Available The battery, in contrast to other hardware, is not governed by Moore's Law. In location-aware computing, power is a very limited resource. Consequently, a number of promising techniques at various layers have recently been proposed to reduce energy consumption. This paper considers the problem of minimizing the energy used to track the location of a mobile user over a wireless link in mobile computing. An energy-efficient location update protocol should reduce the number of location update messages as far as possible and switch the device off for as long as possible; this can be achieved through the concept of mobility-awareness that we propose. For this purpose, this paper proposes a novel mobility model, called the state-based mobility model (SMM), to provide a more generalized framework for both describing the mobility of complexly moving objects and updating their location information. We also introduce the state-based location update protocol (SLUP) based on this mobility model. Extensive experiments on various synthetic datasets show that the proposed method improves energy efficiency by a factor of 2 to 3 with an additional 10% imprecision cost.
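
    As a flavor of how mobility-awareness can drive update scheduling, the toy policy below maps an estimated speed to a mobility state that fixes the location-report interval; the slower the state, the longer the radio can sleep between updates. The state names, thresholds, and intervals are illustrative assumptions, not the SMM/SLUP definitions from the paper.

      from dataclasses import dataclass

      @dataclass
      class MobilityState:
          name: str
          report_interval: float   # seconds between location updates

      STATES = {
          "paused":  MobilityState("paused", report_interval=600.0),
          "walking": MobilityState("walking", report_interval=60.0),
          "driving": MobilityState("driving", report_interval=10.0),
      }

      def classify(speed_mps: float) -> MobilityState:
          # Pick a mobility state from the current speed estimate.
          if speed_mps < 0.3:
              return STATES["paused"]
          if speed_mps < 3.0:
              return STATES["walking"]
          return STATES["driving"]

      # A paused device reports rarely and can sleep its radio in between;
      # faster movement shortens the interval.
      for v in (0.1, 1.4, 20.0):
          s = classify(v)
          print(f"{v:5.1f} m/s -> {s.name}: update every {s.report_interval:.0f} s")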

  6. Fast Ss-Ilm a Computationally Efficient Algorithm to Discover Socially Important Locations

    Science.gov (United States)

    Dokuz, A. S.; Celik, M.

    2017-11-01

    Socially important locations are places that are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensionality, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20%.

  7. FAST SS-ILM: A COMPUTATIONALLY EFFICIENT ALGORITHM TO DISCOVER SOCIALLY IMPORTANT LOCATIONS

    Directory of Open Access Journals (Sweden)

    A. S. Dokuz

    2017-11-01

    Full Text Available Socially important locations are places that are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensionality, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20%.

  8. An efficient computational method for a stochastic dynamic lot-sizing problem under service-level constraints

    NARCIS (Netherlands)

    Tarim, S.A.; Ozen, U.; Dogru, M.K.; Rossi, R.

    2011-01-01

    We provide an efficient computational approach to solve the mixed integer programming (MIP) model developed by Tarim and Kingsman [8] for solving a stochastic lot-sizing problem with service level constraints under the static–dynamic uncertainty strategy. The effectiveness of the proposed method

  9. Singular problems in shell theory. Computing and asymptotics

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Palencia, Evariste [Institut Jean Le Rond d' Alembert, Paris (France); Millet, Olivier [La Rochelle Univ. (France). LEPTIAB; Bechet, Fabien [Metz Univ. (France). LPMM

    2010-07-01

    It is known that deformations of thin shells exhibit peculiarities such as propagation of singularities, edge and internal layers, piecewise quasi-inextensional deformations, sensitive problems and others, leading in most cases to numerical locking phenomena in several forms, and to very poor quality of computations for small relative thickness. Most of these phenomena have a local and often anisotropic character (elongated in some directions), so that efficient numerical schemes should take them into consideration. This book deals with various topics in this context: general geometric formalism, analysis of singularities, numerical computation of thin shell problems, estimates for finite element approximation (including non-uniform and anisotropic meshes), mathematical considerations on boundary value problems in connection with sensitive problems encountered for very thin shells, and others. Most of the numerical computations presented here use an adaptive anisotropic mesh procedure which allows a good computation of the physical peculiarities on the one hand, and the possibility of performing automatic computations (without a previous mathematical description of the singularities) on the other. The book is recommended for PhD students, postgraduates and researchers who want to improve their knowledge of shell theory, in particular in the areas addressed (analysis of singularities, numerical computation of thin and very thin shell problems, sensitive problems). The book need not be read sequentially; readers may refer directly to the chapters of interest. (orig.)

  10. Formalizing the concept phase of product development

    NARCIS (Netherlands)

    Schuts, M.; Hooman, J.

    2015-01-01

    We discuss the use of formal techniques to improve the concept phase of product realisation. As an industrial application, a new concept of interventional X-ray systems has been formalized, using model checking techniques and the simulation of formal models.

  11. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    Science.gov (United States)

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
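
    The flavor of the algebra, building complex connection patterns by combining simpler sets with operators, can be imitated in a few lines. Below, a connection set is modeled as a predicate on (source, target) pairs; this is a toy functional rendering of the idea, not the published CSA notation or its C++/Python API.

      from itertools import product

      # A connection set is a predicate on (source, target) indices.
      def all_to_all():
          return lambda i, j: True

      def one_to_one():
          return lambda i, j: i == j

      def intersection(c1, c2):
          return lambda i, j: c1(i, j) and c2(i, j)

      def difference(c1, c2):
          return lambda i, j: c1(i, j) and not c2(i, j)

      # "All-to-all except self-connections", a common connectivity motif.
      no_self = difference(all_to_all(), one_to_one())
      edges = [(i, j) for i, j in product(range(4), repeat=2) if no_self(i, j)]
      print(edges)  # 12 connections among 4 neurons, none of them i -> i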

  12. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers, and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in every way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  13. A survey of formal languages for contracts

    DEFF Research Database (Denmark)

    Hvitved, Tom

    2010-01-01

    In this short paper we present the current status of formal languages and models for contracts. By a formal model is meant an unambiguous and rigorous representation of contracts, in order to enable their automatic validation, execution, and analysis, activities that are collectively referred to as contract lifecycle management (CLM). We present a set of formalism requirements, which represent features that any ideal contract model should support, and based on these we present a comparative survey of existing contract formalisms.

  14. Features of formalization of information for indistinct estimation and forecasting of activity of clients of the Call-center

    Directory of Open Access Journals (Sweden)

    M. G. Aznaurova

    2012-01-01

    Full Text Available Features of the formalization of input and output information for a fuzzy (indistinct) estimation module, which makes it possible to increase forecasting accuracy and the efficiency of resource planning, are considered.

  15. Formal specification and implementation of operations in information management systems

    International Nuclear Information System (INIS)

    Sandewall, E.

    1983-02-01

    Among information management systems we include general purpose systems, such as text editors and data editors (forms management systems), as well as special purpose systems such as mail systems and computer based calendars. Based on a method for formal specification of some aspects of IMS, namely the structure of the data base, the update operations, and the user dialogue, the paper shows how reasonable procedures for executing IMS operations can be written in the notation of a first-order theory, in such a way that the procedure is a logical consequence of the specification. (Author)

  16. Formal Solutions for Polarized Radiative Transfer. I. The DELO Family

    Energy Technology Data Exchange (ETDEWEB)

    Janett, Gioele; Carlin, Edgar S.; Steiner, Oskar; Belluzzi, Luca, E-mail: gioele.janett@irsol.ch [Istituto Ricerche Solari Locarno (IRSOL), 6605 Locarno-Monti (Switzerland)

    2017-05-10

    The discussion regarding the numerical integration of the polarized radiative transfer equation is still open and the comparison between the different numerical schemes proposed by different authors in the past is not fully clear. Aiming at facilitating the comprehension of the advantages and drawbacks of the different formal solvers, this work presents a reference paradigm for their characterization based on the concepts of order of accuracy, stability, and computational cost. Special attention is paid to understanding the numerical methods belonging to the Diagonal Element Lambda Operator family, in an attempt to highlight their specificities.

  17. 20 CFR 702.336 - Formal hearings; new issues.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Formal hearings; new issues. 702.336 Section... Procedures Formal Hearings § 702.336 Formal hearings; new issues. (a) If, during the course of the formal hearing, the evidence presented warrants consideration of an issue or issues not previously considered...

  18. The Interrelatedness of Formal, Non-Formal and Informal Learning: Evidence from Labour Market Program Participants

    Science.gov (United States)

    Cameron, Roslyn; Harrison, Jennifer L.

    2012-01-01

    Definitions, differences and relationships between formal, non-formal and informal learning have long been contentious. There has been a significant change in language and reference from adult education to what amounts to forms of learning categorised by their modes of facilitation. Nonetheless, there is currently a renewed interest in the…

  19. Digital Resource Developments for Mathematics Education Involving Homework across Formal, Non-Formal and Informal Settings

    Science.gov (United States)

    Radovic, Slaviša; Passey, Don

    2016-01-01

    The aim of this paper is to explore further an under-developed area--how drivers of curriculum, pedagogy and assessment conceptions and practices shape the creation and uses of technologically based resources to support mathematics learning across informal, non-formal and formal learning environments. The paper considers: the importance of…

  20. NON-FORMAL EDUCATION, OVEREDUCATION AND WAGES

    OpenAIRE

    SANDRA NIETO; RAÚL RAMOS

    2013-01-01

    Why do overeducated workers participate in non-formal education activities? Do they not already suffer from an excess of education? Using microdata from the Spanish sample of the 2007 Adult Education Survey, we have found that overeducated workers participate more than the rest in non-formal education and that they earn higher wages than overeducated workers who did not participate. This result can be interpreted as evidence that non-formal education allows overeducated workers to acquire new abiliti...

  1. Efficient frequent pattern mining algorithm based on node sets in cloud computing environment

    Science.gov (United States)

    Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.

    2017-11-01

    The ultimate goal of data mining is to discover the hidden information, useful for decision making, in the large databases collected by an organization. Data mining involves many tasks, and mining frequent itemsets is one of the most important for transactional databases. Such databases hold data at very large scale, so mining them consumes physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is said to be efficient only if it consumes little memory and time to mine the frequent itemsets from a given large database. With these points in mind, in this thesis we propose a system that mines frequent itemsets in a way that is optimized in terms of memory and time, using cloud computing to parallelize the process and providing the application as a service. The complete framework uses a proven efficient algorithm, the FIN algorithm, which works on Nodesets and a POC (pre-order coding) tree. To evaluate the performance of the system, we conduct experiments comparing the efficiency of the same algorithm applied standalone and in a cloud computing environment, on a real data set of traffic accidents. The results show that the memory consumption and execution time of the proposed system are much lower than those of the standalone system.
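
    For readers unfamiliar with the task, the baseline below shows level-wise frequent itemset mining on a toy transaction database. This is a plain Apriori-style sketch for illustration only; the FIN algorithm of this record obtains the same itemsets far more efficiently via Nodesets built on a pre-order coded (POC) tree, and the cloud deployment parallelizes the work further.

      from itertools import combinations
      from collections import Counter

      def frequent_itemsets(transactions, min_support):
          # Level-wise (Apriori-style) mining: count candidates of size k,
          # keep the frequent ones, and build size k+1 candidates from them.
          items = {i for t in transactions for i in t}
          k, current, result = 1, [frozenset([i]) for i in sorted(items)], {}
          while current:
              counts = Counter()
              for t in transactions:
                  tset = set(t)
                  for cand in current:
                      if cand <= tset:
                          counts[cand] += 1
              frequent = {c: n for c, n in counts.items() if n >= min_support}
              result.update(frequent)
              keys = list(frequent)
              current = list({a | b for a, b in combinations(keys, 2)
                              if len(a | b) == k + 1})
              k += 1
          return result

      db = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
      print(frequent_itemsets(db, min_support=3))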

  2. A Formal Verification Method of Function Block Diagram

    International Nuclear Information System (INIS)

    Koh, Kwang Yong; Seong, Poong Hyun; Jee, Eun Kyoung; Jeon, Seung Jae; Park, Gee Yong; Kwon, Kee Choon

    2007-01-01

    Programmable logic controllers (PLCs), industrial computers specialized for real-time applications, are widely used in diverse control systems in chemical processing plants, nuclear power plants, and traffic control systems. As a PLC is often used to implement safety-critical embedded software, rigorous safety demonstration of PLC code is necessary. Function block diagram (FBD) is a standard application programming language for the PLC and is currently being used in the development of a fully digitalized reactor protection system (RPS), called the IDiPS, under the KNICS project. The verification of FBD programs is therefore a pressing problem, and hence of great importance. In this paper, we propose a formal verification method for FBD programs: we define FBD programs formally in compliance with IEC 61131-3, translate the programs into a Verilog model, and finally verify the model using the model checker SMV. To demonstrate the feasibility and effectiveness of this approach, we applied it to the IDiPS, which is currently being developed under the KNICS project. The remainder of this paper is organized as follows. Section 2 briefly describes Verilog and Cadence SMV. In Section 3, we introduce FBD2V, a tool implemented to support the proposed FBD verification framework. A summary and conclusion are provided in Section 4

  3. Visible light-photocatalysed carbazole synthesis via a formal (4+2) cycloaddition of indole-derived bromides and alkynes.

    Science.gov (United States)

    Yuan, Zhi-Guang; Wang, Qiang; Zheng, Ang; Zhang, Kai; Lu, Liang-Qiu; Tang, Zilong; Xiao, Wen-Jing

    2016-04-14

    We successfully developed an unprecedented route to carbazole synthesis through a visible light-photocatalysed formal (4+2) cycloaddition of indole-derived bromides and alkynes. This novel protocol features extremely mild conditions, a broad substrate scope and high reaction efficiency.

  4. 40 CFR 35.938-4 - Formal advertising.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Formal advertising. 35.938-4 Section 35... advertising. Each contract shall be awarded after formal advertising, unless negotiation is permitted in accordance with § 35.936-18. Formal advertising shall be in accordance with the following: (a) Adequate...

  5. Hospital investment policy in France: pathways to efficiency and the efficiency of the pathways.

    Science.gov (United States)

    Guerrero, Isabelle; Mossé, Philippe R; Rogers, Vaughan

    2009-11-01

    This article examines the ambivalent notion of New Public Management as applied to health policy in France, by investigating the implementation of the efficiency-driven hospital investment plan, Hôpital 2012, conceived at national level but implemented through regional hospital authorities (ARHs) with formal responsibility for selecting successful funding applications. The methodology combines qualitative and quantitative analysis in order to highlight and explain discrepancies between goals and results. Despite formal adherence to objective efficiency indicators, certain decisions were based on incomplete information and others on considerations outside the initially established criteria. Competition from the private sector was perceived as a threat to public hospitals, and the public sector emerged as a major beneficiary of the investment plan. Central ministerial intervention emphasising financial and quantitative considerations led the ARHs to focus more on individual hospital performance than on wider healthcare needs. Data production became almost an end in itself, threatening to undermine the objectives it sought to pursue. Nonetheless, the extended deadlines entailed by ministerial intervention were appropriated as a resource by local actors, leading to ARH decisions which deviated from the official efficiency model but resulted in increased effectiveness, taking fuller account of local conditions.

  6. Non-Formal education in astronomy: The experience of the University of Carabobo

    Science.gov (United States)

    Falcón, Nelson

    2011-06-01

    Since 1995, the University of Carabobo, in Venezuela, has been developing a program of astronomical popularization and astronomy learning using non-formal education methods. A synopsis of the activities is presented. We also discuss some conceptual aspects of the extension of knowledge as a complementary function of research and university teaching. We illustrate the characteristics of this communication with examples of lectures and printed material. The efficiency of the heuristic arguments can be evaluated through an ethnological study. Along these lines, we show some images of the astronomical popularization activities, which reveal large and well-attended audiences of considerable chronological (and cultural) heterogeneity. We conclude that non-formal education, structured with characteristics different from the usual educational instruction, constitutes a successful strategy for the diffusion and communication of astronomy.

  7. Seniority in projection operator formalism

    International Nuclear Information System (INIS)

    Ullah, N.

    1976-01-01

    It is shown that the concept of seniority can be introduced in projection operator formalism through the use of the operator Q, which has been defined by de-Shalit and Talmi. The usefulness of seniority concept in projection operator formalism is discussed. An example of four nucleons in j=3/2 configuration is given for illustrative purposes

  8. A Mathematical Formalization Proposal for Business Growth

    Directory of Open Access Journals (Sweden)

    Gheorghe BAILESTEANU

    2013-01-01

    Full Text Available Economic sciences have known a spectacular evolution in the last century, beginning to use axiomatic methods and applying mathematical instruments as decision-making tools. The quest for formalization needs to be addressed from various angles: reducing formal entry and operating costs, increasing the incentives for firms to operate formally, reducing obstacles to their growth, and searching for inexpensive approaches through which to enforce compliance with government regulations. This paper proposes a formalized approach to business growth, based on mathematics and logic, taking into consideration the particularities of the economic sector.

  9. What Determines Firms’ Decisions to Formalize?

    OpenAIRE

    Neil McCulloch; Günther G. Schulze; Janina Voss

    2010-01-01

    In this paper we analyze the decision of small and micro firms to formalize, i.e. to obtain business and other licenses in rural Indonesia. We use the rural investment climate survey (RICS) that consists of non-farm rural enterprises, most of them microenterprises, and analyze the effect of formalization on tax payments, corruption, access to credit and revenue, taking into account the endogeneity of the formalization decision to such benefits and costs. We show, contrary to most of the liter...

  10. Using Computer Games for Instruction: The Student Experience

    Science.gov (United States)

    Grimley, Michael; Green, Richard; Nilsen, Trond; Thompson, David; Tomes, Russell

    2011-01-01

    Computer games are fun, exciting and motivational when used as leisure pursuits. But do they have similar attributes when utilized for educational purposes? This article investigates whether learning by computer game can improve student experiences compared with a more formal lecture approach and whether computer games have potential for improving…

  11. Sampling efficiency of modified 37-mm sampling cassettes using computational fluid dynamics.

    Science.gov (United States)

    Anthony, T Renée; Sleeth, Darrah; Volckens, John

    2016-01-01

    In the U.S., most industrial hygiene practitioners continue to rely on the closed-face cassette (CFC) to assess worker exposures to hazardous dusts, primarily because of its ease of use, cost, and familiarity. However, mass concentrations measured with this classic sampler underestimate exposures to larger particles throughout the inhalable particulate mass (IPM) size range (up to aerodynamic diameters of 100 μm). To investigate whether the current 37-mm inlet cap can be redesigned to better meet the IPM sampling criterion, computational fluid dynamics (CFD) models were developed, and the particle sampling efficiencies associated with various modifications to the CFC inlet cap were determined. Simulations of fluid flow (standard k-epsilon turbulence model) and particle transport (laminar trajectories, 1-116 μm) were conducted using sampling flow rates of 10 L min(-1) in slow moving air (0.2 m s(-1)) in the facing-the-wind orientation. Combinations of seven inlet shapes and three inlet diameters were evaluated as candidates to replace the current 37-mm inlet cap. For a given inlet geometry, differences in sampler efficiency between inlet diameters averaged less than 1% for particles through 100 μm, but the largest opening was found to increase the efficiency for 116 μm particles by 14% for the flat inlet cap. A substantial reduction in sampler efficiency was identified for sampler inlets with side walls extending beyond the dimension of the external lip of the current 37-mm CFC. The inlet cap based on the 37-mm CFC dimensions with an expanded 15-mm entry provided the best agreement with facing-the-wind human aspiration efficiency. The sampler efficiency was increased with a flat entry or with a thin central lip adjacent to the new enlarged entry. This work provides a substantial body of sampling efficiency estimates as a function of particle size and inlet geometry for personal aerosol samplers.

  12. Lifelong Learning to Empowerment: Beyond Formal Education

    Science.gov (United States)

    Carr, Alexis; Balasubramanian, K.; Atieno, Rosemary; Onyango, James

    2018-01-01

    This paper discusses the relevance of lifelong learning vis-à-vis the Sustainable Development Goals (SDGs) and stresses the need for an approach blending formal education, non-formal and informal learning. The role of Open and Distance Learning (ODL) in moving beyond formal education and the importance of integrating pedagogy, andragogy and…

  13. Efficient CUDA Polynomial Preconditioned Conjugate Gradient Solver for Finite Element Computation of Elasticity Problems

    Directory of Open Access Journals (Sweden)

    Jianfei Zhang

    2013-01-01

    Full Text Available The graphics processing unit (GPU) has achieved great success in scientific computations owing to its tremendous computational horsepower and very high memory bandwidth. This paper discusses an efficient way to implement a polynomial preconditioned conjugate gradient solver for the finite element computation of elasticity on NVIDIA GPUs using the compute unified device architecture (CUDA). The sliced block ELLPACK (SBELL) format is introduced to store the sparse matrix arising from the finite element discretization of elasticity with fewer padding zeros than traditional ELLPACK-based formats. Polynomial preconditioning methods are investigated with respect to both convergence and running time. On overall performance, the least-squares (L-S) polynomial method is chosen as the preconditioner in the PCG solver for the finite element equations of elasticity, as it gives the best results on different example meshes. In the PCG solver, a mixed-precision algorithm is used not only to reduce the overall computational and storage requirements and bandwidth, but also to make full use of the capacity of the GPU devices. With the SBELL format and the mixed-precision algorithm, the GPU-based L-S preconditioned CG achieves a speedup of about 7-9 over the CPU implementation.
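
    A minimal serial sketch can clarify what "polynomial preconditioning" means here: the preconditioner applies a low-degree polynomial in A, so it needs only matrix-vector products, which is what makes it GPU-friendly. The fragment below uses a truncated Neumann-series polynomial as a simple stand-in for the paper's least-squares polynomial, written in NumPy rather than CUDA, with an illustrative 1D Laplacian test matrix.

      import numpy as np

      def poly_preconditioned_cg(A, b, degree=3, tol=1e-8, max_iter=500):
          # CG with a truncated Neumann-series polynomial preconditioner,
          # M^-1 = sum_{k=0..degree} (I - D^-1 A)^k D^-1 with D = diag(A).
          d_inv = 1.0 / np.diag(A)

          def apply_prec(r):
              t = d_inv * r
              z = t.copy()
              for _ in range(degree):
                  t = t - d_inv * (A @ t)
                  z = z + t
              return z

          x = np.zeros_like(b)
          r = b - A @ x
          z = apply_prec(r)
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x = x + alpha * p
              r = r - alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  break
              z = apply_prec(r)
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      n = 200
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian, SPD
      b = np.ones(n)
      x = poly_preconditioned_cg(A, b)
      print(np.linalg.norm(A @ x - b))  # small residual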

  14. Non-Determinism: An Abstract Concept in Computer Science Studies

    Science.gov (United States)

    Armoni, Michal; Gal-Ezer, Judith

    2007-01-01

    Non-determinism is one of the most important, yet abstract, recurring concepts of Computer Science. It plays an important role in Computer Science areas such as formal language theory, computability theory, distributed computing, and operating systems. We conducted a series of studies on the perception of non-determinism. In the current research,…

  15. Formal Symplectic Groupoid of a Deformation Quantization

    Science.gov (United States)

    Karabegov, Alexander V.

    2005-08-01

    We give a self-contained algebraic description of a formal symplectic groupoid over a Poisson manifold M. To each natural star product on M we then associate a canonical formal symplectic groupoid over M. Finally, we construct a unique formal symplectic groupoid ‘with separation of variables’ over an arbitrary Kähler-Poisson manifold.

  16. Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2015-01-01

    In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates sequential release. Using SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs.

  17. Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2014-01-01

    In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates sequential release. Using SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs.

  18. PEAC: A Power-Efficient Adaptive Computing Technology for Enabling Swarm of Small Spacecraft and Deployable Mini-Payloads

    Data.gov (United States)

    National Aeronautics and Space Administration — This task is to develop and demonstrate a path-to-flight and power-adaptive avionics technology PEAC (Power Efficient Adaptive Computing). PEAC will enable emerging...

  19. Formal Vulnerability Assessment of a maritime transportation system

    International Nuclear Information System (INIS)

    Berle, Oyvind; Asbjornslett, Bjorn Egil; Rice, James B.

    2011-01-01

    World trade increasingly relies on longer, larger and more complex supply chains, where maritime transportation is a vital backbone of such operations. Long and complex supply chain systems are more prone to being vulnerable, though through reviews, no specific methods have been found to assess vulnerabilities of a maritime transportation system. Most existing supply chain risk assessment frameworks require risks to be foreseen in order to be mitigated, rather than giving transportation systems the ability to cope with unforeseen threats and hazards. In assessing cost-efficiency, the societal vulnerability versus the industrial cost of measures should be included. This conceptual paper presents a structured Formal Vulnerability Assessment (FVA) methodology, seeking to transfer the safety-oriented Formal Safety Assessment (FSA) framework into the domain of maritime supply chain vulnerability. To do so, the following two alterations are made: (1) the focus of the assessment is defined to ensure the ability of the transportation system to serve as a throughput mechanism for goods, and to survive and recover from disruptive events; (2) to cope with low-frequency high-impact disruptive scenarios that were not necessarily foreseen, two parallel tracks of risk assessment need to be pursued: the cause-focused risk assessment as in the FSA, and a consequence-focused failure mode approach.

  20. The Fourth Revolution--Computers and Learning.

    Science.gov (United States)

    Bork, Alfred

    The personal computer is sparking a major historical change in the way people learn, a change that could lead to the disappearance of formal education as we know it. The computer can help resolve many of the difficulties now crippling education by enabling expert teachers and curriculum developers to prepare interactive and individualized…

  1. An integrative computational modelling of music structure apprehension

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2014-01-01

    An objectivization of music analysis requires a detailed formalization of the underlying principles and methods. The formalization of the most elementary structural processes is hindered by the complexity of music, both in terms of profusion of entities (such as notes) and of tight interactions between a large number of dimensions. Computational modeling would enable systematic and exhaustive tests on sizeable pieces of music, yet current research covers particular musical dimensions with limited success. The aim of this research is to conceive a computational modeling of music analysis; the resulting computational model, by virtue of its generality, extensiveness and operationality, is suggested as a blueprint for the establishment of a cognitively validated model of music structure apprehension. Available as a Matlab module, it can be used for practical musicological purposes.

  2. multiPDEVS: A Parallel Multicomponent System Specification Formalism

    Directory of Open Access Journals (Sweden)

    Damien Foures

    2018-01-01

    Full Text Available Based on the multiDEVS formalism, we introduce multiPDEVS, a parallel and nonmodular formalism for discrete event system specification. This formalism provides the combined advantages of the PDEVS and multiDEVS approaches, such as excellent simulation capabilities for simultaneously scheduled events and components able to influence each other using exclusively their state transitions. We next show the soundness of the formalism by giving a construction showing that any multiPDEVS model is equivalent to a PDEVS atomic model. We then present the associated simulation procedure, usually called an abstract simulator. As a formalism well adapted to expressing cellular automata, we finally propose to compare an implementation of the multiPDEVS formalism with a more classical Cell-DEVS implementation through a fire spread application.

  3. Efficiently outsourcing multiparty computation under multiple keys

    NARCIS (Netherlands)

    Peter, Andreas; Tews, Erik; Tews, Erik; Katzenbeisser, Stefan

    2013-01-01

    Secure multiparty computation enables a set of users to evaluate certain functionalities on their respective inputs while keeping these inputs encrypted throughout the computation. In many applications, however, outsourcing these computations to an untrusted server is desirable, so that the server

  4. A Visual Formalism for Interacting Systems

    Directory of Open Access Journals (Sweden)

    Paul C. Jorgensen

    2015-04-01

    Full Text Available Interacting systems are increasingly common. Many examples pervade our everyday lives: automobiles, aircraft, defense systems, telephone switching systems, financial systems, national governments, and so on. Closer to computer science, embedded systems and Systems of Systems are further examples of interacting systems. Common to all of these is that some "whole" is made up of constituent parts, and these parts interact with each other. By design, these interactions are intentional, but it is the unintended interactions that are problematic. The Systems of Systems literature uses the terms "constituent systems" and "constituents" to refer to systems that interact with each other. That practice is followed here. This paper presents a visual formalism, Swim Lane Event-Driven Petri Nets, that is proposed as a basis for Model-Based Testing (MBT) of interacting systems. In the absence of available tools, this model can only support the offline form of Model-Based Testing.

  5. Universal Quantum Computing with Arbitrary Continuous-Variable Encoding.

    Science.gov (United States)

    Lau, Hoi-Kwan; Plenio, Martin B

    2016-09-02

    Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.

  6. The role of formal specifications

    International Nuclear Information System (INIS)

    McHugh, J.

    1994-01-01

    The role of formal requirements specification is discussed under the premise that the primary purpose of such specifications is to facilitate clear and unambiguous communications among the communities of interest for a given project. An example is presented in which the failure to reach such an understanding resulted in an accident at a chemical plant. Following the example, specification languages based on logical formalisms and notations are considered. These are rejected as failing to serve the communications needs of diverse communities. The notion of a specification as a surrogate for a program is also considered and rejected. The paper ends with a discussion of the type of formal notation that will serve the communications role and several encouraging developments are noted

  7. Quantum formalism for classical statistics

    Science.gov (United States)

    Wetterich, C.

    2018-06-01

    In static classical statistical systems the problem of information transport from a boundary to the bulk finds a simple description in terms of wave functions or density matrices. While the transfer matrix formalism is a type of Heisenberg picture for this problem, we develop here the associated Schrödinger picture that keeps track of the local probabilistic information. The transport of the probabilistic information between neighboring hypersurfaces obeys a linear evolution equation, and therefore the superposition principle for the possible solutions. Operators are associated to local observables, with rules for the computation of expectation values similar to quantum mechanics. We discuss how non-commutativity naturally arises in this setting. Other features characteristic of quantum mechanics, such as the complex structure, changes of basis, or symmetry transformations, can likewise be found in classical statistics once it is formulated in terms of wave functions or density matrices. We construct for every quantum system an equivalent classical statistical system, such that time in quantum mechanics corresponds to the location of hypersurfaces in the classical probabilistic ensemble. For suitable choices of local observables in the classical statistical system one can, in principle, compute all expectation values and correlations of observables in the quantum system from the local probabilistic information of the associated classical statistical system. Realizing a static memory material as a quantum simulator for a given quantum system is not a matter of principle, but rather of practical simplicity.
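
    To make the wave-function picture concrete, here is a minimal toy sketch (my own example, not from the paper) for a one-dimensional Ising chain: the transfer matrix propagates boundary information across hypersurfaces, and a local expectation value is computed by sandwiching an operator between forward- and backward-propagated vectors, in direct analogy to quantum mechanics.

    ```python
    import numpy as np

    # Toy example: open Ising chain of N spins with coupling beta.
    # The transfer matrix T carries boundary information site by site
    # (Heisenberg picture); equivalently, the vectors f and g below play
    # the role of wave functions evolving across hypersurfaces
    # (Schrödinger picture).
    beta, N = 0.7, 20
    s = np.array([1.0, -1.0])              # spin values s = +/-1
    T = np.exp(beta * np.outer(s, s))      # T[s, s'] = exp(beta * s * s')
    S = np.diag(s)                         # operator for the local spin

    b_left = np.array([1.0, 0.0])          # left boundary fixed to s = +1
    b_right = np.array([1.0, 1.0])         # free right boundary

    # Forward messages f[x] and backward messages g[x].
    f = [b_left]
    for _ in range(N - 1):
        f.append(T @ f[-1])
    g = [b_right]
    for _ in range(N - 1):
        g.append(T @ g[-1])
    g = g[::-1]

    # Expectation value of the spin at site x, analogous to <psi|S|psi>.
    for x in (0, 5, 10, 19):
        exp_sx = f[x] @ S @ g[x] / (f[x] @ g[x])
        print(f"<s_{x}> = {exp_sx:.4f}")   # decays toward 0 in the bulk
    ```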

  8. Bridging In-school and Out-of-school Learning: Formal, Non-Formal, and Informal Education

    Science.gov (United States)

    Eshach, Haim

    2007-04-01

    The present paper thoroughly examines how one can effectively bridge in-school and out-of-school learning. The first part discusses the difficulty in defining out-of-school learning. It proposes to distinguish three types of learning: formal, informal, and non-formal. The second part raises the question of whether out-of-school learning should be dealt with in the in-school system, in view of the fact that we experience informal learning anyway, as well as considering the disadvantages and difficulties teachers are confronted with when planning and carrying out scientific fieldtrips. The voices of the teachers, the students, and the non-formal institution staff are heard to provide insights into the problem. The third part discusses the cognitive and affective aspects of non-formal learning. The fourth part presents some models explaining scientific fieldtrip learning and, based on those models, suggests a novel explanation. The fifth part offers some recommendations on how to bridge in-school and out-of-school learning. The paper closes with some practical ideas as to how one can bring the theory described in the paper into practice. It is hoped that this paper will provide educators with insights that enable them to fully exploit the great potential that scientific field trips may offer.

  9. Numerical aspects for efficient welding computational mechanics

    Directory of Open Access Journals (Sweden)

    Aburuga Tarek Kh.S.

    2014-01-01

    Full Text Available The effect of residual stresses and strains is one of the most important parameters in structural integrity assessment. A finite element model is constructed to simulate the multi-pass mismatched submerged arc welding (SAW) used in the welded tensile test specimen. A sequentially coupled thermo-mechanical analysis is performed in ABAQUS to calculate the residual stresses and distortion due to welding. In this work, three main issues were studied in order to reduce the computation time of welding simulation, which is the major problem in computational welding mechanics (CWM). The first issue is the dimensionality of the problem: both two- and three-dimensional models are constructed for the same analysis type, and shell elements in the two-dimensional simulation show good performance compared with brick elements. Another issue is the time-integration scheme: the conventional method for calculating residual stresses uses an implicit scheme, which is expensive because the welding and cooling times are relatively long. The author shows that an explicit scheme with the mass scaling technique can be used instead, reducing the analysis time very efficiently. With this technique, it becomes possible to simulate relatively large three-dimensional structures.
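
    The efficiency gain from mass scaling follows from the stability limit of explicit integration: the stable time increment scales as element size divided by the dilatational wave speed, so artificially increasing the density raises the limit. A back-of-the-envelope sketch using generic textbook values, not the paper's model:

    ```python
    import math

    # Stability limit of explicit time integration (CFL-type condition):
    # dt_stable ~ L_min / c, with dilatational wave speed c = sqrt(E/rho)
    # (1D form; solvers such as ABAQUS use the full 3D expression).
    # Scaling the density by f**2 raises the stable increment by f, which
    # is why mass scaling cuts the cost of long welding/cooling transients.
    E = 210e9        # Young's modulus of steel [Pa]
    rho = 7850.0     # density [kg/m^3]
    L_min = 1e-3     # smallest element edge length [m]

    def stable_dt(E, rho, L_min):
        c = math.sqrt(E / rho)      # dilatational wave speed [m/s]
        return L_min / c

    dt0 = stable_dt(E, rho, L_min)
    dt_scaled = stable_dt(E, 100.0 * rho, L_min)   # mass scaling factor 100
    print(f"unscaled dt = {dt0:.3e} s")            # ~1.9e-7 s
    print(f"scaled   dt = {dt_scaled:.3e} s")      # 10x larger
    ```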

  10. Towards Formal Validation of Trust and Security of the Internet of Services

    DEFF Research Database (Denmark)

    Carbone, Roberto; Minea, Marius; Mödersheim, Sebastian Alexander

    2011-01-01

    Service designers and developers, while striving to meet the requirements posed by application scenarios, have a hard time assessing the trust and security impact of an option, a minor change, a combination of functionalities, etc., due to the subtle and unforeseeable situations and behaviors … techniques to efficiently tackle industrial-size problems. The formal verification of trust and security of the Internet of Services will significantly boost its development and public acceptance.

  11. Toward a formal ontology for narrative

    Directory of Open Access Journals (Sweden)

    Ciotti, Fabio

    2016-03-01

    Full Text Available In this paper the rationale and the first draft of a formal ontology for modeling narrative texts are presented. Building on semiotic and structuralist narratology, and on the work carried out in the late 1980s by Giuseppe Gigliozzi in Italy, the focus of my research is the concepts of character and of narrative world/space. This formal model is expressed in the OWL 2 ontology language. The main reason to adopt a formal modeling approach is that I consider the purely probabilistic-quantitative methods (now widespread in digital literary studies) inadequate. An ontology, on the one hand, provides a tool for the analysis of strictly literary texts. On the other hand (though beyond the scope of the present work), its formalization can also represent a significant contribution towards grounding the application of storytelling methods outside of scholarly contexts.

  12. Industrial Practice in Formal Methods: A Review

    DEFF Research Database (Denmark)

    Bicarregui, Juan C.; Fitzgerald, John; Larsen, Peter Gorm

    2009-01-01

    We examine the industrial application of formal methods using data gathered in a review of 62 projects taking place over the last 25 years. The review suggests that formal methods are being applied in a wide range of application domains, with increasingly strong tool support. Significant challenges remain in providing usable tools that can be integrated into established development processes; in education and training; in taking formal methods from first use to second use; and in gathering evidence to support informed selection of methods and tools.

  13. Computationally Efficient Nonlinear Bell Inequalities for Quantum Networks

    Science.gov (United States)

    Luo, Ming-Xing

    2018-04-01

    The correlations in quantum networks have attracted strong interest with new types of locality violations. The standard Bell inequalities cannot characterize the multipartite correlations that are generated by multiple sources. The main problem is that no computationally efficient method is available for constructing useful Bell inequalities for general quantum networks. In this work, we show a significant improvement by presenting new, explicit Bell-type inequalities for general networks, including cyclic networks. These nonlinear inequalities are related to the matching problem of an equivalent unweighted bipartite graph, which allows constructing a polynomial-time algorithm. For the quantum resources consisting of bipartite entangled pure states and generalized Greenberger-Horne-Zeilinger (GHZ) states, we prove the generic nonmultilocality of quantum networks with multiple independent observers using the new Bell inequalities. The violations are maximal with respect to the presented Tsirelson bound for Einstein-Podolsky-Rosen states and GHZ states. Moreover, these violations hold for Werner states or some general noisy states. Our results suggest that the presented Bell inequalities can be used to characterize experimental quantum networks.
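
    The polynomial-time ingredient mentioned in the abstract is maximum matching on an unweighted bipartite graph. The sketch below shows only that matching step, on an invented source/observer graph, using the Hopcroft-Karp implementation in networkx; the actual mapping from quantum networks to graphs and inequalities is defined in the paper.

    ```python
    import networkx as nx

    # Maximum matching on an unweighted bipartite graph, the polynomial-
    # time subproblem the abstract refers to. The sources/observers and
    # their adjacency here are made up purely for illustration.
    G = nx.Graph()
    sources = ["S1", "S2", "S3"]
    observers = ["A", "B", "C", "D"]
    G.add_nodes_from(sources, bipartite=0)
    G.add_nodes_from(observers, bipartite=1)
    G.add_edges_from([("S1", "A"), ("S1", "B"), ("S2", "B"),
                      ("S2", "C"), ("S3", "C"), ("S3", "D")])

    # Hopcroft-Karp runs in O(E * sqrt(V)) time.
    matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=sources)
    print({s: matching[s] for s in sources if s in matching})
    ```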

  14. Encoding neural and synaptic functionalities in electron spin: A pathway to efficient neuromorphic computing

    Science.gov (United States)

    Sengupta, Abhronil; Roy, Kaushik

    2017-12-01

    Present-day computers expend orders of magnitude more computational resources than the human brain to perform various cognitive and perception-related tasks that humans routinely carry out every day. This has recently resulted in a seismic shift in the field of computation, where research efforts are being directed toward developing a neurocomputer that attempts to mimic the human brain with nanoelectronic components and thereby harness its efficiency in recognition problems. Bridging the gap between neuroscience and nanoelectronics, this paper provides a review of recent developments in the field of spintronic device based neuromorphic computing. Various spin-transfer torque mechanisms that can potentially be utilized for realizing device structures mimicking neural and synaptic functionalities are described. A cross-layer perspective extending from the device to the circuit and system level is presented to envision the design of an All-Spin neuromorphic processor enabled with on-chip learning functionalities. A device-circuit-algorithm co-simulation framework calibrated to experimental results suggests that such All-Spin neuromorphic systems can potentially achieve almost two orders of magnitude energy improvement in comparison to state-of-the-art CMOS implementations.
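
    For orientation, the neural functionality such devices are engineered to mimic is commonly abstracted as a leaky integrate-and-fire neuron. Below is a minimal software sketch of that generic abstraction, not the paper's device equations.

    ```python
    import numpy as np

    # Leaky integrate-and-fire neuron: the membrane potential v integrates
    # the input current and leaks toward rest; crossing the threshold
    # emits a spike and resets v. Spintronic neurons emulate comparable
    # dynamics with magnetization instead of voltage (generic textbook
    # model, not the paper's device physics).
    def lif(current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
        v, spikes = 0.0, []
        for i in current:
            v += dt / tau * (-v + i)      # leaky integration step
            if v >= v_th:                 # threshold crossing
                spikes.append(True)
                v = v_reset               # reset after spike
            else:
                spikes.append(False)
        return spikes

    rng = np.random.default_rng(0)
    inputs = 1.5 + 0.5 * rng.standard_normal(1000)   # noisy drive, 1 s
    print(f"{sum(lif(inputs))} spikes in 1 s of simulated input")
    ```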

  15. Computer Controlled Portable Greenhouse Climate Control System for Enhanced Energy Efficiency

    Science.gov (United States)

    Datsenko, Anthony; Myer, Steve; Petties, Albert; Hustek, Ryan; Thompson, Mark

    2010-04-01

    This paper discusses a student project at Kettering University focusing on the design and construction of an energy efficient greenhouse climate control system. In order to maintain acceptable temperatures and stabilize temperature fluctuations in a portable plastic greenhouse economically, a computer controlled climate control system was developed to capture and store thermal energy incident on the structure during daylight periods and release the stored thermal energy during dark periods. The thermal storage mass for the greenhouse system consisted of a water-filled base unit, and the heat exchanger consisted of a system of PVC tubing. The control system used a programmable LabView computer interface to meet functional specifications that minimized temperature fluctuations and recorded data during operation. The greenhouse was a portable unit with a 5' x 5' footprint. Control inputs were temperature, water-level, and humidity sensors; output control devices were fan-actuating relays and water-fill solenoid valves. A graphical user interface was developed to monitor the system, set control parameters, and provide programmable data recording times and intervals.
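
    Climate control loops of this kind are typically hysteresis (bang-bang) controllers: the fan switches on above an upper setpoint and off below a lower one, so the relay does not chatter around a single threshold. The sketch below simulates such a loop with an invented temperature trace; the project itself used a LabView interface, so all names here are illustrative.

    ```python
    import math

    # Bang-bang (hysteresis) control: two thresholds keep the fan relay
    # from chattering around a single setpoint. Sensor and relay are
    # simulated stand-ins for the LabView-interfaced hardware.
    T_HIGH, T_LOW = 27.0, 24.0   # degC switching thresholds

    def control_step(temp_c, fan_on):
        """Return the new fan state for one sampling interval."""
        if not fan_on and temp_c > T_HIGH:
            return True            # vent excess heat
        if fan_on and temp_c < T_LOW:
            return False           # conserve stored thermal energy
        return fan_on              # inside the hysteresis band: hold

    # Simulated day/night temperature trace, one sample per 5 minutes.
    fan_on = False
    for minute in range(0, 24 * 60, 5):
        temp = 25.0 + 6.0 * math.sin(2 * math.pi * minute / (24 * 60))
        new_state = control_step(temp, fan_on)
        if new_state != fan_on:
            state = "ON" if new_state else "OFF"
            print(f"{minute//60:02d}:{minute%60:02d}  {temp:5.1f} degC  fan -> {state}")
        fan_on = new_state
    ```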

  16. Balancing creativity and time efficiency in multi-team R&D projects : the alignment of formal and informal networks

    NARCIS (Netherlands)

    Kratzer, Jan; Gemuenden, Hans Georg; Lettl, Christopher

    2008-01-01

    The business world is characterized by an increasing number of multi-team research and development (R&D) projects; however, managerial knowledge about how to run them successfully is scarce. The present study attempts to shed light on this kind of project by investigating the alignment of formal and informal networks.

  17. Formalization of the Resolution Calculus for First-Order Logic

    DEFF Research Database (Denmark)

    Schlichtkrull, Anders

    2016-01-01

    A formalization in Isabelle/HOL of the resolution calculus for first-order logic is presented. Its soundness and completeness are formally proven using the substitution lemma, semantic trees, Herbrand's theorem, and the lifting lemma. In contrast to previous formalizations of resolution, it …

  18. Use of Debye's series to determine the optimal edge-effect terms for computing the extinction efficiencies of spheroids.

    Science.gov (United States)

    Lin, Wushao; Bi, Lei; Liu, Dong; Zhang, Kejun

    2017-08-21

    The extinction efficiencies of atmospheric particles are essential to determining radiation attenuation and thus are fundamentally related to atmospheric radiative transfer. The extinction efficiencies can also be used to retrieve particle sizes or refractive indices through particle characterization techniques. This study first uses the Debye series to improve the accuracy of high-frequency extinction formulae for spheroids in the context of complex angular momentum theory by determining an optimal number of edge-effect terms. We show that the optimal number of edge-effect terms can be accurately obtained by comparing the results from the approximate formula with their counterparts computed from the invariant imbedding Debye series and T-matrix methods. An invariant imbedding T-matrix method is employed for particles with strong absorption, in which case the extinction efficiency is equivalent to two plus the edge-effect efficiency. For weakly absorptive or non-absorptive particles, the T-matrix results contain the interference between the diffraction and higher-order transmitted rays. Therefore, the Debye series was used to compute the edge-effect efficiency by separating the interference from the transmission in the extinction efficiency. We found that the optimal number strongly depends on the refractive index and is relatively insensitive to the particle geometry and size parameter. By building a table of optimal numbers of edge-effect terms, we developed an efficient and accurate extinction simulator that has been fully tested for randomly oriented spheroids with various aspect ratios and a wide range of refractive indices.

  19. Applying a Global Sensitivity Analysis Workflow to Improve the Computational Efficiencies in Physiologically-Based Pharmacokinetic Modeling

    Directory of Open Access Journals (Sweden)

    Nan-Hung Hsieh

    2018-06-01

    Full Text Available Traditionally, the solution to reduce parameter dimensionality in a physiologically-based pharmacokinetic (PBPK) model is through expert judgment. However, this approach may lead to bias in parameter estimates and model predictions if important parameters are fixed at uncertain or inappropriate values. The purpose of this study was to explore the application of global sensitivity analysis (GSA) to ascertain which parameters in the PBPK model are non-influential and can therefore be assigned fixed values in Bayesian parameter estimation with minimal bias. We compared the elementary effect-based Morris method and three variance-based Sobol indices in their ability to distinguish “influential” parameters to be estimated from “non-influential” parameters to be fixed. We illustrated this approach using a published human PBPK model for acetaminophen (APAP) and its two primary metabolites, APAP-glucuronide and APAP-sulfate. We first applied GSA to the original published model, comparing Bayesian model calibration results using all 21 originally calibrated model parameters (OMP, determined by the “expert judgment”-based approach) vs. the subset of original influential parameters (OIP, determined by GSA from the OMP). We then applied GSA to all the PBPK parameters, including those fixed in the published model, comparing the model calibration results using this full set of 58 model parameters (FMP) vs. the full set of influential parameters (FIP, determined by GSA from the FMP). We also examined the impact of different cut-off points used to distinguish the influential and non-influential parameters. We found that Sobol indices calculated by eFAST provided the best combination of reliability (consistency with other variance-based methods) and efficiency (lowest computational cost to achieve convergence) in identifying influential parameters. We identified several originally calibrated parameters that were not influential and could be fixed to improve computational efficiency.
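
    The workflow the abstract describes — sample the parameter space, evaluate the model, rank parameters by variance-based indices, and fix those below a cut-off — can be sketched with the SALib library on a toy function. The Ishigami test function stands in for the PBPK model here, and the cut-off value is purely illustrative; the module paths follow SALib's documented API, but treat the details as an assumption.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Variance-based GSA workflow on a toy model: sample, evaluate,
    # compute Sobol indices, then flag parameters whose total-order
    # index ST falls below a cut-off as candidates for fixing.
    problem = {
        "num_vars": 3,
        "names": ["x1", "x2", "x3"],
        "bounds": [[-np.pi, np.pi]] * 3,
    }

    X = saltelli.sample(problem, 1024)        # N * (2D + 2) sample rows
    Y = (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
         + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))   # Ishigami function

    Si = sobol.analyze(problem, Y)
    cutoff = 0.05                             # illustrative cut-off
    for name, st in zip(problem["names"], Si["ST"]):
        tag = "estimate" if st > cutoff else "fix"
        print(f"{name}: ST = {st:.3f} -> {tag}")
    ```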

  20. Efficient Algorithms for Computing the Triplet and Quartet Distance Between Trees of Arbitrary Degree

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Mailund, Thomas

    2013-01-01

    The triplet and quartet distances are distance measures to compare two rooted and two unrooted trees, respectively. The leaves of the two trees should have the same set of n labels. The distances are defined by enumerating all subsets of three labels (triplets) and four labels (quartets), respectively, and counting how often the induced topologies in the two input trees are different. In this paper we present efficient algorithms for computing these distances. We show how to compute the triplet distance in time O(n log n) and the quartet distance in time O(d n log n), where d is the maximal degree of any node in the two trees. Within the same time bounds, our framework also allows us to compute the parameterized triplet and quartet distances, where a parameter is introduced to weight resolved (binary) topologies against unresolved (non-binary) topologies. The previous best algorithm …
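
    To make the definition concrete, a brute-force triplet distance enumerates all n-choose-3 leaf triples and compares the induced rooted topologies. The naive sketch below (mine, not the paper's algorithm) takes on the order of n^3 comparisons, against the O(n log n) of the presented method.

    ```python
    from itertools import combinations

    # Brute-force triplet distance between two rooted trees over the same
    # leaf set: for every leaf triple, compare the induced topologies and
    # count disagreements. Trees are given as child -> parent dicts with
    # the root mapped to None.
    def depth_and_lca(parent):
        depth = {}
        def d(v):
            if v not in depth:
                depth[v] = 0 if parent[v] is None else d(parent[v]) + 1
            return depth[v]
        for v in parent:
            d(v)
        def lca(u, v):
            # naive LCA by walking up parent pointers
            while u != v:
                if depth[u] >= depth[v]:
                    u = parent[u]
                else:
                    v = parent[v]
            return u
        return depth, lca

    def triplet_topology(depth, lca, a, b, c):
        # Returns the "outgroup" leaf of the triple (c for topology ab|c),
        # or None for an unresolved (star) triple.
        pairs = {(a, b): c, (a, c): b, (b, c): a}
        lcas = {pair: lca(*pair) for pair in pairs}
        deepest = max(lcas, key=lambda p: depth[lcas[p]])
        if all(depth[lcas[p]] == depth[lcas[deepest]] for p in lcas):
            return None               # all three LCAs coincide: star
        return pairs[deepest]         # outgroup of the deepest cherry

    def triplet_distance(parent1, parent2, leaves):
        d1, l1 = depth_and_lca(parent1)
        d2, l2 = depth_and_lca(parent2)
        return sum(
            triplet_topology(d1, l1, *t) != triplet_topology(d2, l2, *t)
            for t in combinations(leaves, 3)
        )

    # Two rooted trees over leaves a..d: ((a,b),(c,d)) vs (((a,c),b),d).
    t1 = {"r": None, "x": "r", "y": "r", "a": "x", "b": "x", "c": "y", "d": "y"}
    t2 = {"r": None, "x": "r", "y": "x", "a": "y", "c": "y", "b": "x", "d": "r"}
    print(triplet_distance(t1, t2, "abcd"))   # -> 3 differing triples
    ```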